6/03/2020

FinOps - Reducing Google Cloud Storage costs

🤯 Inter-region network transfer can be a real PITA.

My latest Google Cloud invoice for my Image-Charts SaaS was ~40% inter-region transfer. 💸💸

 - Why? 🧐

 - I'm glad you asked 😍!

The GCP billing report confirms that this 40% came from Google Cloud Storage.

Drilling down, I saw that the main costs were "Storage egress between NA and EU" and "Storage egress between EU and APAC".

The storage/sent-bytes-per-location graph confirms it: I've sent more than 3 TB of data from the EU to the Asia (APAC) and USA (NA) clusters.

WHYYYY 😭?

Because Docker images 🐳 are stored in GCP EU (eu.gcr.io) and are downloaded at nearly every Kubernetes node auto-scaling-up step. And Image-Charts scales. Like, a lot.

Yesterday I updated the @imagecharts continuous delivery pipeline to push images to all 3 locations (EU + Asia + USA) instead of one (EU), and I already see improvements 👍🔄
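For reference, here is a sketch of what that continuous delivery step can look like; the image name, tag and GCP project are hypothetical placeholders:

# push the same image to the three regional Container Registry
# hostnames so each region's Kubernetes nodes pull it locally
IMAGE="image-charts-api"
TAG="$(git rev-parse --short HEAD)"

for REGISTRY in eu.gcr.io us.gcr.io asia.gcr.io; do
  docker tag "$IMAGE:$TAG" "$REGISTRY/my-gcp-project/$IMAGE:$TAG"
  docker push "$REGISTRY/my-gcp-project/$IMAGE:$TAG"
done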

Conclusion: 20x Google Cloud Storage cost reduction!

5/19/2020

How to automatically activate PostgreSQL Row Level Security on tables with at least one policy attached

Row level security is an awesome feature that lets you control how your database (PostgreSQL in my case) manages access to each row of a table, based on policies declared upfront. It's also really useful when you expose your database through a REST API with a gateway like PostgREST. But I've already talked enough about that :).

I often forget to add the ALTER TABLE schema.table ENABLE ROW LEVEL SECURITY; statement when I declare row level security policies. Do you?
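As a reminder of the boilerplate in question, here is a minimal, hypothetical example (assuming a PostgREST-style JWT claim): the policy alone does nothing until row level security is enabled on the table.

-- a hypothetical policy: users can only read their own contracts
create policy contract_access_policy on contract_manager.contract
    for select
    using (owner_id = current_setting('request.jwt.claim.user_id')::uuid);

-- ...and the statement I keep forgetting, without which the
-- policy above is simply ignored
alter table contract_manager.contract enable row level security;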

Let's use our database's awesome introspection features to list the tables that have access policies attached, and then automatically activate row level security on them. The SQL query below lists these tables and displays whether or not row level security is activated.

-- list every table that has at least one policy attached,
-- along with its current row level security status
select pg_class.oid,
       pg_namespace.nspname || '.' || pg_class.relname as schema_table,
       pg_policy.polname as policy_name,
       pg_class.relrowsecurity as has_row_level_security_enabled
from pg_catalog.pg_policy
       inner join pg_catalog.pg_class on pg_class.oid = pg_policy.polrelid
       inner join pg_catalog.pg_namespace on pg_class.relnamespace = pg_namespace.oid;
  oid   |       schema_table        |           policy_name            | has_row_level_security_enabled
--------+---------------------------+----------------------------------+--------------------------------
 369657 | fsm.machine               | fsm_machines_access_policy       | true
 369745 | iam.user                  | user_access_policy               | true
 369803 | actor.company             | company_access_policy            | true
 369803 | actor.company             | company_access_policy_for_update | true
 369842 | contract_manager.contract | contract_access_policy           | false

From the query output we observe that the contract_manager.contract table does have an associated access policy, contract_access_policy, but row level security is not enabled on the table.

Let's now enable row level security for each table where at least one policy was defined:

-- directly flip the relrowsecurity flag in the catalog for every
-- table that has at least one policy but RLS still disabled
-- (writing to pg_catalog requires superuser privileges)
update pg_catalog.pg_class
set relrowsecurity = true
where pg_class.oid in (select pg_class.oid
                       from pg_catalog.pg_policy
                       inner join pg_catalog.pg_class on pg_class.oid = pg_policy.polrelid
                       where pg_class.relrowsecurity = false);

This is it! We've activated row level security (not in FORCE mode) on every table with attached policies. No more mistakes. No more boilerplate \o/
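Side note: if running as superuser is not an option, here is a sketch that achieves the same result through standard DDL, driven by the same catalogs:

do $$
declare
    tbl record;
begin
    -- find every table with at least one policy but row level
    -- security still disabled
    for tbl in
        select pg_namespace.nspname as schema_name,
               pg_class.relname as table_name
        from pg_catalog.pg_policy
        inner join pg_catalog.pg_class on pg_class.oid = pg_policy.polrelid
        inner join pg_catalog.pg_namespace on pg_class.relnamespace = pg_namespace.oid
        where pg_class.relrowsecurity = false
        group by 1, 2
    loop
        execute format('alter table %I.%I enable row level security',
                       tbl.schema_name, tbl.table_name);
    end loop;
end$$;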

4/09/2020

PostgREST "response.headers guc must be a JSON array composed of objects with a single key and a string value" error

Yep. You are wondering why it works locally but not in production, right? Or maybe why PostgREST's login feature is not working at all?

The response.headers guc must be a JSON array composed of objects with a single key and a string value error often means that a call was made to set a header field but the value to set was empty. Take a look at your settings.secrets table. If it's empty, that's the problem.

At least on the PostgREST-starter-kit, the settings.secrets table should contain at least these 2 rows:

jwt_secret   | your_secret_here
jwt_lifetime | 3600
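To fix it, insert the missing rows; a sketch assuming the starter kit's key/value column layout:

-- hypothetical values: use a strong random jwt_secret in production
insert into settings.secrets (key, value)
values ('jwt_secret', 'your_secret_here'),
       ('jwt_lifetime', '3600');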

Looking for some guidance on PostgREST? How to set up CI/CD with it? Which test strategy to use? As a CTO I've built multiple SaaS products with it underneath and now help tech teams build fast, reliable and safe APIs with PostgREST and PostgreSQL. Hire me on CodeMentor or Malt!

3/03/2020

How to expose all public stored functions in PostgREST/SubZero

DO $$
    DECLARE
        -- every stored function declared in the 'api' schema
        schema_name INFORMATION_SCHEMA.routines.routine_schema%TYPE = 'api';
        fns CURSOR FOR
            select routine_name from INFORMATION_SCHEMA.routines WHERE routine_schema = schema_name;
    BEGIN
        FOR fn_record IN fns LOOP
            -- format with %I safely quotes identifiers; note that an
            -- overloaded function would need an explicit argument list
            EXECUTE format('grant execute on function %I.%I to anonymous', schema_name, fn_record.routine_name);
        END LOOP;
    END$$;

This will expose all stored functions from the api public schema in the swagger/openapi specification generated by PostgREST/SubZero. Of course, ensure that all underlying private tables have row-level security enabled to stay secure.
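Once granted, each function becomes callable through PostgREST's /rpc endpoint; a quick check (hypothetical function name, default PostgREST port):

# call the hypothetical api.my_function through PostgREST
curl -X POST http://localhost:3000/rpc/my_function \
     -H "Content-Type: application/json" \
     -d '{"some_parameter": "some_value"}'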

How to expose all public views in PostgREST/SubZero

DO $$
    DECLARE
        -- every view declared in the 'api' schema
        schema_name INFORMATION_SCHEMA.views.table_schema%TYPE = 'api';
        views CURSOR FOR select table_name from INFORMATION_SCHEMA.views WHERE table_schema = schema_name;
    BEGIN
        FOR view_record IN views LOOP
            -- format with %I safely quotes identifiers
            EXECUTE format('grant select, insert, update, delete on %I.%I to anonymous', schema_name, view_record.table_name);
        END LOOP;
    END$$;

This will expose all views from the api public schema in the swagger/openapi specification generated by PostgREST/SubZero. Again, ensure that all underlying private tables have row-level security enabled.

2/26/2020

Validate an OpenAPI or Swagger API definition from a GitLab CI test step

Let's say you've built the $BUILD_IMAGE container image at the build step. I did this on a NodeJS-based project but it will work with other technologies as well.

check-openapi-contract:
  stage: test
  retry: 1
  timeout: 15m
  script:
    # start the API server in the background
    - docker run --name=my-container -d -i -p 8080:8080 --rm $BUILD_IMAGE npm start
    # wait until the server answers with a 200 on /swagger.json
    - bash -c 'while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' localhost:8080/swagger.json)" != "200" ]]; do sleep 5; done'
    # retrieve the generated definition and validate it
    - docker exec -i my-container curl http://localhost:8080/swagger.json -o ./swagger.json
    - docker exec -i my-container npx swagger-cli validate ./swagger.json

So what do we do? We start the server, retrieve the swagger.json (or openapi.json), and leverage the swagger-cli validate command to ensure our definition is valid, failing the test step if it is not. Nothing. More.
