Validate Gitlab .gitlab-ci.yml one-liner

Who does not fight every day to follow the principle of least surprise/astonishment? I know I do, and my latest issue was related to Gitlab-CI.

I had to wait after each git push to discover whether my .gitlab-ci.yml file was valid or not.

As usual, automation is the answer. Wouldn't it be awesome if we could run this:

gitlab-ci-validate-watch

And then edit our .gitlab-ci.yml until it's valid? That would be really awesome, right?

Sadly, at the time of writing, the Gitlab-CI documentation and API require us to send our YAML file in stringified JSON format inside a content object to the /api/v4/ci/lint API endpoint. Yep, that's a lot of hard work for such a simple task.
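To make the expected payload concrete, here is a sketch of building it with plain jq (assuming jq is installed; the sample file and path below are made up for illustration). jq's --arg does all the JSON escaping for us:

```shell
# The lint endpoint wants the whole file stringified under a "content"
# key. Build it from a two-line sample .gitlab-ci.yml:
printf 'stages:\n  - test\n' > /tmp/sample-ci.yml
payload=$(jq -cn --arg yaml "$(cat /tmp/sample-ci.yml)" '{content: $yaml}')
echo "$payload"
```

You would then POST it with `curl -s --header "Content-Type: application/json" --data "$payload" https://gitlab.com/api/v4/ci/lint`.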

Fortunately we can leverage jq.node (it's like jq but WAY MORE powerful) for that along with watchexec!

If you do not have jq.node and watchexec installed, it's never too late:

npm i jq.node -g
brew install watchexec

With these two I was able to write the following helpers (don't forget to add them inside your ~/.zshrc or equivalent):

function gitlab-ci-validate(){
  DATA=$(jq.node -r js-yaml -x 'jsYaml.safeLoad | thru(x => (JSON.stringify({content: JSON.stringify(x)})))' < .gitlab-ci.yml)
  curl -s --header "Content-Type: application/json" https://gitlab.com/api/v4/ci/lint --data "$DATA" | jq.node
}

function gitlab-ci-validate-watch(){
  watchexec --watch $(pwd)/.gitlab-ci.yml 'zsh -c "source ~/.zshrc && gitlab-ci-validate"'
}

gitlab-ci-validate validates the .gitlab-ci.yml file from the current working directory using gitlab.com (it will also work with self-hosted Gitlab instances), and gitlab-ci-validate-watch runs gitlab-ci-validate every time I save .gitlab-ci.yml.

  "status": "invalid",
  "errors": [
    "jobs:update:db:script config should be a string or an array of strings"

For extra sweetness, we might want to run gitlab-ci-validate before each git push using a git pre-push hook.
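A minimal sketch of the check such a hook could perform (the lint_ok helper is hypothetical; in .git/hooks/pre-push you would feed it the output of gitlab-ci-validate and exit non-zero to abort the push):

```shell
# Hypothetical helper for a .git/hooks/pre-push script (chmod +x it):
# succeed only when GitLab's lint response says the file is valid.
lint_ok() {
  printf '%s' "$1" | grep -q '"status": *"valid"'
}

# Demonstrated on canned responses; in the real hook you would use:
#   response=$(zsh -c 'source ~/.zshrc && gitlab-ci-validate')
#   lint_ok "$response" || exit 1
lint_ok '{ "status": "valid", "errors": [] }' && echo "push allowed"
lint_ok '{ "status": "invalid" }' || echo "push blocked"
```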


docker-compose watch (a-k-a docker-compose reload) one-liner

Even if I often saw docker-compose misused...

... I do find it sometimes useful when I develop locally. But I can't bear the cmd+tab + ctrl+c + up + enter dance each time I want to reload my containers because some configuration file changed. I'm not the only one: an issue has been open on the docker-compose project since 2014 (Watch code and automatically rebuild when something changes).

So here is a one-liner that works and restarts docker-compose each time a *.yml, *.toml or *.conf file changes:

watchexec --restart --exts "yml,toml,conf" --watch . "docker-compose up"

I used watchexec (rust) but you could definitely use something else like nodemon (nodejs).

And if you wish to restart docker-compose each time files from a specific folder are updated (e.g. api/), --filter is what you are looking for:

watchexec --restart --filter "$(pwd)/api/*" --watch . "docker-compose up"

For extra sweetness — because who wants to remember this one-liner forever? — I put the function below in my ~/.zshrc

function docker-compose-watch() {
  local args;
  if [[ $1 == "help" ]] || [[ $1 = "--help" ]]; then
    watchexec --help | grep -A 3 "OPTIONS:";
    return;
  fi
  args='--filter "*/docker-compose.yml"' && [[ $1 ]] && args="$@";
  eval watchexec --restart $args -w $(pwd) "docker-compose up"
}

alias docker-compose-reload=docker-compose-watch;


docker-compose-watch --help
    -e, --exts         Comma-separated list of file extensions to watch (js,css,html)
    -f, --filter ...      Ignore all modifications except those matching the pattern
    -i, --ignore ...      Ignore modifications to paths matching the pattern
docker-compose-watch -e '*.js' -i './api'
Starting api_worker_1
Starting api_postgrest_1
Attaching to api_worker_1, api_postgrest_1
[... updating a file ...]
Gracefully stopping... (press Ctrl+C again to force)
Stopping api_postgrest_1 ... done
Stopping api_worker_1 ... done
Starting api_worker_1
Starting api_postgrest_1

Note: I definitely prefer restarting docker-compose up after each file change (with a soft shutdown) to having to first remember to run docker-compose up, then run watchexec ... "docker-compose restart", and finally ctrl+c + docker-compose down.


Merging RedisWeekly into RedisWatch

I'm copy/pasting here the email sent to RedisWeekly subscribers for archival purposes.

What a long way since the first issue of RedisWeekly on the 16th of August 2013!

As you may have seen over the past weeks, I was not able to send you your weekly dose of Redis news. I have therefore decided to leave RedisWeekly in the good hands of Redis Watch curator Itamar Haber from Redis Labs, and I would highly recommend you stick with Redis Watch.

If you do not unsubscribe from this mailing list before the 9th of April, you will be automatically subscribed to Redis Watch.

As for me, I will continue to share and spread what I find interesting on my Twitter account, and I hope to meet some of you at a tech conference!

Thank you for all these years of your trust in RedisWeekly.


Continuous Deployment with Gitlab, Gitlab-CI and CleverCloud

Recent events aside, Gitlab and Gitlab-CI make a great integrated forge for software development. I recently decided to migrate Image-Charts to it, as well as the soon-to-be-announced-new-SaaS, away from the old Bitbucket + Jenkins workflow used at Redsmin and Bringr.

As a side note, JenkinsFile never grew on me; I never liked it. It's too verbose, and I definitely prefer the configuration approach (limited feature set) over the code approach (unlimited, but can quickly get messy).

I first tried to set up a private deploy SSH key as an environment variable, then inject it using GIT_SSH_COMMAND, then hack the known_hosts file to fix the sadly well-known Host key verification failed issue, aaaand don't forget to chmod 400! Phew, that's a lot of work for something that should be easy to do. Thankfully there is a simpler way!

You will first need to install clever-tools locally (if you have not already). We could do the following steps without it, but doing the OAuth dance through the API is not as easy as using the Clever CLI.

npm i clever-tools -g

Then login:

clever login

And now the good part:

cat ~/.cleverrc
{"token":"7ea753c8cb23000000000000000","secret":"02000000700000000047000200"} // copy token and secret value

Open the Gitlab-CI project CI/CD settings, add CLEVER_TOKEN and CLEVER_SECRET environment variables with the values you just copied.

Finally edit your project .gitlab-ci.yml like so:

deploy:clevercloud:
  image: node:6-wheezy
  stage: deploy
  only:
    - /master/
  script:
    - git remote add clever https://$CLEVER_TOKEN:$CLEVER_SECRET@push-par-clevercloud-customers.services.clever-cloud.com/app_YOUR_APPLICATION_ID.git
    - git push --verbose --force clever master 2>&1 | grep -e 'remote:' -e '->'

Let's take it step by step:

  • deploy:clevercloud: the job name, could be deploy or whatever you want
  • image: node:6-wheezy: I used this docker image in the previous steps because the app is in NodeJS; you can use any docker image you want as long as it has git installed.
  • stage: deploy: gitlab-ci pipeline stage.
  • only: - /master/: restrict this job to the master branch.
  • git remote add clever ...: we first need to add CleverCloud remote git repository to the build local git repository.
  • ... https://$CLEVER_TOKEN:$CLEVER_SECRET@push-par-clevercloud-customers.services.clever-cloud.com/...: this is where the magic happens: instead of git+ssh we use the https transport, authenticating via basic auth (token:secret), so we don't need to set up a private SSH key.
  • ... clevercloud-customers.services.clever-cloud.com/app_YOUR_APPLICATION_ID.git ...: don't forget to set your APPLICATION_ID.
  • ... git push --force clever master ...: I always use --force in CD pipelines; I don't want anyone to bypass the CD pipeline. Bypassing tests to fix the production environment directly is often a source of longer outages.
  • ... 2>&1 | grep -e 'remote:' -e '->' ...: this part is very important; without it, token:secret will leak into the job logs and even into emails in case of job failure.
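To see why the final grep matters, here is a simulated run (the push output lines are made up for illustration): the line echoing the authenticated remote URL is dropped, while 'remote:' lines and the ref-update line survive into the job log.

```shell
# Simulated `git push --verbose` output piped through the same filter:
# only lines matching 'remote:' or '->' are kept, so the line that
# contains token:secret never reaches the log.
printf '%s\n' \
  'Pushing to https://TOKEN:SECRET@push-par-clevercloud-customers.services.clever-cloud.com/app_x.git' \
  'remote: build started' \
  '   abc123..def456  master -> master' \
  | grep -e 'remote:' -e '->'
```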

That's it! It only took 2 lines in a Gitlab-CI job to automatically deploy your project on CleverCloud.

Deploying to CleverCloud is only one side of the story, I hope to share later the Gitlab-CI pipeline I use to deploy the soon-to-be-announced-new-SaaS with Kubernetes on Google Container Engine.

Made with ♥ on a hot August night from an airplane, the 19th of March 2017.