Immutable Single Page Application – wrap frontend with docker and reuse it on many environments

Problem overview

Docker has conquered the world, and the basic question is: how do we place a frontend application inside a container? In the Docker world we want to build an artifact (image) once and reuse it in many environments. Such immutability of images gives us guarantees about consistency between environments, so our staging can be as similar to production as possible – that's good for the stability of our system, because we can be sure that code deployed on staging will behave the same on production. To be able to run one artifact in different environments, we usually extract all env-specific configuration into environment variables or ConfigMaps, which are mounted at runtime as config files and contain the information required to run in a particular environment, e.g. db connection, secrets, public domain name etc.

For backend services this is really simple, as they can read environment variables at runtime, created by the DevOps team – but what about the frontend? Here the situation is not so trivial: usually we can pass environment variables to the frontend only at build time, and those variables end up as plain strings in static js files (e.g. in React). That means we need a separate frontend build pipeline for each environment, like frontend-development, frontend-staging, frontend-production, frontend-client1, etc. In such a situation we end up with many different artifacts built from one codebase, which is not a good place to be – please visit the linked site if you want to know more.

Implementation of solution

We need the possibility to configure the frontend at runtime – before starting the static file hosting, we must configure the frontend files so that they work with the environment on which they are hosted at the moment. With docker that's really easy – the full source code explained in this article can be found there:

Let’s say we have a simple SPA application which is divided into an index.html file as the entrypoint and a script.js where the whole site lives. A common case is injecting the URL of the backend API, which can live in different places in different situations, e.g.:

  • locally I want to use http://localhost:port as the API URL
  • maybe I want to test the frontend on a mobile device in the same LAN as my PC, so I want to inject http://my_pc_ip:port as the API URL
  • maybe I want to create Review Apps, so my API URL will look like
  • on normal envs I just want to use

As an example we can use this simple html file, which just makes a place to show the API URL and includes our dummy script:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
</head>
<body>
    <div>I'm gonna make shots to <span id="foo" style="color: red"> </span></div>
    <script src="script.js"></script>
</body>
</html>

Source code of script.js

window.onload = function (ev) {
    var element = document.getElementById("foo")
    element.innerHTML = ? // of course that won't work – we must think about what to place here
}

The Dockerfile would look like this:

FROM nginx:1.15.7

ADD index.html /usr/share/nginx/html/index.html

ADD script.js /usr/share/nginx/html/script.js

CMD nginx -g "daemon off;"

So we must consider how to pass an environment variable into static files when the container starts. The simplest solution is to use this trick in the CMD section of the Dockerfile:

CMD sed -i "0,/#API_URL_TOKEN#/{s@#API_URL_TOKEN#@${API_URL}@}" /usr/share/nginx/html/index.html && nginx -g "daemon off;"

This sed command looks for the first occurrence of #API_URL_TOKEN# in the index.html file (the 0,/…/ address limits the substitution range to the first match) and replaces it with the API_URL environment variable. Using “@” as the delimiter in the sed command is very important: with the standard “/” we would have a conflict with the protocol part of the url (https://). After this configuration step, nginx starts.
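The substitution can be tried locally, without docker – a minimal sketch (the /tmp path and the example URL are purely illustrative):

```shell
# Create a sample file containing the token twice, like the real index.html
# (first occurrence in the config script, second in the fallback check)
printf 'apiUrl = "#API_URL_TOKEN#"\nif (apiUrl === "#API_URL_TOKEN#")\n' > /tmp/index.html

# Simulate the env variable that docker would inject with -e API_URL=...
API_URL="https://api.example.com"

# Replace only the FIRST occurrence; "@" as delimiter avoids a clash
# with the "//" in the protocol part of the URL
sed -i "0,/#API_URL_TOKEN#/{s@#API_URL_TOKEN#@${API_URL}@}" /tmp/index.html

cat /tmp/index.html
```

After running it, the first line holds the real URL while the second line keeps the token untouched.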

Then in the head section of index.html we should add the following script:

<script>
    apiUrl = "#API_URL_TOKEN#"
    if (apiUrl === "#API_URL_TOKEN#") {
        apiUrl = "localhost"
    }
</script>

So we just place a global variable with apiUrl, with the token as its value. As we replace only the first occurrence, we can write a condition: if the replacement did not happen, just set defaults for local development.

Now we can just place

element.innerHTML = apiUrl

and use the following commands to build and test our POC:

docker build -t jakubbujny/immutable-spa .
docker run -p 80:80 -e API_URL= -it jakubbujny/immutable-spa

These commands build the docker image and run it with the API_URL env variable, which will be injected into the index.html file. The effect:
[Screenshot: the page in a browser showing the injected API URL]

Dedicated to O.S. 😉

GitOps – declarative Continuous Deployment on Kubernetes in simple steps


In this article I’m going to show you a very basic and simple example describing the GitOps concept – a new pattern in the DevOps world which helps to keep infrastructure stable and more recoverable. By using git as an operations tool we can see the whole history of changes and can say who changed something and when. As an example I want to show you a really simple Continuous Deployment pipeline for docker images on a Kubernetes cluster.

Main assumptions are:

  • The git repository is the single source of truth about the state of the system
  • If the state changes in the repository, it will be automatically synchronized

Proposed architecture:

[Diagram: proposed GitOps CD architecture]

Deployment design

Let’s say we have a small company with 3-4 projects managed by a single DevOps team on the same infrastructure and Kubernetes cluster. Every project is based on a microservices architecture with 4-5 services. Every project should have at least 2-3 environments:

  • Development – with a CD pipeline from the development branch of every microservice
  • Staging/Production – stable environments where microservices are deployed only on demand, when management says that a version is stable, well tested and ready to handle clients. In our example, staging and production are the same from an implementation point of view.

Docker images

To keep a simple mapping between the source code and the artifact deployed on an environment, docker images should be tagged with the git commit hash – an ID which lets us easily identify which code is deployed, which is really helpful when debugging complex environments. We can achieve that easily in the CI pipeline which produces the docker images, using a script like:

HEAD=$(git rev-parse HEAD)

docker build -t <registry/image>:${HEAD} .

docker push <registry/image>:${HEAD}
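Because the tag is just the commit hash, the mapping works both ways – given the tag of a running image you can check out exactly the code it was built from. A local sketch (the repo path, file and commit message are made up):

```shell
# A throwaway local repo standing in for a microservice
rm -rf /tmp/auth-service && mkdir -p /tmp/auth-service && cd /tmp/auth-service
git init -q .
git config user.email "ci@example.com" && git config user.name "ci"
echo "v1" > app.txt && git add . && git commit -q -m "first version"

# What the CI pipeline would use as the docker image tag
HEAD=$(git rev-parse HEAD)

# Later, while debugging: the tag read from a running environment leads
# straight back to the exact commit the image was built from
git checkout -q "$HEAD"
```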

Environments state repository

The heart of the GitOps concept is the state repository, which should be the source of truth for the whole system. In case of failure we should be able to recover the system using the configuration stored in the git repository. In our example we will use a simpler concept – store only the git hash of each microservice, representing the version of the application deployed on a particular environment. That makes our infrastructure largely reproducible, but not 100%, as deployment configuration like environment variables can change the behavior of the system. In the full GitOps pattern we would also store in this repository the deployment configuration, like kubernetes yaml files, and infrastructure descriptors like terraform and ansible files, etc. That’s better but much more complicated, as changes pushed to the repository should be automatically synchronized by applying the diff.

Repository structure

[Screenshot: state repository structure]

So every deployment has its own directory, whose name is also the name of the namespace in the Kubernetes cluster. Every microservice in a particular deployment has one file where its git commit hash is stored. There is also one special file, images_path, which contains the docker image path. The docker image name with tag is constructed using the following pattern:

content of images_path file (e.g. + deployment file name (e.g. authentication) + “:” + content of deployment file (e.g. 36a03b3f4c8ba4fc0bdcc529450e557ae08c12f2)
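The pattern above can be sketched in shell (the registry prefix, service name and hash are example values):

```shell
# Simulate one deployment directory from the state repository
rm -rf /tmp/project1-development && mkdir -p /tmp/project1-development
cd /tmp/project1-development

# images_path holds the registry prefix; one *.deployment file per microservice
echo "registry.example.com/project1/" > images_path
echo "36a03b3f4c8ba4fc0bdcc529450e557ae08c12f2" > authentication.deployment

# Build the full image reference: prefix + service name + ":" + commit hash
IMAGES_PATH=$(cat images_path)
DEPLOYMENT_NAME=$(echo authentication.deployment | sed 's/\.deployment//')
DEPLOYMENT_HASH=$(cat authentication.deployment)
IMAGE="${IMAGES_PATH}${DEPLOYMENT_NAME}:${DEPLOYMENT_HASH}"
echo "$IMAGE"
```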


There is also one special directory, sync-agent, which will be described in the next paragraph.

Sync agent

To avoid making manual updates on an environment, we need a synchronization agent which reads the state of the repository and applies changes to the deployments. To achieve that we will use a simple CronJob on Kubernetes which runs periodically and uses kubectl to update the image of each deployment. The sync agent is created per Kubernetes namespace to improve isolation between environments. So first we must create proper RBAC permissions to allow our job to update deployments.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: sync-agent
  namespace: project1-development
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: sync-agent
  namespace: project1-development
rules:
- apiGroups: ["extensions"]
  resources: ["deployments"]
  verbs: ["get","list","patch","update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: sync-agent
  namespace: project1-development
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: sync-agent
subjects:
- kind: ServiceAccount
  name: sync-agent
  namespace: project1-development

In the above configuration we create a service account for sync-agent and give it permissions to manipulate deployments within its namespace.

The next important thing is to create a secret with a deploy key – the deploy key should be configured with read-only access to the state repository, so the sync agent can clone the repo and read the deployment files.

apiVersion: v1
kind: Secret
metadata:
  name: sync-agent-deploy-key
  namespace: project1-development
type: Opaque
data:
  id_rsa: <base64-encoded private deploy key>

The final part is to create the sync-agent CronJob, which will run every minute and perform the synchronization:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: sync-agent
  namespace: project1-development
spec:
  schedule: "* * * * *"
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 30
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: sync-agent
          containers:
          - name: sync-agent
            image: lachlanevenson/k8s-kubectl:v1.10.3
            command:
            - /bin/sh
            - -c
            - apk update &&
              apk add git &&
              apk add openssh &&
              mkdir -p /root/.ssh && cp /root/key/id_rsa /root/.ssh/id_rsa &&
              chmod 0600 /root/.ssh/id_rsa &&
              ssh-keyscan -t rsa <git-host> >> /root/.ssh/known_hosts &&
              mkdir -p /deployment && cd /deployment &&
              git clone <state-repo-ssh-url> &&
              cd environments-state/sync-agent &&
              NAMESPACE=project1-development ./<sync-script>
            volumeMounts:
            - name: sync-agent-deploy-key
              mountPath: /root/key/
          volumes:
          - name: sync-agent-deploy-key
            secret:
              secretName: sync-agent-deploy-key
          restartPolicy: Never

The command section is a bit hacky: as we use a public image with kubectl, we must install additional tools at the start of the container to keep the example simple – in real usage we should probably build a docker image with the pre-installed tools and push it to some private registry.

Comments:
  • schedule – run the job every minute
  • apk update/add – install the required tools (git, openssh)
  • cp + chmod – the ssh private key is mounted as a file, but we must copy it to the proper place and set proper permissions to be able to use it
  • ssh-keyscan – add the git host to the known hosts file so git will be able to clone the repo
  • git clone + cd – clone the repository and enter the sync-agent dir where the sync script is located

Code of the sync script:

#!/usr/bin/env sh
set -e
set -x

if [ -z ${NAMESPACE+x} ]; then echo "NAMESPACE env var is empty! Cannot proceed"; exit 1; fi

# the deployment files live in the directory named after the namespace,
# next to the sync-agent directory where this script runs
cd ../${NAMESPACE}

IMAGES_PATH=$(cat images_path)
for deployment_file in *.deployment; do
    DEPLOYMENT_HASH=$(cat $deployment_file)
    DEPLOYMENT_NAME=$(echo $deployment_file | sed 's/\.deployment//')
    # idempotent: kubernetes does nothing if the image is already set to this value
    kubectl -n ${NAMESPACE} set image deployment/${DEPLOYMENT_NAME} ${DEPLOYMENT_NAME}=${IMAGES_PATH}${DEPLOYMENT_NAME}:${DEPLOYMENT_HASH}
done

The code is really simple – the script takes NAMESPACE as a parameter and then iterates over all deployment files, running kubectl set image on every deployment. Kubernetes does nothing when you try to set an image to the value already present in the deployment, so running such a set many times is idempotent. When a new image hash appears, kubectl set image causes a rolling upgrade of the deployment.

Deployment procedure

The whole infrastructure is in place, so now, to create a CD pipeline from the development branch, we just need to add code which will:

  • read the current git HEAD hash
  • tag the docker image with the git HEAD hash and push it
  • commit the new hash to the proper file in the state repository and push it
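The third step – committing the new hash to the state repository – can be sketched like this (the local path stands in for a cloned state repo; the service name and hash are examples, and the docker build/push plus the final git push are omitted):

```shell
# Local stand-in for the cloned state repository
rm -rf /tmp/environments-state
mkdir -p /tmp/environments-state/project1-development
cd /tmp/environments-state
git init -q .
git config user.email "ci@example.com" && git config user.name "ci"

# Hash of the code that was just built and pushed as a docker image
HEAD_HASH="36a03b3f4c8ba4fc0bdcc529450e557ae08c12f2"

# Overwrite the deployment file for the service and commit the change;
# in a real pipeline this would be followed by: git push
echo "$HEAD_HASH" > project1-development/authentication.deployment
git add . && git commit -q -m "Deploy authentication ${HEAD_HASH}"
```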

After these actions we can expect that within about a minute the sync-agent will clone the new state and perform the synchronization on the Kubernetes cluster.

To make an on-demand deployment on a stable environment, we can just clone the repository on our PC, place the proper hashes in the files manually, and again the sync-agent will do the work for us.