Provision microservice’s pipeline on Jenkins using CustomResourceDefinition and Operator on Kubernetes

Overview

In this article I want to show you how to create a custom resource in Kubernetes, so that provisioning a CI/CD pipeline on Jenkins for a microservice becomes a matter of creating just another Kubernetes resource. To achieve that goal we will use the operator-sdk CLI and write Go code that integrates with Jenkins through its API.

Everything will happen in a local environment on Minikube – Jenkins will also be deployed there using the official Helm chart. Repositories will be created on Github and Dockerhub. Github repositories related to this article:

https://github.com/jakubbujny/article-jenkins-pipeline-crd

https://github.com/jakubbujny/article-microservice1

https://github.com/jakubbujny/article-microservice2

Jenkins deployment on Kubernetes

Jenkins will be deployed on Kubernetes using the official Helm chart, which already contains the Kubernetes plugin to spawn build agents as separate pods.

First we should start Minikube with increased memory, as Jenkins and its agents are Java processes and consume quite a lot of memory:

minikube start --memory=4096

After it starts we can deploy Jenkins using the following script:

#!/usr/bin/env bash

helm init --wait

helm install \
 --name jenkins stable/jenkins \
 --set master.csrf.defaultCrumbIssuer.enabled=false \
 --set master.tag=2.194 \
 --set master.serviceType=ClusterIP \
 --set master.installPlugins[0]="kubernetes:1.18.1" --set master.installPlugins[1]="workflow-aggregator:2.6" --set master.installPlugins[2]="credentials-binding:1.19" --set master.installPlugins[3]="git:3.11.0" --set master.installPlugins[4]="workflow-job:2.33" \
 --set master.installPlugins[5]="job-dsl:1.76"

Notes on the options:

  • master.csrf.defaultCrumbIssuer.enabled=false – disabling CSRF keeps the example simpler, as we don't have to deal with issuing and sending crumbs in API requests
  • master.serviceType=ClusterIP – we must change the Jenkins service type because by default it is LoadBalancer, which won't work on Minikube
  • master.installPlugins[0]..[4] – these are the default plugins of the Helm chart, passed in this awkward form because of a Helm limitation – more info: https://stackoverflow.com/questions/48316330/how-to-set-multiple-values-with-helm
  • master.installPlugins[5]="job-dsl:1.76" – additionally we need the Job DSL plugin, which is described in the next section

After those operations Jenkins should start – to access it we can use the following command:

kubectl port-forward svc/jenkins 8080:8080

kubectl will create a proxy for us so we can see the Jenkins UI at localhost:8080. The login is admin and the password can be obtained with the following command:

printf $(kubectl get secret --namespace default jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo

Seed job

The main concept is based on a seed job – a Jenkins Job DSL script which provisions the pipelines for the microservices. The seed job contains the following code:

// the "projects" seed job parameter is a comma-separated list of microservices, maintained by the operator
projects.split(',').each { project ->
  pipelineJob(project) {
    triggers {
      scm("* * * * *")
    }
    definition {
      cpsScm {
        scm {
          git {
            remote {
              url("https://github.com/jakubbujny/article-${project}.git")
            }
            branch("*/master")
          }
        }
        lightweight()
        scriptPath('Jenkinsfile.groovy')
      }
    }
  }
}

The projects variable is provided as a job parameter – that parameter will be modified by the CRD Operator on Kubernetes whenever a new resource is created.
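
For illustration, the relevant fragment of the seed job's config.xml (shipped as build/seed.xml in the operator image, described later) could look roughly as below – the exact XML is not included in this article, so treat the element layout as an assumption based on how Jenkins serializes a string parameter:

<hudson.model.ParametersDefinitionProperty>
  <parameterDefinitions>
    <hudson.model.StringParameterDefinition>
      <name>projects</name>
      <description>Comma-separated list of microservices</description>
      <defaultValue>default</defaultValue>
    </hudson.model.StringParameterDefinition>
  </parameterDefinitions>
</hudson.model.ParametersDefinitionProperty>

The operator later rewrites the content of the defaultValue element to keep that list of microservices up to date.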

For each project (microservice) the Job DSL script creates a pipeline job with the Github project as its source. That pipeline uses the Jenkinsfile located in the microservice's repository and has an SCM polling trigger, so the repository is polled every minute but the pipeline is triggered only when changes are detected.

Jenkinsfile.groovy source

microserviceName = "microservice1"

pipeline {
    agent {
        kubernetes {
            //cloud 'kubernetes'
            label 'mypod'
            yaml """
apiVersion: v1
kind: Pod
spec:
  serviceAccountName: cicd
  containers:
  - name: docker
    image: docker:1.11
    command: ['cat']
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  - name: kubectl
    image: ubuntu:18.04
    command: ['cat']
    tty: true
  volumes:
  - name: dockersock
    hostPath:
      path: /var/run/docker.sock
"""
        }
    }
    stages {
        stage('Build Docker image') {
            steps {
                checkout scm
                container('docker') {
                    script {
                        def image = docker.build("digitalrasta/article-${microserviceName}:${BUILD_NUMBER}")
                        docker.withRegistry( '', "dockerhub") {
                            image.push()
                        }
                    }
                }
            }
        }
        stage('Deploy') {
            steps {
                container('kubectl') {
                    script {
                        sh "apt-get update && apt-get install -y curl"
                        sh "curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl"
                        sh "chmod +x ./kubectl"
                        sh "mv ./kubectl /usr/local/bin/kubectl"
                        def checkDeployment = sh(script: "kubectl get deployments | grep ${microserviceName}", returnStatus: true)
                        if(checkDeployment != 0) {
                            sh "kubectl apply -f deploy/deploy.yaml"
                        }
                        sh "kubectl set image deployment/${microserviceName} ${microserviceName}=digitalrasta/article-${microserviceName}:${BUILD_NUMBER}"
                    }
                }
            }
        }
    }
}

That pipeline spawns a pod on Kubernetes containing 3 containers – but we only see the definition of 2 of them, as the 3rd container is the JNLP Jenkins agent. The first container is used to build the Docker image with our microservice, tag it with BUILD_NUMBER and push it to Dockerhub (note that the push assumes a Jenkins username/password credential with the ID "dockerhub"); the second is Ubuntu, where kubectl is installed to perform the deployment.

CD is done simply by updating the Docker image in the corresponding deployment, so the new image is automatically pulled by Kubernetes from Dockerhub.
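
The deploy/deploy.yaml applied by the pipeline is not shown in the article; a minimal sketch of what it could contain for microservice1 (replicas, labels and the image tag are only placeholders – the pipeline immediately overrides the image with kubectl set image):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: microservice1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: microservice1
  template:
    metadata:
      labels:
        app: microservice1
    spec:
      containers:
        # the container name must match the name used in "kubectl set image deployment/microservice1 microservice1=..."
        - name: microservice1
          image: digitalrasta/article-microservice1:latest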

Please also be aware of the ServiceAccount named "cicd" which must be installed on the cluster:

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cicd

---

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cicd
rules:
  - apiGroups: ["extensions", "apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["jakubbujny.com"]
    resources: ["jenkinspipelines"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cicd
subjects:
  - kind: ServiceAccount
    name: cicd
roleRef:
  kind: Role
  name: cicd
  apiGroup: rbac.authorization.k8s.io

The cicd ServiceAccount has permissions to manipulate deployments and also to operate on our custom API group jakubbujny.com and its jenkinspipelines resource – that's our CRD, which is described in the next section. The latter permission is not really needed, as "jenkinspipeline" resources should be created by a cluster admin, but I left it in to make the example clearer.

CustomResourceDefinition and Operator

The final part is to build our own Operator with an API definition. To do that we need a Github repository and operator-sdk, so we can start with the following command:

operator-sdk new jenkins-pipeline-operator --repo github.com/jakubbujny/jenkins-pipeline-operator

That command creates the basic folder structure with boilerplate code.
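
For orientation, the generated layout looks roughly like this (operator-sdk v0.x – details may differ between SDK versions):

jenkins-pipeline-operator/
  build/Dockerfile          # operator image build
  cmd/manager/main.go       # operator entrypoint
  deploy/                   # operator.yaml, role.yaml, role_binding.yaml, service_account.yaml
  pkg/apis/                 # API types (filled in by "operator-sdk add api")
  pkg/controller/           # controllers (filled in by "operator-sdk add controller")
  version/version.go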

Next we add our own API and a Controller which will react to changes in that API:

operator-sdk add api --api-version=jakubbujny.com/v1alpha1 --kind=JenkinsPipeline

operator-sdk add controller --api-version=jakubbujny.com/v1alpha1 --kind=JenkinsPipeline

We need a field in our API definition to hold the microservice name for which the pipeline should be created – let's modify the following file: pkg/apis/jakubbujny/v1alpha1/jenkinspipeline_types.go

type JenkinsPipelineSpec struct {
	Microservice string `json:"microservice"`
}

Now we must regenerate the API definitions so the YAML configuration in the deploy directory stays in sync with our code. To do that we issue the command:

operator-sdk generate openapi

The final and most important part is to write the Go code which integrates with Jenkins – to make that integration work we need to generate an API token in Jenkins and pass it to the operator. I simply did that with environment variables in deploy/operator.yaml.
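
That part of deploy/operator.yaml is not shown in the article; here is a hedged sketch of how the container spec could pass those values (the variable names match what the controller later reads with os.Getenv, while the Jenkins URL is an assumption based on the Helm release named "jenkins" in the default namespace):

# fragment of deploy/operator.yaml – only the env section is relevant here
spec:
  template:
    spec:
      containers:
        - name: jenkins-pipeline-operator
          image: digitalrasta/jenkins-pipeline-operator:latest
          env:
            - name: JENKINS_URL
              value: "http://jenkins.default.svc.cluster.local:8080"
            - name: JENKINS_API_TOKEN
              value: "<API token generated in the Jenkins user settings>"  # better kept in a Secret and injected with valueFrom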

Let's go to pkg/controller/jenkinspipeline/jenkinspipeline_controller.go – I will describe only the most important parts.

func (r *ReconcileJenkinsPipeline) Reconcile(request reconcile.Request) (reconcile.Result, error) {
	reqLogger := log.WithValues("Request.Namespace", request.Namespace, "Request.Name", request.Name)
	reqLogger.Info("Reconciling JenkinsPipeline")

	// Fetch the JenkinsPipeline instance
	instance := &jakubbujnyv1alpha1.JenkinsPipeline{}
	err := r.client.Get(context.TODO(), request.NamespacedName, instance)
	if err != nil {
		if errors.IsNotFound(err) {
			// Request object not found, could have been deleted after reconcile request.
			// Owned objects are automatically garbage collected. For additional cleanup logic use finalizers.
			// Return and don't requeue
			return reconcile.Result{}, nil
		}
		// Error reading the object - requeue the request.
		return reconcile.Result{}, err
	}

	resp, err := getSeedJob()

	if err != nil {
		reqLogger.Error(err, "Failed to get seed config to check whether job exists")
		return reconcile.Result{}, err
	}

	if resp.StatusCode == 404 {
		reqLogger.Info("Seed job not found so must be created for microservice "+instance.Spec.Microservice)

		resp, err := createSeedJob()
		err = handleResponse(resp, err, reqLogger, "create seed job")
		if err != nil {
			return reconcile.Result{}, err
		}

		resp, err = updateSeedJob(instance.Spec.Microservice)
		err = handleResponse(resp, err, reqLogger, "update seed job")
		if err != nil {
			return reconcile.Result{}, err
		}
	} else if resp.StatusCode == 200 {
		reqLogger.Info("Seed job found so must be updated for microservice "+instance.Spec.Microservice)
		resp, err = updateSeedJob(instance.Spec.Microservice)
		err = handleResponse(resp, err, reqLogger, "update seed job")
		if err != nil {
			return reconcile.Result{}, err
		}
	} else {
		err = coreErrors.New(fmt.Sprintf("Received invalid response from Jenkins %s",resp.Status))
		reqLogger.Error(err, "Failed to get seed config to check whether job exists")
		return reconcile.Result{}, err
	}

	resp, err = triggerSeedJob()
	err = handleResponse(resp, err, reqLogger, "trigger seed job")
	if err != nil {
		return reconcile.Result{}, err
	}

	return reconcile.Result{}, nil
}
func handleResponse(resp *http.Response, err error, reqLogger logr.Logger, action string) error {
	if err != nil {
		reqLogger.Error(err, "Failed to "+action)
		return err
	}

	if resp == nil {
		return nil
	}

	if resp.StatusCode != 200 {
		err = coreErrors.New(fmt.Sprintf("Received invalid response from Jenkins %s",resp.Status))
		reqLogger.Error(err, "Failed to"+action)
		return err
	}
	return nil
}

func decorateRequestToJenkinsWithAuth(req *http.Request) {
	jenkinsApiToken := os.Getenv("JENKINS_API_TOKEN")
	req.Header.Add("Authorization", "Basic "+ b64.StdEncoding.EncodeToString([]byte("admin:"+jenkinsApiToken)))
}

func getSeedJob() (*http.Response, error) {
	req, err := http.NewRequest("GET", os.Getenv("JENKINS_URL")+"/job/seed/config.xml", nil)
	if err != nil {
		return nil, err
	}
	decorateRequestToJenkinsWithAuth(req)
	return (&http.Client{}).Do(req)
}

func createSeedJob() (*http.Response, error) {
	seedFileData, err := ioutil.ReadFile("/opt/seed.xml")
	if err != nil {
		return nil, err
	}

	req, err := http.NewRequest("POST", os.Getenv("JENKINS_URL")+"/createItem?name=seed", bytes.NewBuffer(seedFileData))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-type", "text/xml")
	decorateRequestToJenkinsWithAuth(req)
	return (&http.Client{}).Do(req)
}

func updateSeedJob(microservice string) (*http.Response, error) {
	resp, err := getSeedJob()
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	buf := new(bytes.Buffer)
	_, err = buf.ReadFrom(resp.Body)
	if err != nil {
		return nil, err
	}
	seedXml := buf.String()

	r := regexp.MustCompile(`<defaultValue>(.+)<\/defaultValue>`)
	foundMicroservices := r.FindStringSubmatch(seedXml)
	if len(foundMicroservices) < 2 {
		return nil, coreErrors.New("could not find <defaultValue> in seed job config")
	}

	toReplace := ""
	if strings.Contains(foundMicroservices[1], microservice) {
		return nil,nil
	} else {
		if foundMicroservices[1] == "default" {
			toReplace = microservice
		} else {
			toReplace = foundMicroservices[1] + "," + microservice
		}
	}

	toUpdate := r.ReplaceAllString(seedXml, fmt.Sprintf("<defaultValue>%s</defaultValue>", toReplace))

	req, err := http.NewRequest("POST", os.Getenv("JENKINS_URL")+"/job/seed/config.xml", bytes.NewBuffer([]byte(toUpdate)))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-type", "text/xml")
	decorateRequestToJenkinsWithAuth(req)
	return (&http.Client{}).Do(req)
}

func triggerSeedJob() (*http.Response, error) {
	req, err := http.NewRequest("POST", os.Getenv("JENKINS_URL")+"/job/seed/buildWithParameters", nil)
	if err != nil {
		return nil, err
	}
	decorateRequestToJenkinsWithAuth(req)
	return (&http.Client{}).Do(req)
}

The Reconcile function is triggered whenever the state in Kubernetes must be synced, usually when a new object is created. After fetching the JenkinsPipeline instance we call getSeedJob(), which makes a request to Jenkins to check whether the seed job already exists – if not (404 status code), it is created with the default config located in build/seed.xml, which is added to the operator's Docker image in build/Dockerfile.
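
The exact Dockerfile change is not shown in the article; assuming the default operator-sdk build (project root as the Docker build context), adding the seed config could be a single extra line appended to build/Dockerfile:

# copy the seed job definition into the operator image so the controller can POST it to Jenkins' /createItem endpoint
COPY build/seed.xml /opt/seed.xml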

If the seed job already exists, it must be updated to add the microservice name to the list of parameters – this is done by running a regular expression over the seed job's config.xml to change the default value of the projects parameter.

Finally the program triggers the seed job, so it gets executed and the new pipeline is created. Now we should build and push the operator Docker image and then deploy it:

operator-sdk build digitalrasta/jenkins-pipeline-operator

docker push digitalrasta/jenkins-pipeline-operator:latest

kubectl apply -f deploy

And now we can create a pipeline for a microservice by applying the following resource:

apiVersion: "jakubbujny.com/v1alpha1"
kind: "JenkinsPipeline"
metadata:
  name: "microservice1"
spec:
  microservice: "microservice1"

One weak point is that we didn't create a Finalizer for the JenkinsPipeline resource – that means after the resource is deleted the seed job's parameter won't be modified, so the pipeline for that microservice will still exist. But Finalizers are a topic for another article.
