How we customize continuous delivery pipelines

Tarasov Aleksandr
Dec 20, 2019 · 5 min read


Photo by JJ Ying on Unsplash

We love microservices. We divide and conquer to deliver faster. We minimize risks to provide the best customer service.

At ANNA, we have nearly a hundred deployment units, including backend services, web frontend parts (yep, we have no monolithic frontend app), batch jobs, and more.

Every unit has to be built, tested, passed through quality gates, deployed, and then monitored. Some time ago, we did all of this in TeamCity. Each service had its own build configuration based on a shared template, which gave us both unification and customization. The template consisted of all possible CI/CD steps and controlled them via parameters that developers could override for their deployment unit.

Old scheme benefits:

  • unified build and deployment process with customization
  • teams know TeamCity well, and new developers usually do too

Old scheme drawbacks:

  • teams have different build needs, so at some point we had to split the template into two
  • hard to support and change because of the many shell scripts with conditions in the build steps
  • hard to parallelize build steps (especially on the free version)

We found a solution in CI/CD separation of concerns.

Continuous Integration (CI)

Main goals:

  • fast feedback about code quality and correctness
  • ongoing checks after a merge into branch/develop/master
  • building artifacts (libraries or Docker images)

Who owns it? The CI process is owned by the teams and can differ from team to team and from service to service. At the moment, some teams prefer to use TeamCity, others GitHub Actions. A sketch of what a team’s CI workflow might look like follows below.
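For illustration only, here is roughly what a CI workflow on GitHub Actions could look like for one of our services; the test command, registry, image name, and secret are hypothetical placeholders rather than our actual setup:

name: ci
on:
  push:
    branches: [develop, master]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # fast feedback: run the service's own checks
      - name: Test
        run: ./gradlew test
      # build the artifact that the CD pipeline will deploy later
      - name: Build Docker image
        run: docker build -t registry.example.com/my-service:${{ github.sha }} .
      # push only from master so CD always gets a traceable artifact
      - name: Push Docker image
        if: github.ref == 'refs/heads/master'
        run: |
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.example.com -u ci --password-stdin
          docker push registry.example.com/my-service:${{ github.sha }}

Whatever tool a team picks, the output CD cares about stays the same: a versioned artifact (in our case, a Docker image).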

Continuous Delivery (CD)

Main goals:

  • fast feedback about an artifact, including alerts and notifications
  • ongoing checks via quality gates
  • deliver product value to customers

Who owns it? The CD process is owned by the SRE team and cannot be changed by a product team for any service that is delivered to production. At the moment, we use Jenkins with a declarative pipeline for CD.

CI/CD separation of concerns diagram

Pipeline customizations

Let’s leave the CI process behind and dive deeper into the CD pipeline. As I said before, we have many services, and in an ideal world, we could push all of them through one golden pipeline.

But we have reasons to customize each service’s delivery path:

  • each service has its own quality requirements (for example, we don’t want to test a back-office web application via the customer’s mobile app)
  • each team wants to get deployment notifications in its own Slack channel
  • we want to roll out pipeline stages via feature toggles (some time ago we started to generate a changelog automatically, and we wanted to try this feature on a single service first)

This is what our typical pipeline looks like:

As you can see, some stages are skipped: the auto changelog is not supported for this service, and some kinds of tests are not relevant either. We do not want to spend time and resources on these actions, so skipping them saves both time and money.

Pipeline customization options

  • a pipeline job per service with auto-updates (Job DSL, a Jenkinsfile, or another way)
  • a common pipeline job with parameters
  • a common pipeline job with external configuration

The first option is hard for SRE and developers to support together, and you cannot see the whole picture on the Jenkins dashboard. It is tricky to keep an eye on nearly 100 jobs with the out-of-the-box UI (classic or Blue Ocean).

The second option requires support for passing parameters in at least two CI systems, is hard to document, and looks somewhat terrifying:

> curl -L -u $JENKINS_AUTH https://jenkins/job/deploy-job/buildWithParameters?token=$JENKINS_JOB_TOKEN&service=$SERVICE_NAME&version=$TAG&...and 100500 parameters next

The third option, external configuration, requires some code, but it looks better for developers and can be well documented, so we mixed the second and third options.

Our way

We keep only the minimal set of parameters required to start the pipeline: the service name, the version to deploy, and a few system params like the job token.

Other parameters, such as the Slack channel, the quality gates configuration, and the changelog feature, go into the OneYAML deployment definition (you can read about it in my previous article), where developers describe their services in a declarative way:

app_name: notifications
cpu: 0.5
mem: 256.0
mem_limit: 512.0
test:
  instances: 2
prod:
  instances: 4
pipeline:
  slack:
    channel: "#ci_banking"
  tests:
    component_api: True
    component_ws: False
    regression_api: True
    regression_mobile: True
  changelog:
    enabled: True

We deploy our services via Ansible, so we made a role that generates the pipeline configuration from the OneYAML definition:

container('deploy') {
    sh 'ansible-playbook app-generate-pipeline-config.yml --extra-vars "@anna-$app_name.yml"'
}
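The role itself is not shown here, but conceptually it is just template rendering: take the pipeline section of the OneYAML definition, fall back to a default for anything that is missing, and write out a flat pipeline_config.yml. A hypothetical sketch of such a role (the file names, variable paths, and default values are assumptions, not our exact code):

# roles/pipeline-config/tasks/main.yml (hypothetical)
- name: Render pipeline configuration from the OneYAML definition
  template:
    src: pipeline_config.yml.j2
    dest: pipeline_config.yml

# roles/pipeline-config/templates/pipeline_config.yml.j2 (hypothetical)
{% set cfg = pipeline | default({}) %}
{% set tests = cfg.tests | default({}) %}
{% set slack = cfg.slack | default({}) %}
{% set changelog = cfg.changelog | default({}) %}
run_component_api_tests: {{ tests.component_api | default(True) }}
run_component_ws_tests: {{ tests.component_ws | default(True) }}
run_regression_api_tests: {{ tests.regression_api | default(True) }}
run_regression_mobile_tests: {{ tests.regression_mobile | default(True) }}
slack_channel: "{{ slack.channel | default('#deployments') }}"
changelog_enabled: {{ changelog.enabled | default(False) }}
changelog_slack_channel: "{{ changelog.slack_channel | default('#changelog') }}"
run_security_checks: {{ cfg.security_checks | default(True) }}

The default() filters are what give every property a fallback value, which is why the service definition above can stay so short.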

After that step, we can see the generated config in the job log:

[Pipeline] sh
+ cat pipeline_config.yml
run_component_api_tests: True
run_component_ws_tests: False
run_regression_api_tests: True
run_regression_mobile_tests: True
slack_channel: "#ci_banking"changelog_enabled: True
changelog_slack_channel: "#changelog"
run_security_checks: True

Some config properties (here, changelog_slack_channel and run_security_checks) are not present in the OneYAML definition at all: every config property has a default value, and developers only fill in the properties whose values differ from the default. It keeps our service definitions short and precise.

But the most important thing is that we can quickly introduce a new pipeline step, turn it off by default, test it on a limited number of services, and only after that promote it to the other services. A small illustration follows below.
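For example, when we started with the automatic changelog, the stage could stay disabled by default while only the pilot service opted in through its OneYAML definition (an illustrative snippet, not a real service):

pipeline:
  changelog:
    enabled: True   # every other service keeps the default (disabled)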

And the icing on the cake: we keep all service definitions in one place, so we can easily search these properties and replace them to eliminate obsolete ones or refactor them.

Then we use the Pipeline Utility Steps plugin to read properties inside the pipeline and use them in different stages:

script {
    def pipelineConfig = readYaml file: 'pipeline_config.yml'
    ...
    changelogEnabled = pipelineConfig.changelog_enabled
}
...
stage('Post-Production Actions') {
    parallel {
        stage('Send Changelog') {
            when {
                expression { changelogEnabled }
            }
            ...
        }

        ...
    }
}

Voilà!

Benefits:

  1. teams can choose their preferred tool for their CI process
  2. a unified way to deploy any service, controlled by the SRE team
  3. developers manage the pipeline steps
  4. no knowledge of Jenkins pipelines is required for developers
  5. the configuration is agnostic to the CD engine

Drawbacks:

  1. YAML is hard to check and verify
  2. the solution involves code and support, which takes time


Tarasov Aleksandr

Principal Platform Engineer @ Cenomi. All thoughts are my own.