Continuous Delivery requirements and tools

Everything about Continuous Delivery

In this post, I will cover:

  • What continuous delivery is
  • The code-level requirements for continuous delivery
  • The architectural requirements for CD: monoliths vs. microservices
  • Mutable vs. immutable servers

After covering all of these facets, I will review some actual deployment strategies, and then I will move on to the tools available to help create a continuous delivery process.

Continuous delivery, also known as CD, is a way of building software such that it can be deployed to a specified environment whenever you want. In particular, you should be able to deploy to production, ideally with one command or the push of a button.

This means you should be able to select the version of the software you want (usually the latest version to pass all the tests), select the environment you want to deploy to, push a button, and have the code deployed.

If you’ve ever deployed something of any real complexity, then you already understand the value in this. If you’re doing deployments on evenings and weekends, it’s a sign that something is wrong with your deployment process. And if you’re concerned that a deployment to production is going to break something, then again, something may be wrong. Now, I’m not suggesting that nothing will ever break in a continuous delivery set-up. But it should be the exception and not the rule.

In my Continuous Integration post, I mentioned that CI, as a process, is responsible for code-level testing. Its job is to ensure that the code is in a working state. If code from your CI server is always in a working state, then your continuous delivery server can pick up that code and start in on its process.

Continuous delivery should start by deploying your software to a testing environment that mirrors production. It can be a scaled-down version, and that’s okay. Then it should run the automated acceptance tests. The job of your acceptance tests is to ensure that the requirements for your software have been met. Automated acceptance tests tell the developer when they have completed their tasks, and they also serve as a set of regression tests, ensuring that no new code changes have broken existing functionality.

Now, if your automated acceptance tests fail, the process should stop. The developer should be automatically notified, and someone should be immediately assigned to resolve those issues. When the acceptance tests are passing, any non-functional automated tests can be run. I like to have load testing performed here, as well as more in-depth security audits: automated scans that pick off the low-hanging fruit, things like SQL injection, cross-site scripting, and known misconfigurations. Developers, ops, and security engineers should collaborate on a testing plan for these automated tests.

Once your automated tests have passed, downstream teams can deploy the software to a testing environment of their choosing and perform any required manual tasks such as user acceptance testing. Assuming your software passes all of the manual tests, then you’re ready to deploy to production.
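
To make that flow concrete, here’s a minimal sketch of such a pipeline in Python; the stage functions are placeholders standing in for your real deploy scripts and test suites:

```python
# A minimal sketch of the delivery pipeline described above; each lambda is a
# placeholder for a real deploy script or test suite.
def run_pipeline(version: str) -> bool:
    stages = [
        ("deploy to the production-mirroring test environment", lambda: True),
        ("automated acceptance tests", lambda: True),
        ("non-functional tests (load, security scans)", lambda: True),
    ]
    for name, stage in stages:
        print(f"{version}: {name}")
        if not stage():
            # On any failure, the process stops and someone gets assigned.
            print(f"{version}: FAILED at '{name}'; stop and notify the developer")
            return False
    print(f"{version}: ready for manual UAT, then production")
    return True

run_pipeline("build-1.4.2")
```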

Code-level requirements summary

  • Feature toggles, and in particular release toggles, are useful in small, well-thought-out doses.
  • Modular coding practices help to make code that’s easier to work on and easier to test.
  • Things like dependency injection can help.
  • And last, security is everyone’s job, and that includes the developers.

Of these four, I will explore what feature toggles are, as the rest are quite self-explanatory.

Feature toggles

What are feature toggles?: If you have a feature that takes a while to implement, then you need some way for it to be deployable while it’s in an incomplete state. The optimal solution is to develop the code in a very modular and incremental way. By doing this, you’re not really introducing any additional technical debt.

For new functionality, this method is not too difficult. However, it becomes more difficult when we need to make changes to existing code, especially when those changes span multiple sprints.

So, how do we deal with features that span across multiple sprints?

We can use a technique called feature toggles. Feature toggles are a way to have your code check whether a given feature is enabled and, if so, allow that code path to execute.

Release toggles

Now, there are different types of feature toggles for different scenarios; however, I’m going to focus on release toggles. Release toggles are a method that allows developers to conditionally execute code based on the state of the toggle. What this means is that, as developers, we can build our code to check whether a feature is enabled. If so, we execute the code; if not, we just ignore it. This allows new features that are not complete to be deployed without breaking anything.
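
Here’s a minimal sketch of a release toggle in Python. The feature names are hypothetical, and a real project would usually load toggle state from a config file or a feature-flag service rather than a hard-coded dictionary:

```python
# Toggle state would normally come from config or a flag service; this
# hard-coded dict is just for illustration.
TOGGLES = {
    "new_checkout_flow": False,  # spans multiple sprints: deployed, but dark
}

def is_enabled(feature: str) -> bool:
    """Report whether a feature toggle is on (unknown toggles default to off)."""
    return TOGGLES.get(feature, False)

def legacy_checkout(cart: list) -> str:
    return f"checked out {len(cart)} items (legacy flow)"

def new_checkout(cart: list) -> str:
    return f"checked out {len(cart)} items (new flow, incomplete)"

def checkout(cart: list) -> str:
    # The incomplete feature ships with every deploy, but only runs if enabled.
    if is_enabled("new_checkout_flow"):
        return new_checkout(cart)
    return legacy_checkout(cart)

print(checkout(["book", "pen"]))  # legacy flow while the toggle is off
```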

Alright, now before you go off and wrap everything in a million feature toggles there’s a downside you need to think about first.

Release toggles are meant to be short lived, and not used indefinitely.

They’re a short-term step to help ensure that you can continue to deploy code to your main line while creating features that span multiple sprints. And these toggles create what we call technical debt.

Technical debt is a concept where developers save time in the short term by doing something the easy way, but have to spend more time later to redo things the correct way.

And the more technical debt that builds up, the more difficult a project becomes to work on over time. Using release toggles creates technical debt because we have to remove them later, once the feature is complete. We also have to consider how these toggles impact testing, because each toggle has two states that need to be tested, and the combinations grow with every new toggle: n toggles mean 2^n possible states. At a certain point, that won’t scale, so we have to take care to use release toggles only when needed.
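
To see how quickly that grows, here’s a quick sketch (with made-up toggle names) that enumerates the test matrix:

```python
from itertools import product

# n toggles produce 2**n on/off combinations to consider when testing.
toggles = ["new_checkout_flow", "redesigned_search", "beta_pricing"]

combinations = [dict(zip(toggles, states))
                for states in product([False, True], repeat=len(toggles))]

print(len(combinations))  # 8 combinations for 3 toggles; 10 toggles would be 1024
```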

One final note on toggles: if you’re going to use them, you should have some mechanism in place that can report the state of all of the toggles. Debugging software can be difficult enough; adding a bunch of hidden toggles into the mix can be downright painful.
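
One simple mechanism, sketched below, is a report function you could expose on an admin page or log at startup; this is just one possible approach, not a prescribed design:

```python
def toggle_report(toggles: dict) -> str:
    """Render the current state of every toggle, one per line."""
    return "\n".join(f"{name}: {'ON' if state else 'off'}"
                     for name, state in sorted(toggles.items()))

print(toggle_report({"new_checkout_flow": False, "redesigned_search": True}))
# new_checkout_flow: off
# redesigned_search: ON
```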

Architecture level requirements

Just like at the source code level, modularity at the application architecture level helps to improve the ability to practice continuous delivery.

It’s not uncommon for web applications to start out as monoliths, which basically means that all of the modules that comprise your software are in one application. For context, this is the opposite of the microservices architecture, where different services are broken out into discrete, deployable applications. Amazon started out as a monolith; so did Netflix and Etsy. And while Amazon and Netflix have moved toward microservices, Etsy remains a monolith.

So, what’s the difference? A monolith is a singular application that contains all of the modules needed to perform its job. Now, that doesn’t mean that it can’t reach out to external services, it just means that all of the logic is in the same application, and it’s typically deployed as a whole.

Microservices are discrete services that serve a specific purpose, and they communicate at the API layer, allowing them to be replaced by anything else that implements that API. It’s similar to how dependency injection allows software modules to be swapped out with anything that implements a shared interface.
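
That analogy is easy to show in code. In this sketch (the service names are invented for illustration), the caller depends only on a shared contract, so any implementation can be swapped in, just as any service implementing the same API can replace a microservice:

```python
from typing import Protocol

class PaymentService(Protocol):
    """The shared contract, analogous to a microservice's API."""
    def charge(self, cents: int) -> bool: ...

class RealGateway:
    def charge(self, cents: int) -> bool:
        return True  # stand-in for a real payment API call

class FakeGateway:
    def charge(self, cents: int) -> bool:
        return True  # test double honoring the same contract

def place_order(payments: PaymentService, cents: int) -> bool:
    # The caller never knows (or cares) which implementation it received.
    return payments.charge(cents)

print(place_order(RealGateway(), 500))  # True
print(place_order(FakeGateway(), 500))  # True, with no caller changes
```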

Now, you may be wondering how any of this falls under the topic of continuous delivery. When it comes to deploying software often, a monolith can grow to a certain point where testing and deployments become very time consuming. Sure, they’re automated, though they can still become a bottleneck. Since continuous delivery is about deploying higher quality software, there are implications here as well. Now, I’m not suggesting that either one is inherently higher quality than the other.

However, monoliths can cause technology lock-in: older monoliths may be built on a technology stack that doesn’t promote best practices for modern software development, and large code bases in general risk getting to the point where it seems easier for developers to get something working than to get it done correctly. Monoliths aren’t alone here. Because microservices are not all built with the same technology stack, it’s possible that the wrong tech for the job has been selected. However, in this case, refactoring should in theory be simpler.

The logical question becomes: when should you use a monolith, and when do microservices make sense? The short answer is, it depends. There’s really no one right answer; however, I can share some opinions based on my time as a developer.

For greenfield development, which is a term that basically means a new project, I like to recommend that people start with a monolith, and here’s why. If you start by trying to break everything out into its own service at the beginning, you won’t have a clear enough picture of where to start. You’ll try to define your service boundaries as best you can; however, you don’t know what you don’t know. There’s an expression that hindsight is 20/20: when looking at something after it’s happened, things become obvious, and after all is said and done, you have all the information you need to know what you should have done. Starting with a monolith allows you to gain that hindsight as you go. Don’t be afraid to develop something that you know will be replaced once you have a clearer understanding of what you need.

Once a monolith grows too large, it becomes more difficult to have a lot of different teams working on it in parallel. Large monoliths can take a while to build and test as well, and they can cause technology lock-in. However, up until that point, a monolith remains a valid option.

Now, if you already have a large monolith, or you’re doing brownfield development, which is building around or on an existing application, this is where microservices start to become viable. Once you have an understanding of the application, its requirements, the requirements of the users, and so on, you can start to identify the areas of the application that could be refactored out into their own services, or identify new functionality that should be created as a microservice. This is where you get to rethink the technology that’s being used.

When considering breaking things out into their own tech stack, you can select the tool that’s right for that particular task. Microservices are basically single-purpose applications that interact with the rest of the world through a well-established API, and because they’re single-purpose, they tend to be on the smaller side, at least compared to a large monolith. Because of this, developers tend to like working on them more: unlike a monolith, they’re easier to understand, because you can review them holistically and understand all of the code. Microservices are a natural extension of modular software development, because you can replace an entire unit of functionality as long as it implements the same API.

However, because they’re isolated and interact at the API level, they can also become a black box. With a monolith, tracing a request through the entire stack tends to be fairly simple, and there are a lot of great tools out there that can give visibility into the inner workings of your application. If you implement microservices, you need to make sure you keep that same ability. If you can’t trace a request through its complete lifecycle, your ability to identify and resolve problems goes way down, and when bugs arise, as they inevitably will, you’ll struggle to fix even the simplest of them, and you’ll notice a drop in customer satisfaction because of it.

If you’re going to start implementing microservices, make sure you carefully consider the API implementation. Once it’s up and running, you’ll need to be careful about making changes that break things for other services that depend on your API. When implementing microservices, strongly consider the tech stack you plan to use. Make sure you’re not using something on the bleeding edge of technology, unless you can wait for critical bugs in the tech to be fixed on the vendor’s or community’s schedule.

There’s no perfect architecture. Even the best thing we have at any moment in time may not be suitable as technology continues to evolve. However, if you strive to build things in a modular and traceable way, then you’ll be able to better adapt to future changes. So, at a certain size, monoliths become a bit of a bottleneck. They can take longer to build and test, and having teams build out different sections can impact other teams.

However, until you hit these limits, monoliths are a reasonable way to go. Once you do hit these limits, microservices can help break down the application into a more manageable set of services, allowing you to build, test, and deploy faster. However, it’s not without its challenges as we’ve talked about.

Microservices and monoliths are developed, deployed, and operated in similar, yet different ways. When considering your continuous delivery plan, you need to think about where your application is now and where it’s going.

We talked about code-level changes that may be required, and we’ve talked about some architectural changes that might be required. In our next lecture, we’re gonna talk about some of the infrastructure choices that you’ll have to consider.

We’re gonna talk about mutable versus immutable servers.

Mutable vs Immutable servers

Unless your code is all running in some serverless environment, you’ll need servers to run your code, which means you need to think about how you’re going to manage them. In particular:

  • how you manage getting your software running on them, and
  • how you go about handling changes to your software.

Configuration Management Tools: Years ago, servers were set up and configured one at a time by people: they’d get the operating system loaded, get any required software running, and then manage changes manually. Over time, things started to get scripted out, adding a level of automation; repetitive tasks are boring, and scripting them out saves a lot of time. Eventually, configuration management tools became the ideal way to manage server setups. Configuration management allows engineers to specify the desired state of a server and have the tool ensure that state is reached.

A configuration management tool ensures that the things specified in code get implemented.

Tools like Chef, Ansible, Puppet, and SaltStack perform this task quite well. They allow engineers to list, in code, the things that need to be installed and the versions of those things, and the tools ensure that it happens.
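
The idea these tools share is declarative, idempotent convergence: you describe the desired state, and changes happen only when reality differs. Here’s a toy Python illustration of that principle, not how any of these tools are actually implemented:

```python
from pathlib import Path

def ensure_file_content(path: str, desired: str) -> str:
    """Converge a file toward its desired content; do nothing if it matches."""
    p = Path(path)
    if p.exists() and p.read_text() == desired:
        return "unchanged"   # already in the desired state, so no action taken
    p.write_text(desired)    # converge toward the declared state
    return "changed"

print(ensure_file_content("/tmp/motd", "welcome\n"))  # "changed" on first run
print(ensure_file_content("/tmp/motd", "welcome\n"))  # "unchanged" on reruns
```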

With these tools, server setup and configuration no longer require engineers to handle everything manually, and they replaced ad hoc scripts, adding some level of consistency across the industry (no more dealing with separate Unix teams and middleware teams).

With tools like these, we no longer had to worry about snowflake servers, a term for a server that’s unique and difficult to reproduce. Instead, what we have are phoenix servers, meaning that if a server were to die, it could be reborn. Having the ability to treat servers as disposable, because we can recreate them at will, gives us a lot of power. With such a wide range of configuration management tools available, we shouldn’t have snowflake servers anymore.

All servers should be phoenix servers, and the configuration scripts should be under version control. And if being able to get a server into a desired state from a known working configuration is good, then immutable servers are the next step in that evolution.

So what do I mean by immutable servers? First, let me explain what a mutable server is. Mutable servers are servers whose configuration and settings change over time: if you’re updating the operating system or your software, adjusting firewall rules, or making really any change, then it’s a mutable server. An immutable server, then, is a server whose settings don’t change; the server is only ever replaced.

An immutable server doesn’t receive any changes (updates, patches, etc.); it only gets replaced.

I want to make two points of clarification. First, when I talk about servers here, I’m really referring to virtualized instances; I’m not necessarily suggesting that physical servers be treated as disposable. And second, the term immutable, in the context of a server, is a bit of a misnomer, because things like memory and log files will change. The term is meant to convey that once the configuration is set and everything is loaded, no other outside changes will be made.

So we have these two models, mutable and immutable. If you’re using some form of configuration management, to either configure the server in the case of a mutable server, or to configure a base server image in the case of an immutable server, then both of these options are viable.

Both options can be deployed in a sustainable way, they’ll probably just require some slightly different tools. So what are the pros and cons for each? Let’s start with mutable. As we talked about before, if you’re using some form of configuration management, so that your servers can be easily configured into a known working state, then mutable servers are perfectly viable.

Pros and Cons of Mutable Servers

Here are some of the pros:

  • There are some fantastic tools out there for handling configuration management.
  • It can be useful for small projects or small teams that don’t want the extra overhead of managing virtual machine images.
  • Next, ad hoc commands for things like security patches are rather simple.
  • There are a lot of good resources for configuration management tools regarding deployments.
  • And finally, having configuration management scripts under version control allows shared ownership and the ability to roll back changes in the scripts when needed.

Here are some of the cons.

  • When an upgrade or deployment fails, the server can be left in a broken state, resulting in either troubleshooting what went wrong, or killing those instances and building based off of the previous configuration management settings.
  • Next, because we’re making changes over time, we’re not starting with a known working configuration each time we deploy.
  • And, depending on how we implement our configuration, we may have to use our configuration management tool to handle things like scaling, and we could lose out on some of the functionality that’s built into our cloud platform.
  • And finally, any change to the OS needs to be tested separately to ensure that nothing breaks. What I mean by this is, if we did something like remove an OS package that’s no longer needed, we’d first need to test it in a testing environment to ensure it won’t break anything in production.

Pros and Cons of Immutable Servers

So, let’s talk about immutable. As we discussed previously, immutable servers are the natural evolution of configuration management. Once a server is in a known working state, we can snapshot it and consider it production ready. This adds a lot of value and has become the method of choice for me.

Here are some of its pros.

  • Having the server in a known working state for each deployment gives us a higher level of trust in it.
  • Next, we can typically use deployment and scaling features that are available with our cloud platform, such as auto scaling with AWS.
  • Once a server image has been created, scaling out is a relatively quick process.
  • Next, if you attempt to deploy changes and they fail for any reason, it’s a matter of reverting to the previous server images. And since ad hoc commands shouldn’t be run, you ensure that operating system changes kick off the complete continuous delivery process, which allows your OS changes to be tested via your testing gates.
  • So, no additional testing process is required for OS-level changes.

And here are some of the cons.

  • The build times are longer because you’re merging a base operating system with your application and creating a server image based on that.
  • The baking process also creates server images that you need to store and manage.
  • Next, any changes in the operating system require a bake and redeploy, which again, can be more time consuming.
  • And finally, in addition to using configuration management tools to configure our base image, we’ll need additional tools to handle the baking and deployment. Now this isn’t a problem per se, however I don’t like introducing new tools without a really good reason.

So, both options are viable, as long as you’re using phoenix servers. I like to use an immutable server model because your code and operating system are tested together, and then, once it’s working, it’s basically shrink-wrapped, so you’re always using the exact same, well-tested configuration. There are a lot of great tools out there that can help with this. It does add some time to the deployment process up front by baking these images, but making changes go live and scaling them is pretty easy.

So, now that we’ve covered both mutable and immutable servers, we should take a look at actually deploying an application, and that’s what we’ll cover next.

Deployment Strategies

Blue-green deployment strategy

This is also sometimes called red-black; however, the colors are not important. They’re just placeholders to represent a group of servers.

The blue-green deployment strategy helps you deploy new releases while minimizing downtime. Here’s how it works at a very high level. This isn’t specific to any particular cloud vendor; it’s a generic pattern, so you’ll be able to implement it with just about any cloud platform.

You have two environments, named green and blue, and some sort of routing mechanism that determines which one is live based on where it sends traffic.

Let’s say that green is currently live and you want to deploy your latest changes. You deploy the latest build to the blue environment, and if everything looks good, you tell the router to swap the traffic from green to blue. Now blue is live, and green has the previous version of our code. So if we need to roll back, all we need to do is tell the router to switch back to green. This doesn’t mean that you have two production environments running at all times, because all of your servers should be phoenix servers anyway: you should be able to have the second environment created whenever it’s needed, and once the deployment is complete and everything looks good, you can remove that environment. After all, in a worst-case scenario, you could always have the environment spun up with the previous version and have the router send traffic to that. So, blue-green allows you to swap out environments with the flip of a switch, and if something goes wrong, you can reset it by flipping that switch again.
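
Here’s a generic sketch of that switch in Python. The environment names and version strings are placeholders, and in practice the router would be a load balancer or DNS record rather than a variable:

```python
environments = {"green": "v1.0", "blue": None}
live = "green"

def deploy_and_swap(new_version: str) -> str:
    """Deploy to the idle color, then flip the router so it becomes live."""
    global live
    idle = "blue" if live == "green" else "green"
    environments[idle] = new_version   # deploy the new build to the idle color
    # ... smoke-test the idle environment here before flipping ...
    previous, live = live, idle        # the flip: idle becomes live
    return f"live: {live} ({environments[live]}), rollback target: {previous}"

print(deploy_and_swap("v1.1"))  # blue goes live; green still holds v1.0
# Rolling back is just flipping `live` back to the other color.
```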

Canary Deployment strategy

Canary deployments are where you deploy the latest version of your software into a production environment, have a router select a group of users to route to it, and see how it behaves before deciding to roll it out further or remove it entirely.

What is a canary?: Canary deployments get their name from the practice of coal miners bringing canaries into the mines with them. Canaries, being more sensitive to the effects of toxic gases like carbon monoxide, served as an early warning system for the miners. As long as the canary was happy and healthy, the miners weren’t at risk of carbon monoxide poisoning. However, if carbon monoxide was present, the canary would succumb to its effects and, sadly, die, alerting the miners to the threat. Now, as sad as this was, these little heroes detected the threat early and potentially saved many lives.

So this style of deployment, like the practice it’s named for, is about detecting problems early. Canary deployments require the application to be rolled out to a small number of users, and you get to choose who that group of users is. It could be a random sampling, or it could be a specific group, perhaps based on geographic location or some other attribute. However you break it down, you start with a small group, and then you monitor the usage, looking for any problems. After all, it’s supposed to be an early warning system.

If you monitor your environment, then you’ll be able to automate the process of comparing the metrics of the baseline against the metrics of the canaries, and get a basic score for how the canaries are doing. If the score’s high enough, based on the threshold that you set, then everything looks good.

So if everything is going well, you can either choose to increase the roll out incrementally, or just fully deploy that version.

And if things are not going well, then you can just remove those canary servers from that environment, or redirect traffic back exclusively to your production environment.
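
Here’s a sketch of that automated comparison in Python, with made-up metric names and a deliberately naive scoring rule; real canary analysis tooling uses more sophisticated statistics:

```python
def canary_score(baseline: dict, canary: dict, tolerance: float = 0.10) -> float:
    """Fraction of metrics where the canary stays within tolerance of baseline."""
    ok = sum(1 for m in baseline
             if abs(canary[m] - baseline[m]) <= tolerance * baseline[m])
    return ok / len(baseline)

baseline = {"error_rate": 0.010, "p95_latency_ms": 220.0}
canary = {"error_rate": 0.012, "p95_latency_ms": 400.0}  # latency regression

score = canary_score(baseline, canary)
if score >= 0.9:  # the threshold you set
    print("looks good: widen the rollout or fully deploy")
else:
    print(f"score {score:.2f} below threshold: pull the canaries")
```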

CD tools

When it comes to continuous delivery, a lot of discussions immediately shift toward the supporting software and tooling options. I’ve intentionally held off on talking about tooling too much, because I didn’t want to make it seem like continuous delivery is all about the tools; the tooling we use exists to support people and practices.

However, now that we’ve talked about what continuous delivery is, why it’s useful, and some of the things involved in supporting it, it’s time to review some of the tools.

We’re not going to go into depth. This is gonna be a quick review to kind of highlight some of the tools that are out there that can help when building a continuous delivery process.

Jenkins

Jenkins is an open source automation server. It recently released version 2.0, has a lot of plugins available, and has become my go-to CI/CD server.

TravisCI

Next we have Travis CI. Travis is a CI/CD server with a very clean, easy-to-use interface, and it also has a hosted version that allows you to focus on writing your code rather than setting up a build server. That hosted version is free for open source projects, so try it out for your next GitHub-based project.

GoCD

GoCD is another open source continuous delivery server, and it has a very intuitive pipeline view. After using it for a short time, you start to feel very comfortable with it. This is a great tool and one that I’d like to do a bit more with.

TeamCity

Next we have TeamCity. TeamCity is a proprietary CI/CD server with a free version for a limited number of builds. I’ve had great experiences with TeamCity: it has a lot of functionality baked in, and it runs on both Windows and Linux. That said, it’s geared more toward the .NET stack, so it’s not my cup of tea.

Now this isn’t an exhaustive list; there are a lot of similar tools out there, some well supported and others less so. When selecting an automation server, look for something that can be extended via plugins. As your project grows, your automation tasks will grow too, and you don’t want to outgrow your automation server too quickly, so being able to create plugins will help to support any changes.

Configuration Management tools

Alright, now let’s check out some configuration management tools.

Configuration management offers the ability to manage your servers and infrastructure in a scriptable way. There’s no shortage of configuration management tools, and there are some great options; here are a few of them that I like.

Ansible

First, we have Ansible. This has become my current favorite for Linux-based tasks. You define your tasks in a YAML playbook, and you combine and run these playbooks to manage your servers. You can also run ad hoc commands on a set of servers to patch security holes or update software. It does require Python on the servers it’s managing, which for the most part already exists on Linux systems.

Chef

Next up is Chef. This is a Ruby-based tool. It uses a Ruby DSL to allow engineers to script out their tasks with a full programming language. Oftentimes this is easier for developers than for those without a lot of experience writing code. Chef has been around for a very long time and has a very large community around it, so finding documentation and examples tends to be fairly easy.

Puppet

Next, we have Puppet. This is a tool that allows you to specify the desired state of your server, or your infrastructure for that matter, and tell Puppet to make sure things match that desired state. Puppet has its own language, and it’s a bit different from languages such as Ruby or Python: the syntax looks more like a configuration file and may be easier for non-developers to pick up and run with.

PowerShell DSC

Next up is PowerShell DSC. If you’re on the Windows side of things, you can certainly use the tools mentioned above; however, you can also use Microsoft’s own desired state configuration tool. PowerShell lives up to its name: it offers a rich set of tools for managing a Windows environment on the command line, and PowerShell DSC takes that a step further and also applies it to Linux servers.

As I mentioned before, there are a lot of great tools out there. When selecting a configuration management tool, make sure you review your needs and the tools carefully; investing a lot of time in one of them only to learn that it’s not right for you can be costly. For example, if you’re a Ruby development team, then Chef may be a good choice, because extending it when needed will be fairly easy since those skills already exist on the team.

Immutable Server Tools

Next, let’s take a look at a couple of tools that will help if you choose to use immutable servers.

Packer

First up is Packer. This is a great tool by HashiCorp. It allows you to create different machine images from a single configuration: you can take a server based on some source image, provision it with your latest software, and then save the result in a target format such as an Amazon AMI or an Azure VM image, among others.

Spinnaker

Next we have Spinnaker. I was tempted to include this in the automation server section we looked at earlier; however, because it’s really about baking and deploying your machine images, I figured I would leave it here. This is a tool that has been open sourced by Netflix. It allows you to trigger builds in different ways, including from Jenkins, and then it takes your base machine image and installs your software on it. For this, it assumes the use of the operating system’s package manager. It bakes that image and saves it to your cloud provider, and from there it allows you to deploy to different environments. This is a very powerful tool and one that I recommend you check out.

Since we talked about immutable servers, it’s only natural that we talk about containers, since containers are another form of immutable server. The use of containers makes the phrase “it works on my machine” a bit less rage-inducing, because the development environment and production environment should be the same. Use Docker, nothing else.

Summary

  • First, if you’re going to deliver code regularly throughout the day, it needs to be able to run, even if some features aren’t complete.
  • Next, modular coding practices will make for code that’s easier to test and maintain.
  • Number three, security needs to be considered throughout the entire software life cycle.
  • Four, start almost all new development as a monolith.
  • Number five, gradually refactor monoliths that are too large into microservices.
  • Number six, immutable servers are great, however mutable servers can also work as long as you’re careful to ensure that they’re not snowflake servers.
  • Number seven, there are a lot of options for no- or low-downtime deployments, such as blue-green and canary.
  • Number eight, the list of tools that exists in DevOps and the Continuous Delivery space is longer than the Great Wall of China. Well, not really, but it’s pretty long.
  • Number nine, your CD process should make your releases so boring that you may not even know that new changes went live.
  • And number 10, finally, all companies that are considered unicorns are using continuous delivery practices, and it’s not a secret why.
