Build a Jenkins pipeline to upload a file to S3
Introduction
This post is a continuation of the previous post on Jenkins setup. In this post, we will aim to deploy our first pipeline, which will lint the index.html file and upload it to an AWS S3 bucket. For this we will use this GitHub repo: sk_devops, which contains the Jenkinsfile with our pipeline code. Before that, we need to install and configure Jenkins to talk to S3 and GitHub. We will see each of these steps in detail here.
Goal: Configure Jenkins plugins to talk to S3 and GitHub, and build a simple pipeline that uploads a file checked into GitHub to S3. Each time you make a change to the file, the pipeline will be triggered automatically.
S3 Jenkins setup
First, our goal is to configure our Jenkins environment so that it has the correct package to be able to copy something into S3. Log in to the Jenkins server running on your EC2 instance at http://ec2-54-175-86-99.compute-1.amazonaws.com:8080, click on Manage Jenkins -> Manage Plugins (the plugin manager), then click on the Available tab, filter with AWS, and select Pipeline: AWS Steps - this is going to give us a whole lot of functionality. Click on Install without restart. Once it finishes installing, click on Restart Jenkins.
Another way to restart Jenkins is from the command line: sudo systemctl restart jenkins
(base) shravan-Downloads$ ssh -i sk_jenkins_ec2.pem ubuntu@ec2-54-175-86-99.compute-1.amazonaws.com
ubuntu@ip-172-31-47-188:~$ sudo systemctl restart jenkins
ubuntu@ip-172-31-47-188:~$
Blue Ocean
Blue Ocean is a skin for Jenkins: it doesn't really change the core functionality, it just presents it in a different way, and you can always switch back and forth between the Jenkins classic interface and Blue Ocean. It also gives you some built-in diagnostics.
- Re-skins Jenkins to make management easier
- Built-in diagnostics
Blue Ocean Setup Summary:
- One repo per project
- Git repos can be re-used in multiple pipelines
- Make Jenkins become IaC (Infrastructure as Code)
Adding a GitHub repo to the pipeline
So, up until now, we have the S3 plugin installed and configured, and we are ready to go about setting up Blue Ocean with GitHub. Log in to the Jenkins console and click on Open Blue Ocean. If this is the first time, it will prompt you to create a new pipeline, and it gives us the following options by default:
We will choose GitHub; it will then ask you to create an Access Token to access GitHub. Give it a name and click Connect.
Once your GitHub repo is connected, it will give you a message saying that you don't have a Jenkinsfile in any of your branches.
Note: In order to store Jenkins configuration as code, it is necessary to use pipelines.
Multiple Pipelines
Now we are going to look at how to segregate our environments. You will be developing your code in a development environment. Then, when you have QA engineers test it, you want to do that in a staging environment, because you don't want to overwrite what they are doing. You want to allow them to do their testing work while you are still able to develop your code.
Development Pipelines:
- Development pipelines are kicked off very frequently, and with continuous deployment they will automatically update servers
Staging Pipelines:
- Staging is where QA will test the environment, so this needs to be kept more static to prevent interruptions (one way to keep the two environments separate is sketched below).
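One way to realize this separation in a multibranch pipeline is to gate deployment stages on the branch being built. The sketch below is a minimal illustration of that idea, not code from the sk_devops repo; the branch names development and staging are assumptions, so substitute your own.
pipeline {
    agent any
    stages {
        stage("Deploy to development") {
            // Run only for commits on the development branch.
            when { branch 'development' }
            steps {
                echo 'Deploying to the development environment'
            }
        }
        stage("Deploy to staging") {
            // Run only for the staging branch, so QA's environment is not
            // disturbed by frequent development pushes.
            when { branch 'staging' }
            steps {
                echo 'Deploying to the staging environment'
            }
        }
    }
}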
Pipeline Triggers
Now let's take a look at setting up a trigger, so that we continuously, "automagically," check whether our GitHub repo has been updated, and trigger the pipeline when it has.
The main question we are concerned with here is how you kick off continuous integration. In practice, that works with a trigger. This is fundamental to the whole process, and it involves modifying things both in your GitHub repository and in your Jenkins interface so that they communicate with each other, for instance, "hey, I have got this new branch that has been pull-requested" or "hey, I have taken this new branch and built a new job". So there's a give-and-take that goes on between those components, and that's what we are going to focus on next. We will see how this all works in the Jenkins interface.
- When code is pushed to the Git repo and gets merged through a pull request, a build will automatically kick off, i.e., continuously integrate.
- If the tests in a pipeline pass, deploy the code, i.e., continuously deploy.
How to trigger the pipeline?
Alright, so let's look at how we can trigger something to happen automatically. Specifically, we want Jenkins to continuously check GitHub and see if the information there has changed, versus the cached information on Jenkins.
First, click on the sk_devops repo, and then click Configure. We are concerned with Scan Repository Triggers: check the "Periodically if not otherwise run" option and set the interval to 1 minute.
Great, so now we have set it up such that any time new information is pushed to GitHub, Jenkins will notice within a minute and attempt to do something with that information, i.e., build.
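If you would rather keep this kind of polling in code than in the UI, a standalone declarative pipeline job can declare its own trigger in the Jenkinsfile (multibranch projects use the Scan Repository Triggers setting described above). Here is a minimal sketch using the built-in pollSCM trigger with a cron-style schedule:
pipeline {
    agent any
    // Poll the repository for changes roughly once a minute.
    triggers {
        pollSCM('* * * * *')
    }
    stages {
        stage("Build") {
            steps {
                echo 'Triggered by a change in the repository'
            }
        }
    }
}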
Example: Define a pipeline
A Pipeline can be created in one of the following ways:
- Through Blue Ocean - after setting up a Pipeline project in Blue Ocean, the Blue Ocean UI helps you write your Pipeline's Jenkinsfile and commit it to source control.
- Through the classic UI - you can enter a basic Pipeline directly in Jenkins through the classic UI.
- In SCM - you can write a Jenkinsfile manually, which you can commit to your project's source control repository.
The syntax for defining a Pipeline is the same with any of these approaches, but while Jenkins supports entering a Pipeline directly into the classic UI, it is generally considered best practice to define the Pipeline in a Jenkinsfile, which Jenkins will then load directly from source control.
Through Blue Ocean
Blue Ocean makes it easy to create a Pipeline project in Jenkins. A Pipeline can be generated from an existing Jenkinsfile in source control, or you can use the Blue Ocean Pipeline editor to create a new Pipeline for you; this will create a new Jenkinsfile and also commit it to GitHub.
Example pipeline created using the Blue Ocean console:
Example Jenkinsfile that got created and checked into GitHub automatically:
Through SCM (GitHub)
Define your own Jenkinsfile describing your pipeline. A pipeline contains stages, and each stage can contain multiple steps. In the example shown below we have 2 stages with 1 step each. Push the changes to your feature1 branch, and because you have set the "Periodically if not otherwise run" option above, the pipeline will be triggered and built automatically.
pipeline {
    agent any
    stages {
        // Stage 1: a simple sanity check that the pipeline runs.
        stage("Hello") {
            steps {
                sh 'echo \'Hello Shravan\''
            }
        }
        // Stage 2: lint every HTML file with tidy (-q: quiet, -e: show errors only).
        stage("Lint HTML") {
            steps {
                sh "tidy -q -e *.html"
            }
        }
    }
}
Here’s the output:
Pipeline Testing
Now we are going to get into testing. For this, we will use a simple HTML example. To get started, go to the command line prompt and run sudo apt install tidy.
ubuntu@ip-172-31-47-188:~$ sudo apt install tidy
ubuntu@ip-172-31-47-188:~$
This will put the tidy package onto our system. So now, we are going to go ahead and use our development pipeline.
Installing the AWS CodePipeline plugin and configuring AWS creds
- Install the AWS CodePipeline Jenkins plugin.
- Set up your AWS credentials with your access key and secret access key in Credentials.
- Create your S3 bucket (bucket names must be globally unique).
- Set up your pipeline. Note: your bucket name can’t be the same as mine.
- Screenshot a successful run and compare it to mine below.
Start by installing this plugin: click Install without restart, and once it finishes, click Restart. Once Jenkins is back up, in the left-hand pane you should see Credentials. Click on Credentials, then on global -> Add Credentials, which will take you to:
Now, add another stage in your Jenkinsfile for uploading the index.html file to an S3 bucket.
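Here is a minimal sketch of such a stage, meant to slot into the stages block of the Jenkinsfile above. It uses the withAWS and s3Upload steps from the Pipeline: AWS Steps plugin we installed earlier. The credentials ID aws-creds, the region, and the bucket name sk-devops-bucket are placeholders - use the ID you gave your credentials entry and your own unique bucket name.
        stage("Upload to S3") {
            steps {
                // withAWS scopes the AWS credentials and region for the steps inside it.
                withAWS(region: 'us-east-1', credentials: 'aws-creds') {
                    // Copy index.html from the workspace into the bucket.
                    s3Upload(file: 'index.html', bucket: 'sk-devops-bucket', path: 'index.html')
                }
            }
        }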
Open Blue Ocean to check the pipeline: