Set up a Jenkins Local DevOps Environment using Docker and WSL2

  • Deploy the script/app to a target S3 bucket on our local mock S3 service, which in a real-world scenario might be a component of an AWS-hosted process

Before Commencing…

The details of the base environment used for the setup are outlined below. You should be able to adapt the instructions to work with your preferred host OS.

  • WSL2 enabled with Ubuntu set as the default distro, i.e. the output of the following command on your system should look similar to:
PS C:\> wsl --list --verbose
  NAME      STATE      VERSION
* Ubuntu    Running    2
  • Ubuntu 20.04 (focal) as the distro release:
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal

1. Local Git Service for SCM — using Gogs

With the help of Gogs, we can spin up a local self-hosted Git service as a container.

Running Gogs as a Local Git Server

To run a Gogs instance and have configuration/repo data persisted onto the host, run the following:

$ mkdir -p $HOME/dev-env/gogs/data
$ docker run -d --name=gogs -p 10022:22 -p 10080:3000 -v $HOME/dev-env/gogs/data:/data gogs/gogs
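
One quick sanity check, assuming curl is available on the host, is to confirm the install page responds (any 2xx/3xx status is fine):

$ curl -sI http://localhost:10080/ | head -n 1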

Configure Git Service

Visit http://localhost:10080/ and complete the initial setup. Below are sample config values.
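
The exact values depend on your preferences; as an indicative example, settings consistent with the container started above would be (SQLite3 keeps the setup self-contained, and the HTTP port is the container-internal one):

Database Type:     SQLite3
Domain:            localhost
HTTP Port:         3000
Application URL:   http://localhost:10080/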

Signup to Local Service and Create Repo

The first user to go through the signup process is allocated the Administrator role. Sign up an initial user named git-user.

Method 1: WSL2 Using HTTP

  • To initialise the repo and create/push a README.md, run the following on the host:
$ mkdir -p $HOME/dev-env/my-repos/first-repo
$ cd $HOME/dev-env/my-repos/first-repo
$ echo "# My First Repo">README.md
$ git init
$ git add README.md
$ git commit -m "first commit"
$ git remote add origin http://git-user@localhost:10080/git-user/first-repo.git
$ git push -u origin master
Password for 'http://git-user@localhost:10080': ********
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 208 bytes | 208.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
  • The repo's .git/config should now contain the remote definition:
[remote "origin"]
    url = http://git-user@localhost:10080/git-user/first-repo.git
    fetch = +refs/heads/*:refs/remotes/origin/*
  • Set a local user name/email for the repo:
$ git config -f $HOME/dev-env/my-repos/first-repo/.git/config \
--add user.name git-user

$ git config -f $HOME/dev-env/my-repos/first-repo/.git/config \
--add user.email git-user@localhost.com

Method 2: WSL2 Using SSH

If you prefer working with SSH, follow the steps below to initialise the repo and upload a README.md.

  • Update SSH config on the host to reflect the correct private key location to use for the SSH connection to the Git service:
host localhost
    IdentityFile ~/.ssh/id_rsa
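
For the connection test below to succeed, the matching public key needs to be registered with Gogs first (via Your Settings --> SSH Keys in the web UI). Assuming the default key pair, print the key to paste in with:

$ cat ~/.ssh/id_rsa.pub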
$ ssh -T git@localhost -p 10022

Hi there, You've successfully authenticated, but Gogs does not provide shell access.
If this is unexpected, please log in with password and setup Gogs under another user.
$ mkdir -p $HOME/dev-env/my-repos/first-repo
$ cd $HOME/dev-env/my-repos/first-repo
$ echo "# My First Repo">README.md
$ git init
$ git add README.md
$ git commit -m "first commit"
$ git remote add origin ssh://git@localhost:10022/git-user/first-repo.git
$ git push -u origin master
[remote "origin"]
url = ssh://git@localhost:10022/git-user/first-repo.git
fetch = +refs/heads/*:refs/remotes/origin/*
  • At this point, you can start to work with the repo in VSCode by running:
$ cd $HOME/dev-env/my-repos/first-repo
$ code .

Working with Repos Outside of WSL2

Most (but not all) Windows Git clients can work with the local service, provided the cloned repository’s config file is correctly configured.

2. Local S3 Cloud Service

The procedure for setting up S3Ninja is described in detail at this link. Note that this is by no means a full AWS S3 stack; its primary purpose is to allow us to create local S3 URIs with basic ls/put/update/delete functionality for use in our Jenkins pipelines.

$ mkdir $HOME/dev-env/ninja
$ docker run -d -p 9444:9000 -v $HOME/dev-env/ninja:/home/sirius/data scireum/s3-ninja:6.4
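The pipeline later in this guide pushes to a bucket named my-app. One way to pre-create it from the host, assuming the AWS CLI is installed and configured with the access/secret key pair shown in the S3Ninja UI, is:

$ aws --endpoint-url http://localhost:9444 s3 mb s3://my-app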
  • Stop/remove the container when you're done testing it; it will be recreated later via docker-compose (see below)
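Since the run command above did not name the container, one way to do this (assuming it is the only container based on the scireum/s3-ninja:6.4 image) is:

$ docker stop $(docker ps -q --filter ancestor=scireum/s3-ninja:6.4)
$ docker rm $(docker ps -aq --filter ancestor=scireum/s3-ninja:6.4)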

3. Jenkins Docker Plugin and Agent Image

The Jenkins Docker plugin allows for the creation of ephemeral (“teardown”) Docker containers to act as agent/slave nodes. Docker Cloud templates are configured through the plugin to use a specific “slave” image (spawned as a container), which itself is capable of running containers and interacting with the Docker host. For our configuration, the local Docker host daemon can be accessed through the socket address:

unix:///var/run/docker.sock

Creating a Custom Jenkins Agent Image

Using the Jenkins agent Dockerfile as a base, we can derive a custom agent image that bundles in a Docker client static binary:

FROM jenkins/agent:latest-stretch-jdk11

ARG user=jenkins

ENV DOCKERVERSION=19.03.12

LABEL Description="This image is derived from jenkins/agent openjdk11. \
It includes docker static binary"

USER root

RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKERVERSION}.tgz \
&& tar xzvf docker-${DOCKERVERSION}.tgz --strip 1 \
-C /usr/local/bin docker/docker \
&& rm docker-${DOCKERVERSION}.tgz

WORKDIR /home/${user}

USER ${user}

Build the custom agent image:

$ docker build --rm -t jenkins/agent:custom .

4. Jenkins Master Node

The following container will later be folded into a docker-compose service; it serves as the Jenkins master node.

$ mkdir $HOME/dev-env/jenkins_home

$ docker run -p 8080:8080 -p 50000:50000 -d \
-v $HOME/dev-env/jenkins_home:/var/jenkins_home jenkins/jenkins:lts
  • The initial admin password can be located via the host, at:
$ cat $HOME/dev-env/jenkins_home/secrets/initialAdminPassword

Install Docker, Docker Pipeline & AWS Plugins

  • Once the initial configuration is complete, install the Docker plugin by choosing Manage Jenkins --> Manage Plugins --> Available and searching for Docker.
  • Repeat the procedure for the Docker Pipeline plugin and the AWS plugin used later for the S3 upload (e.g. Pipeline: AWS Steps).

Configure Docker Cloud Template

The cloud/plugin needs to be configured so that it knows which Docker host to connect to and which image to use when creating the agent/slave container.

  • From the Add New Cloud drop down, choose Docker
  • Configure a new Docker Template as shown below.
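
The values below are indicative rather than prescriptive; docker-agent is an assumed label, and the network name matches the docker-compose network created later in this guide:

Docker Host URI:             unix:///var/run/docker.sock
Docker Image:                jenkins/agent:custom
Labels:                      docker-agent
Remote File System Root:     /home/jenkins
Network (container setting): dev-env_dev_ops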

Add Credential Provider for Git & AWS

Before configuring credentials for the AWS bucket and Git repo, we need to add the credential provider types.

Configure Git & AWS Credentials on Jenkins Master Node

Configure Git credentials for the test repo first-repo, i.e. a username/password credential for Git username git-user.

5. Create docker-compose.yml

The following compose file brings all the services together into a docker network, with each container assigned a static IP address.

version: '3.7'
services:
  github_mock:
    image: gogs/gogs
    container_name: github_mock
    volumes:
      - ./gogs/data:/data
    ports:
      - "10022:22"
      - "10080:3000"
    networks:
      dev_ops:
        ipv4_address: 172.16.238.2

  s3_mock:
    image: scireum/s3-ninja:6.4
    container_name: s3_mock
    hostname: aws-s3
    volumes:
      - ./ninja:/home/sirius/data
    ports:
      - "9444:9000"
    networks:
      dev_ops:
        ipv4_address: 172.16.238.3

  jenkins_master:
    image: jenkins/jenkins:lts
    container_name: jenkins_master
    depends_on:
      - github_mock
      - s3_mock
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    links:
      - s3_mock
      - github_mock
    networks:
      dev_ops:
        ipv4_address: 172.16.238.4
    ports:
      - "8080:8080"
      - "50000:50000"

networks:
  dev_ops:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"
  • github_mock: 172.16.238.2
  • s3_mock: 172.16.238.3
  • jenkins_master: 172.16.238.4
  • The Jenkins docker agent container is not referenced in the docker-compose. It is provisioned as requested by pipeline node/agent directives. In the configuration of the agent Cloud template, we’ve included the name of the docker compose network that the agent container should join during initialisation.

6. Start the Environment

  • Stop/remove any containers launched earlier in this guide, if you haven’t already done so
  • Start the services using docker-compose, then relax the permissions on the Docker socket so that the Jenkins agent containers can talk to the Docker daemon:
$ cd $HOME/dev-env
$ sudo docker-compose up -d
$ sudo chmod a+rwx /var/run/docker.sock
  • To check the containers & internal IPs:
$ docker ps -q | xargs docker inspect \
--format='{{ .Name }} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

jenkins_master 172.16.238.4
github_mock 172.16.238.2
s3_mock 172.16.238.3
$ docker network ls

NETWORK ID     NAME              DRIVER    SCOPE
41b15682d16f   dev-env_dev_ops   bridge    local

7. Testing the Environment

An initial shakedown procedure using a basic pipeline is described next.

Jenkins Pipeline Overview

The Jenkins pipeline outlined below aims to verify that the network/services interact as expected.

  • Build the image using the repo’s Dockerfile and assign name/tag as hello_py:1
  • Run the image as a container on agent/slave and execute some commands from within this container, including hello_world.py
  • Push the hello_world.py app to our self-hosted S3 service bucket, s3://my-app/

Create & Upload Components to Git

Add the following to first-repo:

  • Create a Dockerfile containing the necessary steps to derive an image based on python:3.8, which we will use to test our python script
FROM python:3.8

# set the working directory in the container
WORKDIR /usr/src/app

# copy the content of the local src directory to the working directory
COPY hello_world.py .
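
The repo also needs the hello_world.py script itself; any trivial script will do for this shakedown, for example:

$ cd $HOME/dev-env/my-repos/first-repo
$ echo 'print("Hello from hello_world.py")' > hello_world.py
$ git add Dockerfile hello_world.py
$ git commit -m "add Dockerfile and hello_world.py"
$ git push origin master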

Creating the Pipeline

Note: In the pipeline definition, we need to reference the services via their internal docker network static IP addresses/ports.

  • Enter local-devops-net-test as the name of the pipeline
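
A minimal pipeline script along the following lines exercises the steps above. The agent label docker-agent and the credential IDs gogs-git-user / aws-s3-ninja are placeholders for whatever you configured earlier; the S3 upload assumes the Pipeline: AWS Steps plugin, and the docker stages assume the Cloud template mounts /var/run/docker.sock into the agent container:

pipeline {
    // label assumed to match the Docker Cloud template configured earlier
    agent { label 'docker-agent' }

    stages {
        stage('Checkout') {
            steps {
                // internal address of the Gogs container on the dev_ops network
                git url: 'http://172.16.238.2:3000/git-user/first-repo.git',
                    credentialsId: 'gogs-git-user'
            }
        }
        stage('Build image') {
            steps {
                // needs the docker CLI in the agent image and access to the host docker.sock
                sh 'docker build -t hello_py:1 .'
            }
        }
        stage('Run script') {
            steps {
                // run the freshly built image and execute the script inside it
                sh 'docker run --rm hello_py:1 python ./hello_world.py'
            }
        }
        stage('Push to S3') {
            steps {
                // endpoint is the internal address of the S3Ninja container
                withAWS(endpointUrl: 'http://172.16.238.3:9000',
                        credentials: 'aws-s3-ninja',
                        region: 'us-east-1') {
                    s3Upload(bucket: 'my-app', file: 'hello_world.py')
                }
            }
        }
    }
}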

Running the Pipeline using Jenkinsfile from Git

To test that the pipeline can be executed using a Jenkinsfile from SCM:

  • Modify the pipeline configuration to use the Jenkinsfile from repository first-repo
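
Assuming the pipeline script above is saved as Jenkinsfile at the root of the repo, push it alongside the other files:

$ cd $HOME/dev-env/my-repos/first-repo
$ git add Jenkinsfile
$ git commit -m "add Jenkinsfile"
$ git push origin master

In the job's SCM configuration, the repository URL should again use the internal address (http://172.16.238.2:3000/git-user/first-repo.git), for the reason noted earlier.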

Closing Remarks

In larger organisations, the DevOps team generally provides a range of offerings, including enablement of application development teams. This can often mean an application developer will have limited access/control over the deployment workflows for the respective environments.
