Set Up a Jenkins Local DevOps Environment Using Docker and WSL2

Tony Tannous
Aug 9, 2020

The main objective of this article is to demonstrate a procedure for building a local DevOps mock environment on a Windows 10 Pro (version 2004) host using WSL2 & Docker.

The local environment we aim to build comprises the following containers/images:

  • Gogs, a self-hosted Git (SCM) service
  • S3Ninja, a mock S3-compatible object store
  • A Jenkins master node
  • A custom Jenkins Docker agent image

Docker Compose will be used to network the containers using static IP addresses within a Docker network.

Before running the network, each image/container will be discussed and configured in isolation (where required). This should provide some insight into each component.

Once the environment is up and running, a Jenkins pipeline will be set up to achieve the following:

  • Test a simple Python app, which will be hosted in a repository on the local Git server
  • Deploy the script/app to a target S3 bucket on our local mock S3 service, which in a real-world scenario might be a component of an AWS-hosted process

Before Commencing…

The details of the base environment used for the setup are outlined below. You should be able to adapt the instructions to work with your preferred host OS.

  • Windows 10 Pro (2004)
  • WSL2 enabled with Ubuntu set as the default distro, i.e. the following commands produce equivalent output on your system (the distro details are as reported by lsb_release -a inside WSL)
PS C:\> wsl --list --verbose
  NAME      STATE      VERSION
* Ubuntu    Running    2

$ lsb_release -a
Description:    Ubuntu 20.04.1 LTS
Release:        20.04
Codename:       focal
  • WSL2 based Docker engine has been enabled via Docker settings
  • All commands throughout the article have been executed from a WSL prompt, with related folders also residing on the WSL2 filesystem under parent $HOME/dev-env

1. Local Git Service for SCM — using Gogs

With the help of Gogs, we can spin up a local self-hosted Git service as a container.

Running Gogs as a Local Git Server

To run a Gogs instance and have configuration/repo data persisted onto the host, run the following:

$ mkdir -p $HOME/dev-env/gogs/data
$ docker run -d --name=gogs -p 10022:22 -p 10080:3000 -v $HOME/dev-env/gogs/data:/data gogs/gogs

This serves the Gogs web interface at http://localhost:10080 and SSH at localhost:10022.

Configure Git Service

Visit http://localhost:10080/ and complete the initial setup. Below are sample config values.

After selecting Install Gogs, the browser is redirected to http://localhost:3000/user/login; however, since the container's internal port 3000 is mapped to host port 10080, we need to change the port in the URL to 10080.

Signup to Local Service and Create Repo

The first user to go through the sign-up process is allocated the Administrator role. The below shows the sign-up of the initial user git-user.

  • Create a new repo named first-repo

You will be able to work with the repo (from your host) in the same way you'd normally work with "real" GitHub repos, except that your remote repos are hosted at http://localhost:10080.
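For example, cloning over HTTP from the host works just as it would against GitHub (note the repo will be empty until content is pushed in the steps below):

$ git clone http://localhost:10080/git-user/first-repo.git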

The following describes initialising a local Git repo on the WSL2 filesystem using either HTTP or SSH. To reap the performance benefits of the WSL2 filesystem, you can work with the repo after initialisation using Visual Studio Code (VSCode) and the Remote WSL Extension.

Method 1: WSL2 Using HTTP

  • To initialise the repo and create/push a README.md, run the following on the host:
$ mkdir -p $HOME/dev-env/my-repos/first-repo
$ cd $HOME/dev-env/my-repos/first-repo
$ echo "# My First Repo">README.md
$ git init
$ git add README.md
$ git commit -m "first commit"
$ git remote add origin http://git-user@localhost:10080/git-user/first-repo.git
$ git push -u origin master
  • You’ll be asked for the local server’s Git credentials
Password for 'http://git-user@localhost:10080': ********
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Writing objects: 100% (3/3), 208 bytes | 208.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
  • The repo’s config file at .git/config should contain the correct remote

cat $HOME/dev-env/my-repos/first-repo/.git/config

[remote "origin"]
url = http://git-user@localhost:10080/git-user/first-repo.git
fetch = +refs/heads/*:refs/remotes/origin/*
  • To add a Git identity for the repo:
$ git config -f $HOME/dev-env/my-repos/first-repo/.git/config \
--add user.name git-user

$ git config -f $HOME/dev-env/my-repos/first-repo/.git/config \
--add user.email git-user@localhost.com
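To confirm the identity landed in the repo-local config, list it (the user.name and user.email entries set above should appear in the output):

$ git -C $HOME/dev-env/my-repos/first-repo config --local --list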

Method 2: WSL2 Using SSH

If you prefer working with SSH, follow the steps below to initialise and upload a README.md.

  • Ensure that your SSH public key has been installed on the local Git server (a key-generation sketch follows the config snippet below)
  • Update the SSH config on the host to reflect the correct private key location to use for the SSH connection to the Git service.

$HOME/.ssh/config

host localhost
IdentityFile ~/.ssh/id_rsa
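If you don't yet have a key pair, here's a minimal sketch for creating one and printing the public key to install via the Gogs web UI (the key path and comment below are assumptions; adjust to suit):

$ ssh-keygen -t rsa -b 4096 -C "git-user@localhost.com" -f ~/.ssh/id_rsa
$ cat ~/.ssh/id_rsa.pub
# copy the printed key into your Gogs account's SSH Keys settings page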
  • To test SSH connectivity:
$ ssh -T git@localhost -p 10022

Hi there, You've successfully authenticated, but Gogs does not provide shell access.
If this is unexpected, please log in with password and setup Gogs under another user.
  • Create README.md and commit/push from host to remote:
$ mkdir -p $HOME/dev-env/my-repos/first-repo
$ cd $HOME/dev-env/my-repos/first-repo
$ echo "# My First Repo">README.md
$ git init
$ git add README.md
$ git commit -m "first commit"
$ git remote add origin ssh://git@localhost:10022/git-user/first-repo.git
$ git push -u origin master
  • A quick check of the repo’s config file contents shows the correct SSH endpoint address being used.

cat $HOME/dev-env/my-repos/first-repo/.git/config

[remote "origin"]
url = ssh://git@localhost:10022/git-user/first-repo.git
fetch = +refs/heads/*:refs/remotes/origin/*
  • Once configuration is complete, you can stop/remove the container. We will be using it later on within the docker-compose service definitions
  • At this point, you can start to work with the repo in VSCode by running:
$ cd $HOME/dev-env/my-repos/first-repo
$ code .

Working with Repos Outside of WSL2

Most (but not all) Windows Git clients can work with the local service, provided the cloned repository’s config file is correctly configured.

Start by cloning the repo onto the Windows filesystem and using it with your preferred Git client.

TortoiseGit worked well as a Git client. It integrates into the Windows context menu, listing a choice of Git commands when you right-click within the folder containing the local repo.

2. Local S3 Cloud Service

The procedure for setting up S3Ninja is described in detail at this link. Note that this is by no means a full AWS S3 stack; its primary purpose is to allow us to create local S3 URIs with basic ls/put/update/delete functionality for use within Jenkinsfiles.

In summary, we will be transforming the following container into a docker-compose service:

$ mkdir $HOME/dev-env/ninja
$ docker run -d -p 9444:9000 -v $HOME/dev-env/ninja:/home/sirius/data scireum/s3-ninja:6.4

The configuration and web UI for the service can be accessed at http://localhost:9444/ui. Using the web UI, create a public S3 bucket named s3://my-app.

  • Make note of the Access/Secret Keys. These will be used to configure connection settings for use in pipelines (and in the optional sanity check below).
  • Stop/remove the container
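Optionally, before stopping the container, you can sanity-check the bucket from the host by pointing the AWS CLI at the mock endpoint. This assumes the AWS CLI is installed; the region value is arbitrary for the mock service:

$ aws configure   # enter the S3Ninja Access/Secret keys; e.g. region us-east-1
$ aws --endpoint-url http://localhost:9444 s3 ls s3://my-app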

3. Jenkins Docker Plugin and Agent Image

The Jenkins Docker plugin allows for the creation of “teardown” Docker containers via the use of an agent/slave node. Docker Cloud templates are configured through the plugin to use a specific “slave” image (spawned as a container), which is itself capable of running containers and interacting with the Docker host. For our configuration, the local Docker host daemon can be accessed through the socket address:

unix:///var/run/docker.sock

The Docker agent will need access to this daemon and will also require the docker CLI binary installed. The daemon can be exposed within the container via a mount, i.e. -v /var/run/docker.sock:/var/run/docker.sock.

Creating a Custom Jenkins Agent Image

Using the Jenkins agent Dockerfile as a base, we can derive a custom agent image that bundles in a Docker client static binary.

Dockerfile:

FROM jenkins/agent:latest-stretch-jdk11

ARG user=jenkins

ENV DOCKERVERSION=19.03.12

LABEL Description="This image is derived from jenkins/agent openjdk11. \
It includes docker static binary"

USER root

RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKERVERSION}.tgz \
&& tar xzvf docker-${DOCKERVERSION}.tgz --strip 1 \
-C /usr/local/bin docker/docker \
&& rm docker-${DOCKERVERSION}.tgz

WORKDIR /home/${user}

USER ${user}
  • Build and tag the image using:
docker build --rm -t jenkins/agent:custom .
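As a quick check that the static binary made it into the image, you can invoke it directly (the --entrypoint override is a precaution in case the base image defines one):

$ docker run --rm --entrypoint docker jenkins/agent:custom --version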

4. Jenkins Master Node

The following is what we aim to transform into a docker-compose service later on; it will serve as the Jenkins master node.

$ mkdir $HOME/dev-env/jenkins_home

$ docker run -p 8080:8080 -p 50000:50000 -d \
-v $HOME/dev-env/jenkins_home:/var/jenkins_home jenkins/jenkins:lts
  • Run the container and configure Jenkins from address http://localhost:8080
  • The initial admin password can be located via the host, at:
$ cat $HOME/dev-env/jenkins_home/secrets/initialAdminPassword
  • During the setup process, choose Install Suggested Plugins and follow the instructions below to add the required plugins for Docker/Docker Pipelines.

Install Docker, Docker Pipeline & AWS Plugins

  • Once config is complete, install the Docker plugin by choosing Manage Jenkins --> Manage Plugins --> Available and searching for Docker.
  • Choose Install without restart
  • Repeat the procedure for the Docker Pipeline and Pipeline: AWS Steps plugins

Configure Docker Cloud Template

The agent needs to be configured so that it knows which Docker host to connect to and which Docker image to use when creating the agent slave container.

To define the location of the docker host and corresponding image configuration:

  • Go to Manage Jenkins --> Manage Nodes and Clouds --> Configure Clouds
  • From the Add New Cloud drop down, choose Docker
  • Configure a new Docker Template as shown below.

Note: if you click Test Connection and receive a permission denied error, you can ignore it for now; a workaround is applied when starting the environment in section 6.

Name: docker
Labels (this value will be used in pipeline definitions): docker-agent
Docker Host URI: unix:///var/run/docker.sock
Docker Image: jenkins/agent:custom
Network: dev-env_dev_ops
Volumes: /var/run/docker.sock:/var/run/docker.sock
Remote File System Root: /home/jenkins/agent
Connect Method: Attach Docker Container
Pull Strategy: Never Pull
Enabled: Docker Cloud and Template are both enabled

Add Credential Provider for Git & AWS

Before configuring credentials for the AWS bucket and Git repo, we need to add the credential provider types.

For the mock setup, Jenkins is used as the provider, with authentication types Username/Password for Git over HTTP, and AWS Credentials for S3.

  • Navigate to Manage Jenkins --> Configure Credential Providers and add the types

Configure GIT & AWS Credentials on Jenkins Master Node

Configure Git credentials for the test repo first-repo. The below shows the setup for Git username git-user.

  • Manage Jenkins --> Manage Credentials --> New Item
  • Using the S3 Secret/Access keys taken from section Local S3 Cloud Service, configure access credentials for S3 bucket.

5. Create docker-compose.yml

The following compose file brings all the services together into a Docker network, with containers assigned static IP addresses.

  • $HOME/dev-env/docker-compose.yml
version: '3.7'
services:
  github_mock:
    image: gogs/gogs
    container_name: github_mock
    volumes:
      - ./gogs/data:/data
    ports:
      - "10022:22"
      - "10080:3000"
    networks:
      dev_ops:
        ipv4_address: 172.16.238.2

  s3_mock:
    image: scireum/s3-ninja:6.4
    container_name: s3_mock
    hostname: aws-s3
    volumes:
      - ./ninja:/home/sirius/data
    ports:
      - "9444:9000"
    networks:
      dev_ops:
        ipv4_address: 172.16.238.3

  jenkins_master:
    image: jenkins/jenkins:lts
    container_name: jenkins_master
    depends_on:
      - github_mock
      - s3_mock
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    links:
      - s3_mock
      - github_mock
    networks:
      dev_ops:
        ipv4_address: 172.16.238.4
    ports:
      - "8080:8080"
      - "50000:50000"

networks:
  dev_ops:
    ipam:
      driver: default
      config:
        - subnet: "172.16.238.0/24"

The network name used for the stack is dev_ops, with subnet 172.16.238.0/24. The static IPv4 addresses of each container on the network are as follows:

  • jenkins_master: 172.16.238.4
  • github_mock: 172.16.238.2
  • s3_mock : 172.16.238.3
  • The Jenkins Docker agent container is not referenced in the docker-compose file. It is provisioned on demand by pipeline node/agent directives. In the agent's Cloud template configuration, we've included the name of the docker-compose network that the agent container should join during initialisation.
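Optionally, validate the compose file before bringing the stack up; docker-compose config parses the file and reports any errors:

$ cd $HOME/dev-env
$ docker-compose config --quiet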

6. Start the Environment

  • Stop/remove all containers launched earlier in the article, if you haven’t already done so
  • Start the services using docker-compose
$ cd $HOME/dev-env
$ sudo docker-compose up -d
$ sudo chmod a+rwx /var/run/docker.sock
  • Note: the sudo chmod a+rwx /var/run/docker.sock included in the above block of commands was required to address a Permission Denied issue when attempting to connect from the Jenkins node to the Docker daemon at the internal container mount point /var/run/docker.sock. This should not be attempted in a production environment; it is a workaround for this specific setup.
  • To check the containers & internal IPs:
$ docker ps -q | xargs docker inspect \
--format='{{ .Name }} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'

jenkins_master 172.16.238.4
github_mock 172.16.238.2
s3_mock 172.16.238.3
  • The above shows 3 containers with their respective IP addresses on the internal Docker network created as part of the docker-compose.yml

Of course, we’re also able to access the above services/containers from our host, via the host-to-Docker bridge.
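A quick smoke test from the host (the exact HTTP status lines will vary; Jenkins, for example, returns 403 for anonymous requests):

$ curl -sI http://localhost:10080 | head -n 1   # Gogs web UI
$ curl -sI http://localhost:9444/ui | head -n 1 # S3Ninja UI
$ curl -sI http://localhost:8080 | head -n 1    # Jenkins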

To check that the network has been created, run docker network ls:

$ docker network ls 

NETWORK ID     NAME              DRIVER    SCOPE
41b15682d16f   dev-env_dev_ops   bridge    local

We can see that network dev-env_dev_ops has been created with naming convention <project_directory>_<docker_compose_network>, where <project_directory> is the name of the directory containing the docker-compose.yml, and <docker_compose_network> is the name assigned to the network within the compose file.

7. Testing the Environment

An initial shakedown procedure using a basic pipeline is described next.

Jenkins Pipeline Overview

The Jenkins pipeline outlined below aims to test that the network/services can interact as expected.

It contains the following stages/steps, which are executed on the designated node/agent.

Executed on the docker agent node:

  • Fetch the Git repo (first-repo)
  • Build the image using the repo’s Dockerfile and assign name/tag as hello_py:1
  • Run the image as a container on the agent/slave and execute some commands from within this container, including hello_world.py

Executed on the Jenkins Master node:

  • Fetch the same git repo first-repo into the master node's workspace
  • Push the hello_world.py app to our self hosted S3 service bucket, s3://my-app/

Create & Upload Components to Git

Add the following to first-repo:

  • Create a hello_world.py Python script, which contains print("Hello World")
  • Create a Dockerfile containing the necessary steps to derive an image based on python:3.8, which we will use to test our Python script

Dockerfile:

FROM python:3.8

# set the working directory in the container
WORKDIR /usr/src/app

# copy the script into the container's working directory
COPY hello_world.py .
  • Commit/push our code and Dockerfile to the repo

Creating the Pipeline

Note: in the pipeline definition, we need to reference the services via their internal Docker network static IP addresses/ports.

  • Create a new pipeline from Jenkins Master node via the portal’s New Item selection.
  • Enter local-devops-net-test as the name of the pipeline
  • In the Definition section, choose Pipeline script
  • Paste the following pipeline definition into the Script input area
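The exact script from the original article isn't preserved here, so the following is a hedged reconstruction of the stages described earlier; treat it as a sketch rather than the article's definitive pipeline. The credential IDs git-creds and aws-creds are placeholders for whatever IDs you assigned when configuring credentials, and the withAWS/s3Upload steps come from the Pipeline: AWS Steps plugin:

pipeline {
    agent none
    stages {
        stage('Test app on Docker agent') {
            // provisioned as a teardown container via the Docker Cloud template label
            agent { label 'docker-agent' }
            steps {
                // fetch first-repo via the Gogs container's internal network address
                git url: 'http://172.16.238.2:3000/git-user/first-repo.git',
                    credentialsId: 'git-creds'   // placeholder credential ID
                script {
                    // build the image from the repo's Dockerfile and tag it hello_py:1
                    def img = docker.build('hello_py:1')
                    // run the image and execute commands inside it, including the app
                    img.inside {
                        sh 'python --version'
                        sh 'python hello_world.py'
                    }
                }
            }
        }
        stage('Deploy app to mock S3') {
            // runs on the Jenkins master node
            agent { label 'master' }
            steps {
                git url: 'http://172.16.238.2:3000/git-user/first-repo.git',
                    credentialsId: 'git-creds'
                // point the AWS steps at the S3Ninja container's internal address;
                // path-style access is typically required for local S3 mocks
                withAWS(endpointUrl: 'http://172.16.238.3:9000', credentials: 'aws-creds') {
                    s3Upload(bucket: 'my-app', file: 'hello_world.py', pathStyleAccessEnabled: true)
                }
            }
        }
    }
}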
  • Once the pipeline has been saved, select Build Now from the pipeline menu. If all goes well, the console output should show each stage completing successfully.

Running the Pipeline using Jenkinsfile from Git

To test that the pipeline can be executed using a Jenkinsfile from SCM:

  • Create a Jenkinsfile with the contents of the pipeline definition above, and commit/push to first-repo
  • Modify the pipeline configuration to use the Jenkinsfile from repository first-repo
  • Rerun the build; if all goes well, the console output should log that the Jenkinsfile was fetched from the Git repo.

Closing Remarks

In larger organisations, the DevOps team generally provides a range of offerings, including enablement of application development teams. This can often mean an application developer will have limited access/control over the deployment workflows for the respective environments.

Access to a local mock environment gives a developer the ability to emulate deployment pipelines (or parts of them), with the aim of understanding and diagnosing errors specific to a set of components being deployed.

If you are after alternatives to S3Ninja for hosting a local S3 service, MinIO is another great option to explore. Other choices include LocalStack, which comes bundled with a wide range of AWS services in addition to S3.
