Deploy AWS Lambda Functions to Local/Cloud Environments with Jenkins and Terraform

Tony Tannous
11 min read · Mar 24, 2021


In a previous post, I documented learnings from setting up a local Devops environment.

This article applies a potential use case for that environment by adding support for Terraform. The following overview of the goals should help you decide whether to read on:

Tweak/expand the local Devops environment, mentioned above, to include support for Terraform and Localstack.

Use the environment to create a Continuous Delivery Jenkins pipeline to deploy a Python-based Lambda function to Localstack and AWS Cloud.

The Lambda function is to be triggered by an S3 event, in response to a csv file being uploaded to an input bucket. Once triggered, the Lambda processes the file and exports results to an output bucket.
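For orientation, here is a minimal sketch of what such a handler could look like. The OUTPUT_BUCKET environment variable and the `_transposed` output suffix come from later sections of the article; the rest is an assumption — the actual implementation lives in `my-lambda/src/my-lambda.py` in the repo and may differ in detail.

```python
# Hypothetical sketch of the handler in my-lambda/src/my-lambda.py;
# the real implementation is in the repository and may differ.
import io
import os

import pandas as pd


def transpose_csv(text: str) -> str:
    """Transpose a CSV document using a Pandas DataFrame."""
    df = pd.read_csv(io.StringIO(text))
    # Rows become columns; the original header becomes the first output column.
    return df.T.to_csv(header=False)


def lambda_handler(event, context):
    import boto3  # available in the Lambda runtime

    s3 = boto3.client("s3")
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    # Read the uploaded csv, transpose it, and write the result
    # to the output bucket configured via environment variable.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    out_key = key.rsplit(".", 1)[0] + "_transposed.csv"
    s3.put_object(
        Bucket=os.environ["OUTPUT_BUCKET"],
        Key=out_key,
        Body=transpose_csv(body).encode("utf-8"),
    )
    return {"output_key": out_key}
```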

The key topics and learning objectives are:

  • Use docker-compose to bring up a stack comprising three networked containers running the following services:
    ◽ Gogs — locally hosted Git SCM
    ◽ Localstack — mock AWS environment
    ◽ Jenkins node — CI/CD
  • Using the stack, create a Jenkins Continuous Delivery pipeline to package and deploy a sample Python-based Lambda function using Terraform
  • Configure the pipeline to deploy the Lambda to:
    ◽ Localstack
    ◽ AWS Cloud environment, deployed only after interactive prompt confirmation
  • The Lambda’s primary purpose is to load a csv file from an input bucket and, using a Pandas DataFrame, transpose the contents into a given output bucket
  • Configure the Lambda function to be triggered by an s3:ObjectCreated event, attached to the input bucket configuration
  • Set up a Gogs Webhook to automatically trigger the Jenkins job when changes are pushed to the locally hosted SCM
  • Test the deployed Lambda function by uploading sample file to the input bucket

All components required to build the final framework are included in a git repo, which forms the basis for the setup.

Preparing Local Environment with docker-compose

Clone Git Repository

Clone the repository mentioned above from GitHub. This repository will eventually make its way onto our local, self-hosted Git SCM, aka Gogs.

$ cd $HOME
$ git clone \
    https://github.com/tonys-code-base/jenkins_deploy_lambda_terraform.git \
    jenkins-deploy-lambda-terraform
$ cd jenkins-deploy-lambda-terraform
$ rm -fr .git

Create Host Directories

Create host directories to be used for persisting data from container mounts.

$ mkdir -p /tmp/gogs/data
$ mkdir -p /tmp/jenkins_home
$ mkdir -p /tmp/localstack

Build Custom Jenkins Docker Image

Build the custom Jenkins docker image, which includes the AWS CLI, Terraform, and a Python distribution.

$ cd $HOME/jenkins-deploy-lambda-terraform
$ docker build -t jenkins/jenkins:master .

Bring up the Stack

Using docker-compose, bring up the stack using:

$ AWS_CRED=$HOME/.aws TMPDIR=/tmp/localstack docker-compose up -d

This brings up 3 containers, accessible from the host via the following endpoints:

Gogs: http://localhost:10080
Jenkins: http://localhost:8085
Localstack: http://localhost:4566

or, if accessing from within the docker-compose internal network:

Gogs: http://github_mock:3000
Jenkins: http://jenkins_master:8080
Localstack: http://localstack:4566

Note on AWS Credentials

In the command used to bring up the stack, AWS_CRED is set to the host location of the AWS config/credentials folder ($HOME/.aws), which is mounted into the home directory (/var/jenkins_home) of the user running inside the Jenkins container.

version: '3.7'
services:
  ...
  jenkins_master:
    ...
    volumes:
      ...
      - ${AWS_CRED}:/var/jenkins_home/.aws

Two profiles should be added to the host’s $HOME/.aws/config file to support deployment to each target environment, i.e. Localstack (aws_local) and AWS Cloud (aws_cloud).

$HOME/.aws/config:

[profile aws_local]
region = us-east-1
output = json
aws_access_key_id = test
aws_secret_access_key = test

[profile aws_cloud]
region = us-east-1
output = json
aws_access_key_id = <your AWS key>
aws_secret_access_key = <your AWS secret>

Configure Gogs SCM and Upload Code Repository

Configure Gogs — Locally Hosted Git

To configure Gogs as a local Git SCM host, visit http://localhost:10080 and complete the configuration. Refer to the Appendix for config values. The remainder of the article uses these values.

Once Gogs has been configured, we can create an empty repository to host our source.

Create Git Repository on Gogs Host and Push Code

From http://localhost:10080, create an empty repository named jenkins-deploy-lambda-terraform, leaving "Initialise this repository with selected files and template" unticked.

Initialise the local repo we cloned earlier and push it to our self-hosted remote.
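Assuming the Gogs user and repository names used throughout this article, the commands might look like this (note the push requires the Gogs container to be up):

```shell
# Initialise the cloned source as a fresh repo and push it to Gogs
cd "$HOME/jenkins-deploy-lambda-terraform"
git init
git add .
git commit -m "initial commit"
git remote add origin http://localhost:10080/git-user/jenkins-deploy-lambda-terraform.git
git push -u origin master
```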

The repository should be accessible via:

http://localhost:10080/git-user/jenkins-deploy-lambda-terraform

Overview of Repository Components

The following is a listing of the repository source.

.
├── Dockerfile
├── Jenkinsfile
├── Pipfile
├── Pipfile.lock
├── my-lambda
│   ├── iam
│   │   ├── lambda_iam_policy.json
│   │   └── lambda_iam_trust_policy.json
│   └── src
│       └── my-lambda.py
├── lambda-input-test-file.csv
├── aws_local
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
├── aws_cloud
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
└── docker-compose.yml

The main components are:

  • my-lambda/*: Lambda source code and IAM policy definitions
  • lambda-input-test-file.csv: test input file for the Lambda function
  • aws_local/*: Terraform components for deployment to Localstack
  • aws_cloud/*: Terraform components for deployment to an AWS Cloud account
  • Jenkinsfile: contains the deployment pipeline

Walkthrough of Terraform Code for Lambda Deployment

The folder structures hosting the Terraform code, aws_local and aws_cloud, are almost identical, so only one of them (aws_local) is covered in the walkthrough. The differences mainly relate to provider configuration, i.e. Localstack vs AWS Cloud.

aws_local/variables.tf

This file contains the Terraform variables along with comments describing their purpose.

# AWS Region

variable "aws_region" {
  type    = string
  default = "us-east-1"
}

# Application name to include in names of AWS resources

variable "app_name" {
  type    = string
  default = "transposer"
}

# AWS Account (for Localstack, value is zeroes)

variable "aws_account" {
  type    = string
  default = "000000000000"
}

# AWS profile to source credentials

variable "aws_profile" {
  type    = string
  default = "aws_local"
}

# Source name and location containing Lambda zip.
# Zip is created during the Jenkins pipeline.

variable "lambda_zip" {
  type    = string
  default = "../dist/src.zip"
}

# Deployment target - AWS Cloud (aws_cloud)
# or Localstack (aws_local)

variable "env" {
  description = "Env - localstack or cloud"
  type        = string
  default     = "aws_local"
}
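As a side note, these defaults can be overridden without editing variables.tf: Terraform automatically loads a terraform.tfvars file from the working directory. The values below are hypothetical examples, not taken from the repository.

```hcl
# aws_local/terraform.tfvars (hypothetical override values)
app_name   = "transposer-dev"
aws_region = "us-east-1"
```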

aws_local/main.tf

Contains Terraform resources required for deploying the Lambda. Comments/descriptions are included for each resource.

# Localstack provider configuration

provider "aws" {
  region                      = var.aws_region
  profile                     = var.aws_profile
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true
  insecure                    = true

  endpoints {
    acm            = "https://localstack:4566"
    apigateway     = "https://localstack:4566"
    cloudformation = "https://localstack:4566"
    cloudwatch     = "https://localstack:4566"
    dynamodb       = "https://localstack:4566"
    ec2            = "https://localstack:4566"
    es             = "https://localstack:4566"
    firehose       = "https://localstack:4566"
    iam            = "https://localstack:4566"
    kinesis        = "https://localstack:4566"
    kms            = "https://localstack:4566"
    lambda         = "https://localstack:4566"
    rds            = "https://localstack:4566"
    route53        = "https://localstack:4566"
    s3             = "https://localstack:4566"
    secretsmanager = "https://localstack:4566"
    ses            = "https://localstack:4566"
    sns            = "https://localstack:4566"
    sqs            = "https://localstack:4566"
    ssm            = "https://localstack:4566"
    stepfunctions  = "https://localstack:4566"
    sts            = "https://localstack:4566"
  }
}

# Create IAM role with trust relationship for lambda service

resource "aws_iam_role" "iam_role_lambda" {
  name = var.app_name

  assume_role_policy = file("${path.module}/../my-lambda/iam/lambda_iam_trust_policy.json")
}

# IAM policy template

data "template_file" "lambda_iam_policy" {
  template = file("${path.module}/../my-lambda/iam/lambda_iam_policy.json")
  vars = {
    app_name    = var.app_name
    aws_region  = var.aws_region
    aws_account = var.aws_account
  }
}

# Create the policy

resource "aws_iam_policy" "iam_policy" {
  name        = var.app_name
  path        = "/"
  description = "IAM policy for lambda"
  policy      = data.template_file.lambda_iam_policy.rendered
}

# Attach policy to IAM role

resource "aws_iam_role_policy_attachment" "policy_for_lambda" {
  role       = aws_iam_role.iam_role_lambda.name
  policy_arn = aws_iam_policy.iam_policy.arn
}

# Input bucket used by Lambda

resource "aws_s3_bucket" "in_bucket" {
  bucket = "${var.app_name}-input"
}

# Output bucket used by lambda

resource "aws_s3_bucket" "out_bucket" {
  bucket = "${var.app_name}-output"
}

# Create Lambda function

resource "aws_lambda_function" "func" {
  # var.lambda_zip already includes the ../dist/ path
  filename      = var.lambda_zip
  function_name = var.app_name
  role          = aws_iam_role.iam_role_lambda.arn
  handler       = "my-lambda.lambda_handler"

  depends_on = [
    aws_iam_role_policy_attachment.policy_for_lambda
  ]

  source_code_hash = filebase64sha256(var.lambda_zip)

  runtime = "python3.8"

  # Lambda function environment variables

  environment {
    variables = {
      OUTPUT_BUCKET     = "${var.app_name}-output"
      DEPLOYMENT_TARGET = var.env
    }
  }
}

# Add permissions to allow s3 to trigger lambda function

resource "aws_lambda_permission" "allow_s3_trigger" {
  statement_id  = "AllowTriggerFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.func.arn
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.in_bucket.arn
}

# s3 event config:
# Configure bucket notification for triggering lambda
# when s3 csv file uploaded to input bucket

resource "aws_s3_bucket_notification" "bucket_notification" {
  bucket = aws_s3_bucket.in_bucket.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.func.arn
    id                  = "s3_trigger"
    events              = ["s3:ObjectCreated:*"]
    filter_suffix       = "csv"
  }

  depends_on = [aws_lambda_permission.allow_s3_trigger]
}

Configuring Terraform Variables for AWS Cloud Deployment

Before deployment to AWS Cloud, aws_cloud/variables.tf needs to be updated to reflect details of the target AWS Account.

aws_cloud/variables.tf

Modify this file, replacing xxxxxxxxxxxx with the target AWS Cloud account:

variable "aws_account" {
  type    = string
  default = "xxxxxxxxxxxx"
}

Jenkins Configuration and Deployment Pipeline

Configure Jenkins and Install Gogs Plugin

Visit http://localhost:8085 and complete the Jenkins setup.

Once complete, add the Gogs plugin via Jenkins' Plugin Manager. This plugin is required to establish Webhooks between Gogs and Jenkins.

Create New Jenkins Job

Create a new Jenkins job:

  • New Item
  • Enter an item name → jenkins-deploy-lambda-terraform
  • Choose Pipeline for job type

On the job configuration screen, select/enter the following values:

Gogs Webhook

  • Enable option → Use Gogs secret
  • Secret → <choose a password> (example, mysecretlambda)
    This will be required in the following section, when setting up the Webhook from within Gogs

Build Triggers

  • Enable option → Build when a change is pushed to Gogs

Pipeline

  • Definition → Pipeline script from SCM
  • SCM → Git
  • Repository URL
http://github_mock:3000/git-user/jenkins-deploy-lambda-terraform.git

Note: we use the container’s internal host and port in the repo URL

  • Credentials → Add → Jenkins → Kind → Username with password
    ◽ Username → git-user
    ◽ Password → <your password as chosen during Gogs config>
    ◽ ID → git-creds

Note: username and password should correspond to your Gogs git username and password, as created during the configuration of Gogs (if you've used the values mentioned in the Appendix, then username is git-user)

  • Branches to build → */master
  • Script Path → Jenkinsfile

Configure Gogs Webhook

To configure the Gogs Webhook for the new job,
jenkins-deploy-lambda-terraform:

  • Head back to http://localhost:10080
  • Go to the repository’s Settings and choose Webhooks from the left panel
  • For Payload URL, the format is as follows:

http://<jenkins_host>/gogs-webhook/?job=<jenkins_job_name>

which resolves to,

http://jenkins_master:8080/gogs-webhook/?job=jenkins-deploy-lambda-terraform

Again, we use the container’s internal network host and port in the URL.

  • Content Type → application/json
  • Secret → <secret for the webhook>
    The secret must match what was entered during the Jenkins job setup for the Gogs Webhook (e.g. mysecretlambda)
  • When should this webhook be triggered? Just the push event
  • Add Webhook

Once complete, any changes pushed to the repository should automatically trigger the Jenkins job.

We can perform a test from Gogs to ensure the Webhook has been correctly configured. Gogs offers a Test Delivery option which triggers the job on request.

Test Gogs->Jenkins Webhook

To test the Webhook,

  • Go to Gogs: http://localhost:10080
  • Go to Settings for repo, jenkins-deploy-lambda-terraform
  • Under Webhooks, choose the Webhook we just configured
  • Click Test Delivery
  • This should trigger the Jenkins job
  • Go back to Jenkins http://localhost:8085 and view the build history for our job
  • The job should automatically deploy the Lambda to Localstack (i.e. the first 4 stages execute automatically)
  • The very last stage (stage 5) deploys the Lambda to the configured AWS Cloud account only when Proceed is selected in response to an interactive prompt. The stage aborts automatically if no response is received within the timeout duration (60 sec)
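The repository's Jenkinsfile is not reproduced here, but a declarative stage implementing this prompt-and-timeout pattern might look like the following sketch (the stage name and shell step are assumptions, not taken from the repo):

```groovy
stage('Deploy to AWS Cloud') {
    steps {
        // Abort automatically if nobody responds within 60 seconds
        timeout(time: 60, unit: 'SECONDS') {
            input message: 'Deploy to AWS Cloud?', ok: 'Proceed'
        }
        dir('aws_cloud') {
            sh 'terraform init && terraform apply -auto-approve'
        }
    }
}
```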

Test Deployed Lambda using Localstack

With the Lambda successfully deployed to Localstack, we run a test using the supplied input file (contains 3 sample rows with 5 columns). To trigger the Lambda, copy the file to the S3 input bucket:

$ cd $HOME/jenkins-deploy-lambda-terraform
$ aws --profile aws_local \
    --endpoint-url http://localhost:4566 \
    s3 cp lambda-input-test-file.csv \
    s3://transposer-input/lambda-input-test-file.csv

This should trigger the function in a separate container. The container is not removed after execution completes, which allows errors to be troubleshot by examining the container logs.
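To inspect a failed run, you can look at the executor container Localstack spawned; the exact container name pattern varies by Localstack version, so the `lambda` filter below is an assumption:

```shell
# List recently created containers whose names mention "lambda"
docker ps -a --format '{{.Names}}\t{{.Status}}' | grep -i lambda

# Show the logs of the most recent matching container
docker logs "$(docker ps -aq --filter name=lambda | head -n 1)"
```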

If the Lambda executes successfully, a transposed version of the input file should be located at
s3://transposer-output/lambda-input-test-file_transposed.csv.

A quick look at the contents should show 5 rows, 3 columns:

$ aws --profile aws_local \
    --endpoint-url http://localhost:4566 \
    s3 cp \
    s3://transposer-output/lambda-input-test-file_transposed.csv -

Output:

column1,1,6
column2,2,7
column3,3,8
column4,4,9
column5,5,10

From here on, Lambda code changes applied to the local repo and pushed to the self-hosted Git instance will automatically trigger the Jenkins pipeline for redeployment.

Deploying to AWS Cloud

Once the Lambda has been tested successfully on Localstack, it can be deployed to an AWS Cloud account for further testing by responding with "Proceed" to the interactive prompt in the last stage of the Jenkins pipeline. This additional testing targets aspects that cannot practically be tested in a local environment.

Conclusion

So what do we gain from all this?

My personal take-home points from having gone through the set up:

  • An understanding/appreciation of technologies used by Devops
  • An insight into Terraform
  • Having my own local environment which allows for more destructive testing
  • The typical AWS resources associated with creating and implementing a Lambda
  • Working with Lambda S3 event triggers
  • Packaging Python-based Lambdas containing non-standard libraries
  • Reducing AWS costs by using Localstack for the majority of testing and keeping Cloud usage to a minimum
  • Minimising manual tasks by incorporating automation tweaks — such as the Gogs Webhook

Why did you use Terraform and not Ansible?

Some may argue that Ansible is a more suitable option for the task, and there are no doubt several arguments for and against Ansible vs Terraform. This article focused on learning the "how to" of Terraform rather than the "why nots".

Appendix

Configure Gogs Git Service

Visit http://localhost:10080/ and complete the initial setup. Below are sample config values.

After selecting Install Gogs, a redirect to http://localhost:3000/user/login occurs. However, since the container's internal port 3000 is mapped to host port 10080, you need to change the port in the URL to 10080.

Sign up

The first user to go through the signup process is allocated as Administrator. The example below shows sign-up of the initial user, git-user.


Tony Tannous

Learner. Interests include Cloud and Devops technologies.