AWS Deployments using Serverless and Localstack

If you’ve interfaced with DevOps teams, chances are you’re well aware of the tools used to manage infrastructure and application releases. Serverless holds a place on this long list.

Having worked with AWS, I became interested in exploring how Serverless fits into this space. This article outlines lessons learned from setting up Serverless to work with Localstack for the purpose of testing application provisioning.

The aim is to create and deploy a service, helloservice, comprising a basic hello Lambda function.

Installing Serverless

$ curl -o- -L https://slss.io/install | bash

As part of the process, the serverless binary path is appended to the PATH environment variable within ~/.bashrc, so we run the following to refresh our profile.

$ source ~/.bashrc

Create New Service using Serverless Templates

Serverless ships with a set of boilerplate templates for various providers and runtimes. You can list these by running:

$ serverless create --help

To create our helloservice Python3-based service/stack at path $HOME/helloService, using template aws-python3, we run the following:

$ cd $HOME
$ serverless create --template aws-python3 --path helloService
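The template generates a handler.py containing the hello function. A sketch of what its body looks like (consistent with the invoke output shown later in this article):

```python
import json


def hello(event, context):
    """Return a 200 response echoing the input event."""
    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "input": event,
    }

    # API Gateway-style response: the body must be a JSON-encoded string
    return {
        "statusCode": 200,
        "body": json.dumps(body),
    }
```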

Configure the Service to use Localstack

1. Install Localstack Plugin

Assuming you have Node.js/npm installed, add the plugin by running:

$ cd $HOME/helloService
$ npm install --save-dev serverless-localstack

2. Configure AWS Credentials

$ aws configure --profile localstack
AWS Access Key ID [None]: test
AWS Secret Access Key [None]: test
Default region name [None]: us-east-1
Default output format [None]: json
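Localstack does not validate credentials, so the test/test values are placeholders. For reference, the command above writes entries along these lines to ~/.aws/credentials and ~/.aws/config:

```ini
# ~/.aws/credentials
[localstack]
aws_access_key_id = test
aws_secret_access_key = test

# ~/.aws/config
[profile localstack]
region = us-east-1
output = json
```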

3. Update serverless.yml

Modify the file as follows.

  • Add Localstack plugin details:

...
service: helloservice
...

plugins:
  - serverless-localstack

  • Add AWS profile and stage details as shown below:

provider:
  name: aws
  stage: local
  runtime: python3.8
  profile: localstack

  • Add the following custom section to the end of the file:

custom:
  localstack:
    stages:
      # Stages for which the plugin should be enabled
      - local
    host: http://localhost
    edgePort: 4566
    autostart: true
    lambda:
      mountCode: True
    docker:
      sudo: False
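Putting the pieces together, the resulting serverless.yml looks roughly as follows (the functions block is the one generated by the aws-python3 template):

```yaml
service: helloservice

provider:
  name: aws
  stage: local
  runtime: python3.8
  profile: localstack

plugins:
  - serverless-localstack

functions:
  hello:
    handler: handler.hello

custom:
  localstack:
    stages:
      # Stages for which the plugin should be enabled
      - local
    host: http://localhost
    edgePort: 4566
    autostart: true
    lambda:
      mountCode: True
    docker:
      sudo: False
```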

Localstack Docker Compose

1. Host Directories

$ mkdir $HOME/localstack
$ mkdir $HOME/localstack/tmp
$ mkdir $HOME/localstack/data

2. docker-compose.yml

$HOME/localstack/docker-compose.yml :

version: '3.7'
# HOST_MNT_ROOT=$HOME/localstack docker-compose up -d
services:
  localstack:
    container_name: localstack-main
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      - SERVICES=${SERVICES-serverless,acm,apigateway,cloudformation,cloudwatch,dynamodb,dynamodbstreams,ec2,es,events,firehose,iam,kinesis,kms,lambda,rds,route53,s3,s3api,secretsmanager,ses,sns,sqs,ssm,stepfunctions,sts}
      - DEBUG=${DEBUG- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-docker}
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY-0}
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DATA_DIR=/tmp/localstack/data
      - HOST_TMP_FOLDER=${HOST_MNT_ROOT}/tmp
      - TMPDIR=/tmp/localstack/tmp
      - LAMBDA_REMOTE_DOCKER=false
    volumes:
      - "${HOST_MNT_ROOT}/data:/tmp/localstack/data"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${HOST_MNT_ROOT}/tmp:/tmp/localstack/tmp"

3. Starting Localstack

$ cd $HOME/localstack
$ HOST_MNT_ROOT=$HOME/localstack docker-compose up -d

Deploying the Service to Localstack

$ cd $HOME/helloService
$ serverless deploy

If all goes well, the following output is returned, confirming creation of stack helloservice-local for the service, helloservice:

Service Information
service: helloservice
stage: local
region: us-east-1
stack: helloservice-local
resources: 6
api keys:
  None
endpoints:
  None
functions:
  hello: helloservice-local-hello
layers:
  None

An interesting observation to note at this point relates to the generated CloudFormation templates located within $HOME/helloService/.serverless. These give an insight into how Serverless works with CloudFormation to deploy the service components.

Invoking “hello” Lambda Function

$ serverless invoke -f hello

This should produce the following output:

{
    "body": "{\"message\": \"Go Serverless v1.0! Your function executed successfully!\", \"input\": {}}",
    "statusCode": 200
}
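Note that body in the response is itself a JSON-encoded string, so it needs a second decode to get at the message. A quick Python sketch, using the sample response above:

```python
import json

# Sample response as returned by `serverless invoke -f hello`
response = {
    "body": "{\"message\": \"Go Serverless v1.0! "
            "Your function executed successfully!\", \"input\": {}}",
    "statusCode": 200,
}

# The outer structure is already parsed; the body still needs decoding
body = json.loads(response["body"])
print(body["message"])  # Go Serverless v1.0! Your function executed successfully!
```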

Listing/Identifying Stack Components

1. Retrieve Stack Description:

$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
cloudformation \
describe-stacks \
--stack-name helloservice-local

2. List Stack Resources:

$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
cloudformation \
describe-stack-resources \
--stack-name helloservice-local

Further details for each LogicalResourceId returned from the above can be obtained by running the command below:

$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
cloudformation \
describe-stack-resource \
--stack-name helloservice-local \
--logical-resource-id <LogicalResourceId Value>

3. Display the Stack CloudFormation Template:

$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
cloudformation \
get-template \
--stack-name helloservice-local

4. View Logs

$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
logs describe-log-groups

...
{
    "logGroups": [
        {
            "logGroupName": "/aws/lambda/helloservice-local-hello",
...

From the above, our logGroupName is /aws/lambda/helloservice-local-hello.
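The name follows Lambda’s /aws/lambda/&lt;function-name&gt; convention, where Serverless names each function &lt;service&gt;-&lt;stage&gt;-&lt;function&gt;. A small sketch of this naming scheme:

```python
def log_group_name(service: str, stage: str, function: str) -> str:
    """Build the CloudWatch log group name for a Serverless-managed Lambda."""
    return f"/aws/lambda/{service}-{stage}-{function}"


print(log_group_name("helloservice", "local", "hello"))
# /aws/lambda/helloservice-local-hello
```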

Using the logGroupName value, identify the associated logStream:

$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
logs describe-log-streams \
--log-group-name \
/aws/lambda/helloservice-local-hello

...
{
    "logStreams": [
        {
            "logStreamName": "2020/12/17/[LATEST]61a70b90",
...

To view events for the stream listed above (2020/12/17/[LATEST]61a70b90):

$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
logs get-log-events \
--log-group-name /aws/lambda/helloservice-local-hello \
--log-stream-name 2020/12/17/[LATEST]61a70b90

...
{
    "events": [
        {
            "message": "{'statusCode': 200, 'body': '{\"message\": \"Go Serverless v1.0! Your function executed successfully!\"
...

Undeploying/Removing the Stack

$ cd $HOME/helloService
$ serverless remove

Conclusion

If you choose to go on and release onto AWS Cloud, refer to the guidelines at Configure Multiple AWS Profiles, which outline how to work with multiple AWS accounts for the same service. That setup defines “stages” to represent environments; the serverless CLI can then be invoked with the --stage option set to the required environment.

