AWS Deployments using Serverless and Localstack
If you’ve interfaced with DevOps teams, chances are you’re well aware of the tools used to manage infrastructure and application releases. Serverless holds a place on this long list.
Having worked with AWS, I became interested in exploring how Serverless fits into this space. This article outlines the lessons learned from setting up Serverless to work with Localstack for the purposes of testing application provisioning.
The aim is to create and deploy a service, helloservice, comprising a basic hello Lambda function.
Installing Serverless
The following command installs Serverless onto a Linux host:
$ curl -o- -L https://slss.io/install | bash
As part of the process, the serverless binary path is appended to the PATH environment variable within ~/.bashrc, so we run the following to refresh our profile:
$ source ~/.bashrc
Create New Service using Serverless Templates
A range of “Hello World” templates is readily available for creating services from scratch.
You can list these by running:
$ serverless create --help
To create our helloservice Python3-based service/stack at path $HOME/helloService, using the template aws-python3, we run the following:
$ cd $HOME
$ serverless create --template aws-python3 --path helloService
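The template also generates a minimal handler.py next to serverless.yml. In the aws-python3 template it looks roughly like the sketch below; the exact wording may vary between Serverless versions.

```python
import json


def hello(event, context):
    """Echo the incoming event back inside a JSON-encoded response body."""
    body = {
        "message": "Go Serverless v1.0! Your function executed successfully!",
        "input": event,
    }

    return {
        "statusCode": 200,
        "body": json.dumps(body),
    }
```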
Configure the Service to use Localstack
1. Install Localstack Plugin
The serverless-localstack plugin needs to be installed to allow deployments to a local Localstack instance.
Assuming you have Node.js/npm installed, add the plugin by running:
$ cd $HOME/helloService
$ npm install --save-dev serverless-localstack
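This records the plugin as a development dependency; package.json should end up containing an entry roughly like the following (the exact version will vary):

```json
{
  "devDependencies": {
    "serverless-localstack": "^0.4.28"
  }
}
```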
2. Configure AWS Credentials
Create an AWS credentials profile named localstack:
$ aws configure --profile localstack
AWS Access Key ID [None]: test
AWS Secret Access Key [None]: test
Default region name [None]: us-east-1
Default output format [None]: json
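The command writes the profile to the AWS CLI configuration files; assuming the default locations, the entries should look roughly like this:

```ini
# ~/.aws/credentials
[localstack]
aws_access_key_id = test
aws_secret_access_key = test

# ~/.aws/config
[profile localstack]
region = us-east-1
output = json
```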
3. Update serverless.yml
The service definition file, $HOME/helloService/serverless.yml, will need to be modified to include the AWS credentials profile, the plugins to be used, and details of our Localstack instance.
Modify the file as follows.
- Add the Localstack plugin details:
...
service: helloservice
...
plugins:
  - serverless-localstack
- Add the AWS profile and stage details as shown below:
provider:
  name: aws
  stage: local
  runtime: python3.8
  profile: localstack
- Add the following custom section to the end of the file:
custom:
  localstack:
    stages:
      # Stages for which the plugin should be enabled
      - local
    host: http://localhost
    edgePort: 4566
    autostart: true
    lambda:
      mountCode: True
    docker:
      sudo: False
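For reference, combining the edits above with the functions section generated by the template, the complete serverless.yml should look roughly like this:

```yaml
service: helloservice

provider:
  name: aws
  stage: local
  runtime: python3.8
  profile: localstack

plugins:
  - serverless-localstack

functions:
  hello:
    handler: handler.hello

custom:
  localstack:
    stages:
      # Stages for which the plugin should be enabled
      - local
    host: http://localhost
    edgePort: 4566
    autostart: true
    lambda:
      mountCode: True
    docker:
      sudo: False
```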
Localstack Docker Compose
The docker-compose Localstack environment used for testing is described next.
1. Host Directories
The following directories were created for volume mounts and for storing docker-compose.yml.
$ mkdir $HOME/localstack
$ mkdir $HOME/localstack/tmp
$ mkdir $HOME/localstack/data
2. docker-compose.yml
The complete $HOME/localstack/docker-compose.yml has the following definition. Note that not all of the AWS services listed in the SERVICES variable are required; they were included for completeness and further discovery work.

$HOME/localstack/docker-compose.yml:
version: '3.7'
# HOST_MNT_ROOT=$HOME/localstack docker-compose up -d
services:
  localstack:
    container_name: localstack-main
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      - SERVICES=${SERVICES-serverless,acm,apigateway,cloudformation,cloudwatch,dynamodb,dynamodbstreams,ec2,es,events,firehose,iam,kinesis,kms,lambda,rds,route53,s3,s3api,secretsmanager,ses,sns,sqs,ssm,stepfunctions,sts}
      - DEBUG=${DEBUG- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-docker}
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY-0}
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DATA_DIR=/tmp/localstack/data
      - HOST_TMP_FOLDER=${HOST_MNT_ROOT}/tmp
      - TMPDIR=/tmp/localstack/tmp
      - LAMBDA_REMOTE_DOCKER=false
    volumes:
      - "${HOST_MNT_ROOT}/data:/tmp/localstack/data"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${HOST_MNT_ROOT}/tmp:/tmp/localstack/tmp"
3. Starting Localstack
Launch Localstack by running:
$ cd $HOME/localstack
$ HOST_MNT_ROOT=$HOME/localstack docker-compose up -d
Deploying the Service to Localstack
Once Localstack is up and running, we can trigger the deployment.
$ cd $HOME/helloService
$ serverless deploy
If all goes well, the following output is returned, confirming creation of the stack helloservice-local for the service helloservice:
Service Information
service: helloservice
stage: local
region: us-east-1
stack: helloservice-local
resources: 6
api keys:
  None
endpoints:
  None
functions:
  hello: helloservice-local-hello
layers:
  None
An interesting observation to note at this point relates to the generated CloudFormation templates located within $HOME/helloService/.serverless. These give an insight into how Serverless works with CloudFormation to deploy the service components.
Invoking “hello” Lambda Function
To invoke our deployed hello function, run the following:
$ serverless invoke -f hello
This should produce the following output:
{
    "body": "{\"message\": \"Go Serverless v1.0! Your function executed successfully!\", \"input\": {}}",
    "statusCode": 200
}
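Note that the body field is itself a JSON-encoded string, so consumers need to decode it a second time; a minimal Python sketch, using the output above:

```python
import json

# Raw output from `serverless invoke -f hello`, as shown above
raw = ('{"body": "{\\"message\\": \\"Go Serverless v1.0! '
       'Your function executed successfully!\\", \\"input\\": {}}", '
       '"statusCode": 200}')

response = json.loads(raw)               # outer document: body + statusCode
payload = json.loads(response["body"])   # inner document: the handler's body

print(payload["message"])  # → Go Serverless v1.0! Your function executed successfully!
```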
Listing/Identifying Stack Components
To list the stack components, we can use the AWS CLI.
1. Retrieve Stack Description:
$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
cloudformation \
describe-stacks \
--stack-name helloservice-local
2. List Stack Resources:
$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
cloudformation \
describe-stack-resources \
--stack-name helloservice-local
Further details for each LogicalResourceId returned by the above can be obtained by running the command below:
$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
cloudformation \
describe-stack-resource \
--stack-name helloservice-local \
--logical-resource-id <LogicalResourceId Value>
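With several resources in a stack, it can help to script that loop; the sketch below pulls the LogicalResourceId values out of the describe-stack-resources JSON. The response shape and resource names in the sample are illustrative assumptions, not taken from a real deployment.

```python
import json

# Assumed shape of the `describe-stack-resources` response;
# the resource entries here are illustrative only.
sample = """
{
  "StackResources": [
    {"LogicalResourceId": "HelloLambdaFunction", "ResourceType": "AWS::Lambda::Function"},
    {"LogicalResourceId": "ServerlessDeploymentBucket", "ResourceType": "AWS::S3::Bucket"}
  ]
}
"""

# Collect every logical id, ready to feed into `describe-stack-resource`
logical_ids = [r["LogicalResourceId"] for r in json.loads(sample)["StackResources"]]
print(logical_ids)
```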
3. Display the Stack CloudFormation Template:
$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
cloudformation \
get-template \
--stack-name helloservice-local
4. View Logs
To view the execution logs of our hello function, start by identifying the log group:
$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
logs describe-log-groups
...
{
    "logGroups": [
        {
            "logGroupName": "/aws/lambda/helloservice-local-hello",
...
From the above, our logGroupName is /aws/lambda/helloservice-local-hello.
Using the logGroupName value, identify the associated logStream:
$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
logs describe-log-streams \
--log-group-name \
/aws/lambda/helloservice-local-hello
...
{
    "logStreams": [
        {
            "logStreamName": "2020/12/17/[LATEST]61a70b90",
...
To view the events for the stream listed above, 2020/12/17/[LATEST]61a70b90:
$ aws --profile localstack \
--endpoint-url http://localhost:4566 \
logs get-log-events \
--log-group-name /aws/lambda/helloservice-local-hello \
--log-stream-name '2020/12/17/[LATEST]61a70b90'
...
{
    "events": [
        {
            "message": "{'statusCode': 200, 'body': '{\"message\": \"Go Serverless v1.0! Your function executed successfully!\"
...
Undeploying/Removing the Stack
To roll back the deployed stack/service, run the following:
$ cd $HOME/helloService
$ serverless remove
Conclusion
This basic introduction offered some insight into the benefits of choosing Serverless for provisioning applications. A lot went on behind the scenes in response to executing a few simple commands to create our service.
If you choose to continue by testing the release on AWS Cloud, refer to the guidelines at Configure Multiple AWS Profiles, which outline methods of working with multiple AWS accounts for the same service. The setup describes defining “stages” to represent environments. The serverless CLI can then be invoked, passing the --stage option with the required runtime environment value.