Setting up Terraform Cloud to Work with Localstack

Tony Tannous
6 min read · Jan 7, 2021

Localstack has become popular in the DevOps and AWS local testing space.

I was looking to gain some basic experience with Terraform Cloud, using Localstack as the AWS provider target, with minimal or no firewall/router configuration changes. Because Terraform Cloud executes runs remotely, achieving this requires localhost to be reachable from the Internet.

This article is a guide to using the tunnelling service tunnelto.dev to achieve this setup, exposing localhost via a public URL solely for the purposes of sandpit testing.

Signup for a Terraform Cloud Account

HashiCorp Terraform Cloud offers a free account tier with feature limitations. Thankfully this was sufficient for exploration and testing.

To sign up, visit: Terraform Cloud — Signup. After logging into your account, you’ll be asked to provide an organisation name. For the examples throughout the article, developer-poc is used.

The remaining steps required for setting up a workspace will be discussed later in the article.

Run Localstack Instance

It’s assumed you’re running the latest version of Localstack, where access to the AWS services is enabled via a single EDGE port (default 4566). This allows us to access services from a single port as opposed to earlier versions where each service ran on a dedicated port.

If you don’t have Localstack installed, you can use the following
docker-compose.yml file to launch an instance.

docker-compose.yml

version: '3.7'

services:
  localstack:
    container_name: localstack-main
    image: localstack/localstack:latest
    ports:
      - "4566:4566"
    environment:
      - SERVICES=${SERVICES-acm,apigateway,cloudformation,cloudwatch,dynamodb,dynamodbstreams,ec2,es,events,firehose,iam,kinesis,kms,lambda,rds,route53,s3,s3api,secretsmanager,ses,sns,sqs,ssm,stepfunctions,sts}
      - DEBUG=${DEBUG- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR-docker}
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY-0}
      - DOCKER_HOST=unix:///var/run/docker.sock
      - DATA_DIR=/tmp/localstack/data
      - HOST_TMP_FOLDER=${HOST_MNT_ROOT}/tmp
      - TMPDIR=/tmp/localstack/tmp
      - LAMBDA_REMOTE_DOCKER=false
    volumes:
      - "${HOST_MNT_ROOT}/data:/tmp/localstack/data"
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "${HOST_MNT_ROOT}/tmp:/tmp/localstack/tmp"

Once you’ve saved a copy of the file locally, bring up the stack by running:

$ mkdir -p $HOME/localstack/data
$ mkdir $HOME/localstack/tmp
$ HOST_MNT_ROOT=$HOME/localstack docker-compose up -d
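Before moving on, it's worth a quick sanity check that the container is up and responding on the edge port. The following is a minimal sketch; the /health path is the health endpoint exposed by Localstack releases around this time, so adjust it if your version differs:

$ docker ps --filter name=localstack-main
$ curl -s http://localhost:4566/health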

Set Up Tunnel to Localstack Instance

tunnelto.dev provides a free plan which also allows for selection of a unique custom subdomain.

  • Download the latest binary for your OS from the GitHub releases page: https://github.com/agrinman/tunnelto/releases
  • The following example downloads the Linux release that was latest at the time of writing (0.1.12) and installs the binary to $HOME/tunneldev/tunnelto
$ mkdir $HOME/tunneldev
$ cd $HOME/tunneldev
$ wget https://github.com/agrinman/tunnelto/releases/download/0.1.12/tunnelto-linux.tar.gz
$ tar -xvf tunnelto-linux.tar.gz
$ chmod 755 tunnelto
  • The following invocation of tunnelto establishes a tunnel between https://<SUBDOMAIN>.tunnelto.dev and localhost:4566, using a specified subdomain (in this case uniquelocalstack):
$ $HOME/tunneldev/tunnelto --port 4566 --subdomain uniquelocalstack
  • The --subdomain option requests the subdomain uniquelocalstack
  • Note that subdomains are not reserved; a subsequent request for the same name may fail if it has already been allocated
  • The following displays sample output from the above command
⣷ Success! Remote tunnel created on:
https://uniquelocalstack.tunnelto.dev
=> Forwarding to localhost:4566
Local Inspect Dashboard: http://localhost:38047
  • The external URL is: https://uniquelocalstack.tunnelto.dev
  • The output also displays a link to a Local Inspect Dashboard for monitoring incoming requests: http://localhost:38047
  • Make a note of these URLs; they will be referenced in subsequent sections
  • We can now test calls to our Localstack instance via the public URL. Assuming AWS credentials for Localstack are configured under a profile named localstack (a sketch of this profile setup is shown below), we can run the following to list S3 buckets:
$ aws --profile localstack \
--endpoint-url https://uniquelocalstack.tunnelto.dev s3 ls
  • By visiting the URL assigned to the Local Inspect Dashboard, we can see the GET request associated with the s3 ls call:

Local Inspect Dashboard: http://localhost:38047
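The localstack profile referenced above is not part of any repository used here. As a rough sketch, it can be created with dummy values, since Localstack accepts any credentials; the values test and us-east-1 below are arbitrary placeholders:

$ aws configure set aws_access_key_id test --profile localstack
$ aws configure set aws_secret_access_key test --profile localstack
$ aws configure set region us-east-1 --profile localstack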

Configure Sample Terraform Git Repo for Testing

The following sample git repository, to be used for testing, contains basic Terraform code for creating an S3 bucket:
https://github.com/tonys-code-base/terraform-cloud-localstack.git.

  • Fork/copy the repo to your Github account
  • Replace all occurrences of the <TUNNEL URL> placeholder in the AWS provider endpoints block of main.tf with your tunnel URL as noted earlier (a sed one-liner for this is sketched after this list).

main.tf

...
endpoints {
  acm            = "https://<TUNNEL URL>"
  apigateway     = "https://<TUNNEL URL>"
  cloudformation = "https://<TUNNEL URL>"
  cloudwatch     = "https://<TUNNEL URL>"
  dynamodb       = "https://<TUNNEL URL>"
  ec2            = "https://<TUNNEL URL>"
  es             = "https://<TUNNEL URL>"
  firehose       = "https://<TUNNEL URL>"
  iam            = "https://<TUNNEL URL>"
  kinesis        = "https://<TUNNEL URL>"
  kms            = "https://<TUNNEL URL>"
  lambda         = "https://<TUNNEL URL>"
  rds            = "https://<TUNNEL URL>"
  route53        = "https://<TUNNEL URL>"
  s3             = "https://<TUNNEL URL>"
  secretsmanager = "https://<TUNNEL URL>"
  ses            = "https://<TUNNEL URL>"
  sns            = "https://<TUNNEL URL>"
  sqs            = "https://<TUNNEL URL>"
  ssm            = "https://<TUNNEL URL>"
  stepfunctions  = "https://<TUNNEL URL>"
  sts            = "https://<TUNNEL URL>"
}
...
  • Commit and push your changes
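If you'd rather not edit main.tf by hand, the substitution can be done with a one-liner before committing. This is a sketch that assumes GNU sed and the example subdomain uniquelocalstack used earlier (on macOS, sed -i requires an explicit backup suffix, e.g. -i ''):

$ sed -i 's|<TUNNEL URL>|uniquelocalstack.tunnelto.dev|g' main.tf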

Set Up Terraform Cloud

Create Workspace

Now that our Terraform code is configured to use the tunnel, we can go back to Terraform Cloud and start setting up a cloud workspace.

  • Create a new workspace by choosing Workspaces --> New Workspace
  • Choose Version control workflow when prompted for the workflow type
  • For the VCS (Version control provider/source), choose github.com
  • You will be prompted to authenticate with your Github account in order to access your repositories
  • Once authentication completes, choose the repo we just edited, terraform-cloud-localstack
  • The workspace name defaults to terraform-cloud-localstack
  • Select create workspace to trigger the workspace creation/configuration

Execute the Terraform Plan

  • Run the plan by selecting Queue plan
  • Respond to the Confirm and Apply prompt by adding in a comment
  • Finally, choose Confirm Plan
  • Once the Terraform plan has executed successfully, you’ll be able to view the logs and State versions

Verifying Deployment of Plan Components

  • To confirm the bucket has been created on our Localstack instance, we can use either the localhost AWS endpoint URL (an additional s3api check is sketched after this list),
$ aws --profile localstack \
--endpoint-url http://localhost:4566 s3 ls
2021-01-02 08:36:44 sandpit-sample

or by using the public address associated with your tunnel:

$ aws --profile localstack --endpoint-url <TUNNEL URL> s3 ls
2021-01-02 08:36:44 sandpit-sample
  • The Local Inspect Dashboard, as noted earlier, should show the incoming requests arriving via the public address
  • Here is how this appeared for my specific instance at the dashboard URL: http://localhost:38047
  • Subsequent changes to code within the git repository will automatically be reflected in the Cloud workspace
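As an additional check, s3api head-bucket exits successfully only if the bucket exists. This is a sketch using the bucket name sandpit-sample created by the sample repository's Terraform code:

$ aws --profile localstack --endpoint-url http://localhost:4566 \
    s3api head-bucket --bucket sandpit-sample && echo "bucket exists"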

Final Notes

As you might have already gathered, there are security implications of publicly exposing your stack’s endpoint through a tunnelling service provider. For this reason, the recommended use case would be for sandpit testing of components in an environment which does not host sensitive code/data.

It’s also worth noting that Localstack does not currently enforce authentication, i.e. any values supplied for the AWS access key and secret key are accepted.

There are several providers which offer a similar service to tunnelto.dev, including ngrok and localtunnel. These are worth exploring, and adapting them to work in the context of this article should be fairly straightforward.
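For instance, with ngrok installed, something along the following lines should expose the same edge port (not tested as part of this article):

$ ngrok http 4566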

Tony Tannous

Learner. Interests include Cloud and DevOps technologies.