Creating Different Environments With AWS CloudFormation

Recently, a question popped up on stackoverflow.com asking how to create different environments with AWS CloudFormation. Here, I want to present my answer and give some more information about this topic. The code for this blog post can be found in my GitHub repository, where I also have some more CloudFormation examples.

Different Environments

Before we start, let’s define what “different environments” mean. When developing software, you typically have multiple stages for your software product:

  • a development stage: reflects the current state of development and may be broken at times
  • a pre-production stage: very similar to the production stage (ideally identical), used to test things on a production-like system before going live
  • a production stage: contains the version of your code which is “live” and actually used by customers

A few years ago, when technologies like AWS CloudFormation or even Docker weren’t available, developers created such environments manually. Sometimes they used scripts to automate certain steps. However, they often faced the problem that the stages were not similar enough. Hence, errors and bugs were sometimes only detected after a deployment to production – which is often too late. Services like CloudFormation can reduce this problem if used correctly.

Advantages Of Using CloudFormation

To avoid problems like diverging stages, you can use template files and a service like CloudFormation. Template files contain the definition of your stack. CloudFormation reads these files and creates the resources based on your definition. Automatically. With the same output every time*. That’s the biggest advantage. But there are more.

Creating Environments with CloudFormation

Let’s see how you can use CloudFormation to create different environments. Basically, you need to parameterize your stack name and stack resources. To achieve this, I follow the naming structure [project]-[env]-[resource], e.g. hello-world-dev-my-bucket. The following code shows an example template where the bucket name is parameterized:

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Deploys a simple AWS Lambda using different environments.

Parameters:
  Env:
    Type: String
    Description: The environment you're deploying to.

Resources:
  ServerlessFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: ./
      Policies:
        - AWSLambdaBasicExecutionRole

  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub 'hello-world-${Env}-my-bucket'

You should do this with all of your resources. It helps you identify them, e.g. when using the AWS Console. As you can see here, I didn’t do it for the Lambda function because AWS generates a name automatically for me. But of course, you can apply your naming strategy here as well.
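If you do want the naming scheme on the function too, a sketch could look like the following – note that the explicit FunctionName property and the hello-world project prefix are my assumptions here, not part of the original template:

```yaml
  ServerlessFunction:
    Type: AWS::Serverless::Function
    Properties:
      # Hypothetical explicit name following [project]-[env]-[resource]
      FunctionName: !Sub 'hello-world-${Env}-function'
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: ./
      Policies:
        - AWSLambdaBasicExecutionRole
```

Keep in mind that an explicitly named function must be replaced (not updated in place) when you change certain properties, which is one reason to let AWS generate names for you.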

In the second step, we create a small deploy script which sets a parameterized stack name. This is important: without the parameter, we would just update the same stack again and again.

#!/usr/bin/env bash

LAMBDA_BUCKET="Your-S3-Bucket-Name"
# change this ENV variable depending on the environment you want to deploy
ENV="prd"
STACK_NAME="aws-lambda-cf-environments-${ENV}"

# now package the CloudFormation template, upload SAM artifacts to S3 and deploy it
aws cloudformation package --template-file cfn.yml --s3-bucket ${LAMBDA_BUCKET} --output-template-file cfn.packaged.yml
aws cloudformation deploy --template-file cfn.packaged.yml --stack-name ${STACK_NAME} --capabilities CAPABILITY_IAM --parameter-overrides Env=${ENV}

You can now run the deploy script or enhance it, e.g. by reading the environment from a script parameter. Whatever you do, keep it simple and don’t exceed the maximum size for CloudFormation templates.
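Reading the environment from a script parameter could be sketched like this – the default value dev and the dev/stg/prd whitelist are assumptions for illustration:

```shell
#!/usr/bin/env bash

# Use the first script argument as the environment, defaulting to "dev"
ENV="${1:-dev}"

# Guard against typos: only allow known environments (hypothetical list)
case "${ENV}" in
  dev|stg|prd) ;;
  *) echo "Unknown environment: ${ENV}" >&2; exit 1 ;;
esac

STACK_NAME="aws-lambda-cf-environments-${ENV}"
echo "Deploying stack ${STACK_NAME}"
```

Called as ./deploy.sh prd, this would deploy the production stack; called without arguments, it falls back to dev.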

 

* Well, “every time” is not quite true. Things go wrong, and so do software programs.