Since its inception, Amazon Web Services (AWS) has been the front-runner in the cloud-computing landscape, and the on-demand nature of cloud computing is epitomized by a service like AWS Lambda. AWS Lambda is a serverless technology that provides on-demand compute for running your code as functions in the AWS cloud environment.
Provisioning and configuring cloud infrastructure has been optimized through the use of infrastructure as code (IaC) tools like Terraform, an open-source tool that creates infrastructure in cloud environments like AWS. Terraform makes creating resources, like AWS Lambda functions, more efficient, predictable, and reliable.
The cloud and automation technology models offered by AWS and Terraform, respectively, complement each other well. Modern DevOps practices have incorporated these two paradigms to streamline the process of delivering software automatically and offering scalable solutions for end users.
In this post, you will learn what serverless architecture is, the benefits of using Terraform, and how to create AWS Lambda functions using Terraform.
What Is Serverless Architecture?
Serverless architecture doesn’t mean that there is no underlying infrastructure for software applications to run on. Instead, it means that software developers don’t have to provision, manage, or maintain the relevant servers for their applications to run on. Rather, infrastructure management is taken care of by a compute-service cloud provider like AWS.
In a serverless model, AWS assumes full management of the underlying infrastructure to take away any management overhead. All that software teams have to worry about is their application.
Characteristics and Benefits of AWS Lambda
AWS Lambda is a serverless compute service that runs code without having to provision or manage the underlying infrastructure for the code.
Let’s take a quick look at some additional characteristics of the AWS Lambda service.
- Event-driven. Lambda functions are triggered by events that software developers configure, and you pay only for the compute used while a function runs, which makes the model cost-effective.
- Scaling capabilities. Lambda rapidly creates as many copies of the function as needed based on demand.
- Language support. AWS Lambda has native support for a number of programming languages (Node.js, Java, C#, Python, Ruby, PowerShell, and Go), and it also provides a Runtime API that enables users to use any other programming language.
Characteristics and Benefits of Terraform
Using Terraform to provision your infrastructure has a number of benefits; we’ll cover a few key ones before diving into the tutorial.
- Declarative model. Terraform lets you define the desired end state of your infrastructure and automates the intermediary steps, making the necessary API calls under the hood to create it (see the short example after this list).
- Efficiency and reliability. Terraform issues the underlying API requests in the correct sequence to reach the desired state, and lets you declare prerequisite dependencies between resources when ordering matters.
- Repeatability. Because the declarative configuration fully describes the infrastructure, any team member with access to the IaC can reproduce the same environment, making the entire provisioning lifecycle repeatable.
- Versioning. IaC enables software teams to practice versioning as they typically would, in line with best practices of software development. Teams can maintain snapshots of versions as infrastructure architecture changes, and perform rollbacks if necessary.
- Documentation. Terraform in particular has a simple, readable syntax for declaring resources. As a result, the code becomes a form of documentation that helps various stakeholders understand the entire infrastructure landscape.
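To make the declarative model concrete, here is a minimal sketch of what a Terraform declaration looks like (the bucket name is a made-up example). You describe the resource you want, and Terraform works out the API calls needed to create it.
resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket" # hypothetical name; S3 bucket names must be globally unique
  tags = {
    Environment = "dev"
  }
}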
Use Cases for AWS Lambda
Since AWS Lambda functions are stateless and event-driven, they’re particularly versatile for a number of scenarios. Common use cases in the context of AWS environments include:
- Responding to changes in S3 buckets (a brief Terraform sketch of this case follows the list)
- Processing updates to DynamoDB tables
- Running tasks triggered by CloudWatch Events
- Building serverless backends with API Gateway
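As an illustration of the first use case, the sketch below shows roughly how an S3 event can trigger a Lambda function in Terraform. The aws_s3_bucket.uploads and aws_lambda_function.processor references are hypothetical resources assumed to be defined elsewhere in your configuration; this is a sketch rather than part of this tutorial's code.
# Allow S3 to invoke the hypothetical function
resource "aws_lambda_permission" "allow_s3" {
  statement_id  = "AllowExecutionFromS3"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.processor.function_name
  principal     = "s3.amazonaws.com"
  source_arn    = aws_s3_bucket.uploads.arn
}

# Invoke the function whenever an object is created in the bucket
resource "aws_s3_bucket_notification" "uploads" {
  bucket = aws_s3_bucket.uploads.id

  lambda_function {
    lambda_function_arn = aws_lambda_function.processor.arn
    events              = ["s3:ObjectCreated:*"]
  }

  depends_on = [aws_lambda_permission.allow_s3]
}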
Implementing AWS Lambda Using Terraform
In this section, you will create and deploy a Lambda function using Terraform. The Lambda function will be fronted by an API Gateway resource to expose the function as an API. Once those resources are ready, you will create a React.js application to make client requests to your Lambda function.
All the source code for the following tutorial can be found here.
The AWS Lambda function will receive an event status based on a user's selection on a website and return a GIF that matches the selected status.
Prerequisites
First, make sure you have the following prerequisites in place:
- An AWS account
- An AWS profile configured with the AWS CLI on your local machine
- Terraform installed
- Node.js and npm installed
Folder Structure
Create a directory called iac where all your Terraform source code will reside. Inside this directory, create the following files and folders with a similar structure:
├── lambda
│   ├── api_gateway.tf
│   ├── function_src
│   ├── iam.tf
│   ├── lambda.tf
│   └── vars.tf
├── main.tf
├── provider.tf
├── sensitive.tfvars
└── variables.tf
Create S3 Bucket for Terraform State
To keep track of your infrastructure changes, Terraform stores the details of your live or deployed infrastructure configurations in a JSON file known as a state file. This file can either be persisted locally in your project directory or remotely (known as a backend state).
The latter is generally the better option, with the state file stored in a cloud storage solution like an S3 bucket. Whenever you want to apply changes to your infrastructure for a particular project, Terraform will reference the relevant state file and update it accordingly once your changes are live.
To configure remote backend state for your infrastructure, create an S3 bucket.
aws s3api create-bucket --bucket <bucket-name> --region <region> --create-bucket-configuration LocationConstraint=<region>
Configure Terraform Provider and Backend State
Here, you’ll populate the provider.tf file with cloud provider details and the remote backend state configuration. The resources you are creating will be in the AWS environment, so the provider details will be set up as such.
provider "aws" {
region = var.region
profile = var.profile
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 3.0"
}
}
backend "s3" {
bucket = "your-bucket-name"
key = "arbitrary-key-for-state-file/terraform.tfstate"
region = "eu-west-1"
encrypt = true
}
}
Declare the var.region and var.profile variables in the variables.tf file in the root directory, and populate them in the sensitive.tfvars file. In addition to these, create a variable for your AWS account ID, which will be used in a later section.
Populate variables.tf
These variables are stored in the root location of the project because they have a global context for the Terraform source code.
variable "profile" {
description = "AWS profile"
type = string
}
variable "region" {
description = "AWS region to deploy to"
type = string
}
variable "accountId" {
description = "AWS Account ID"
type = number
}
Populate sensitive.tfvars
This file populates the global variables with values that may be considered sensitive, hence the file name. This file should never be committed to source control.
profile="your-aws-profile" # your AWS profile configured with the CLI on your workstation
region="eu-west-1" # your desired aws region
accountId=123456789012 # your 12-digit AWS account number
Create AWS Lambda Function and API Gateway
Here, you’ll create an AWS Lambda resource, its function, and an API Gateway integration to expose your function as an API.
Create Node.js Function for Lambda
Create a basic JavaScript function for a Node.js runtime environment in your Lambda. As mentioned above, this function will receive an event with a particular status and generate a response with a particular GIF based on the status. The different statuses that the Lambda function will respond to are:
- STARTED
- SUCCEEDED
- FAILED
- CANCELED
Inside the function_src folder, create an index.js file with the code below.
const STATUS_GIF = {
started: 'https://media.giphy.com/media/tXLpxypfSXvUc/giphy.gif', // rocket launching
succeeded: 'https://media.giphy.com/media/MYDMiSizWs5sjJRHFA/giphy.gif', // michael jordan celebrating
failed: 'https://media.giphy.com/media/d2lcHJTG5Tscg/giphy.gif', // anthony anderson crying
canceled: 'https://media.giphy.com/media/IzXmRTmKd0if6/giphy.gif', // finger pressing abort button
}
module.exports.handler = (event, context, callback) => {
try {
let gif_url;
const requestedState = JSON.parse(event.body);
const state = requestedState.state;
console.log("state:", state);
switch (state) {
case 'STARTED':
gif_url = STATUS_GIF.started
break;
case 'SUCCEEDED':
gif_url = STATUS_GIF.succeeded
break;
case 'FAILED':
gif_url = STATUS_GIF.failed
break;
case 'CANCELED':
gif_url = STATUS_GIF.canceled
break;
default:
gif_url = STATUS_GIF.started
break;
}
const response = {
statusCode: 200,
headers: {
'Content-Type': 'application/json',
"Access-Control-Allow-Origin" : "*"
},
body: JSON.stringify({ "url": gif_url }),
}
callback(null, response)
  } catch (err) {
    console.log("error:", err);
    // Return a well-formed error response so the client receives a status code instead of an ignored string
    callback(null, { statusCode: 500, headers: { "Access-Control-Allow-Origin": "*" }, body: JSON.stringify({ "error": "Something went wrong" }) });
  }
}
Populate lambda.tf
Declare the relevant Lambda resources: the function itself, a reference to the JavaScript source code created in the previous section (packaged as a zip archive), and a permission resource that allows your Lambda to be triggered or invoked by API Gateway.
data "archive_file" "lambda_zip" {
type = "zip"
source_file = "lambda/function_src/index.js"
output_path = "lambda/function_src/index.zip"
}
resource "aws_lambda_function" "main" {
filename = data.archive_file.lambda_zip.output_path
function_name = var.function_name
role = aws_iam_role.iam_for_lambda.arn
handler = var.handler
source_code_hash = data.archive_file.lambda_zip.output_base64sha256
runtime = var.runtime
}
resource "aws_lambda_permission" "apigw_lambda" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.main.function_name
principal = "apigateway.amazonaws.com"
source_arn = "arn:aws:execute-api:${var.region}:${var.accountId}:${aws_api_gateway_rest_api.api.id}/*/${aws_api_gateway_method.method.http_method}${aws_api_gateway_resource.resource.path}"
}
Populate api_gateway.tf
Declare the resources required to create an API Gateway resource with a particular stage. API Gateway will proxy any incoming requests to the Lambda function.
resource "aws_api_gateway_rest_api" "api" {
name = "gif-api"
}
resource "aws_api_gateway_resource" "resource" {
path_part = "resource"
parent_id = aws_api_gateway_rest_api.api.root_resource_id
rest_api_id = aws_api_gateway_rest_api.api.id
}
resource "aws_api_gateway_method" "method" {
rest_api_id = aws_api_gateway_rest_api.api.id
resource_id = aws_api_gateway_resource.resource.id
http_method = "ANY"
authorization = "NONE"
}
resource "aws_api_gateway_integration" "integration" {
rest_api_id = aws_api_gateway_rest_api.api.id
resource_id = aws_api_gateway_resource.resource.id
http_method = aws_api_gateway_method.method.http_method
integration_http_method = "POST"
type = "AWS_PROXY"
uri = aws_lambda_function.main.invoke_arn
}
resource "aws_api_gateway_deployment" "main" {
rest_api_id = aws_api_gateway_rest_api.api.id
stage_name = var.environment
depends_on = [aws_api_gateway_integration.integration]
variables = {
# just to trigger redeploy on resource changes
resources = join(", ", [aws_api_gateway_resource.resource.id])
# note: redeployment might be required with other gateway changes.
# when necessary run `terraform taint <this resource's address>`
}
lifecycle {
create_before_destroy = true
}
}
Populate iam.tf
Declare the permissions for your Lambda resource by creating a service role that has a trust relationship with AWS Lambda. Also, create a policy that outlines the permissions that your resource will have.
You’ll notice that the permissions enable your Lambda function to push logs to CloudWatch. This is especially helpful for gaining visibility and insight into your function's invocations.
resource "aws_iam_role" "iam_for_lambda" {
name = var.lambda_role
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
EOF
}
resource "aws_iam_role_policy" "iam_for_lambda" {
name = var.lambda_policy
role = aws_iam_role.iam_for_lambda.id
policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:PutLogEvents",
"xray:Put*"
],
"Resource": "*"
},
{
"Effect":"Allow",
"Action": [
"cloudwatch:*",
"iam:PassRole"
],
"Resource": "*"
}
]
}
EOF
}
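Optionally, you could also manage the function's log group in Terraform so that you control its retention period rather than letting Lambda create a log group that keeps logs forever. The snippet below is a sketch and not part of the tutorial's source code; it assumes it lives alongside the other files in the lambda folder, where var.function_name is already defined.
resource "aws_cloudwatch_log_group" "lambda" {
  # Lambda writes to a log group named after the function
  name              = "/aws/lambda/${var.function_name}"
  retention_in_days = 14
}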
Populate vars.tf
Add all the relevant variable declarations for the above Lambda resources. These variables are populated either by their default values here or at the module level in the following section.
variable "function_name" {
description = "The function name"
type = string
}
variable "handler" {
description = "The handler name"
type = string
default = "index.handler"
}
variable "runtime" {
description = "The runtime environment for the Lambda function"
type = string
default = "nodejs12.x"
}
variable "lambda_role" {
description = "The name of the iam role"
type = string
}
variable "lambda_policy" {
description = "The name of the policy"
type = string
}
variable "region" {
description = "AWS region to deploy to"
type = string
}
variable "accountId" {
description = "AWS Account ID"
type = number
}
variable "environment" {
description = "The application environment"
type = string
}
Create Terraform Module for AWS Lambda and API Gateway Resources
Create a module for the resources declared in the lambda folder. A module is a logical grouping of related resources in Terraform; it can have a local source or a remote source (e.g., a Git repository). In this case, your module source is a local directory and will be referenced as such.
Populate main.tf
module "lambda_for_gifs" {
source = "./lambda"
function_name = "lambda-display-gif"
lambda_role = "lambda-for-gif-role"
lambda_policy = "lambda-for-gif-policy"
environment = "dev"
region = var.region
accountId = var.accountId
}
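For comparison, if this module lived in a Git repository rather than a local folder, only the source argument would change. The repository URL below is hypothetical.
module "lambda_for_gifs" {
  source = "git::https://github.com/your-org/terraform-modules.git//lambda?ref=v1.0.0"
  # ...same input variables as above
}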
Create Infrastructure with Terraform
Here, you’ll provision the declared infrastructure in your AWS account.
Navigate to the root level of your project and initialize it with the terraform init command. This will set up and configure your backend state and download the AWS provider plugin that Terraform uses to make API calls for the execution commands you will run next.
Create Execution Plan
To create an execution plan, run the terraform plan -var-file="sensitive.tfvars" command. This will print out a plan of all the infrastructure changes that will be carried out based on your declarations.
Apply Execution Plan
To apply the execution plan that was printed, run the terraform apply -var-file="sensitive.tfvars" command. Terraform will make the relevant API requests to AWS to create the infrastructure.
When the resources have successfully been created, you can head over to the AWS Console and view the created Lambda.
Retrieve the API endpoint URL for your Lambda function's API from the API Gateway service; you'll find it under the Stages section of your API.
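Alternatively, you could have Terraform print the endpoint for you by adding output values. The sketch below assumes you add a new outputs.tf file inside the lambda module and another output at the root; the names are illustrative and not part of the tutorial's source code.
# lambda/outputs.tf (hypothetical file inside the module)
output "invoke_url" {
  value = aws_api_gateway_deployment.main.invoke_url
}

# in the root configuration
output "api_endpoint" {
  # the /resource segment matches the aws_api_gateway_resource path_part defined earlier
  value = "${module.lambda_for_gifs.invoke_url}/resource"
}
After the next terraform apply, running terraform output api_endpoint would display the URL.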
Test Lambda Function with React Web Application
Test your Lambda function by selecting a status in the web application; the selection is submitted to the exposed endpoint, which invokes your function.
Clone the repository shared above and navigate to the client directory in your terminal. Install the dependencies for the React application with the npm install command.
Update lambdaEndpoint in App.js
In the App.js file, notice a function called formHandler. Update the const lambdaEndpoint value with the API endpoint you retrieved in the previous section.
const formHandler = async () => {
try {
    const lambdaEndpoint = "https://<unique-value>.execute-api.<region>.amazonaws.com/dev/resource";
const response = await fetch(lambdaEndpoint, {
method: 'POST',
headers: {
Accept: "application/json"
},
body: JSON.stringify({ "state": status })
});
const responseUrl = await response.json();
setGifUrl(responseUrl.url)
} catch (err) {
console.log(err);
}
};
Start React Application
The last step is to start up your application with the npm start command. You will be presented with a form offering different status options to select. Each option will return a specific GIF from the Lambda function.
What About WebAssembly?
WebAssembly is a small, fast, and secure stack-based Virtual Machine that is infrastructure-agnostic. It’s an excellent alternative to the container technology that powers AWS Lambda.
Two of WebAssembly’s key strengths are its portability and speed. It has a faster startup time than containers, and it’s able to run on devices with small amounts of memory and storage. Another of its strengths is the security model it offers. Each of its modules runs within a sandboxed environment separated from the host runtime based on fault isolation techniques.
Conclusion
AWS Lambda offers a great on-demand model for executing pieces of code based on certain events. You should keep an eye on WebAssembly as an up-and-coming alternative with a very secure execution model that offers close-to-native performance for isolated functions.
If you're looking for a platform to easily build out WebAssembly-powered web services, Suborbital can help. They’ll allow you to create production-grade WebAssembly services quickly and easily.
Cover photo by CHUTTERSNAP on Unsplash