AWS API Gateway & Lambda with Terraform

Zubair Haque
Feb 22, 2023 · 6 min read


Image from https://spacelift.io/blog/what-is-terraform

I attended a PI planning session with the product development team a short while ago. We were preparing for a new feature rollout, and I was faced with the challenge of finding a reliable and cost-effective way to build and deploy our R&D project. After researching our options, we ultimately chose a serverless architecture on AWS (the model AWS SAM is built around: Lambda functions behind an API Gateway). In this blog post, I will go over how to define, provision, and manage those infrastructure resources with Terraform, which should help you set up your serverless environment and get you on your way to establishing your development process.

Using Terraform

I’ll go over how easy it is to manage and deploy AWS resources such as API Gateway, S3, and Lambda functions, pointing out key things like:

  1. Declarative Infrastructure as Code (IaC): Terraform uses a declarative syntax to describe the desired state of your infrastructure.
  2. Resource management: Terraform lets you create, modify, and delete AWS resources, and because configurations are versioned, rolling back to a previous configuration is straightforward.
  3. State management: Terraform compares the desired state in your configuration files against your real infrastructure and automatically makes the changes needed to bring the two in line, ensuring consistent and predictable configuration (see the short sketch after this list).
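
Point 3 deserves a quick illustration. By default, Terraform keeps its state in a local terraform.tfstate file; a common next step is moving it to a remote backend so the whole team shares one state. Here is a minimal sketch, assuming an S3 bucket and DynamoDB lock table that already exist (both names below are hypothetical):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # hypothetical, created ahead of time
    key            = "r-and-d/terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "terraform-locks" # hypothetical table used for state locking
    encrypt        = true
  }
}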

Creating a Lambda Function

We’re going to start by creating a basic Node.js Lambda function. Create a directory called handler and, in that directory, a file called handler.js:

'use strict';

// Lambda entry point: returns a 200 response with a JSON body.
// Exported via module.exports so the plain .js file works as CommonJS
// on the Node.js runtime.
module.exports.handler = async (event) => {
  return {
    statusCode: 200,
    headers: {
      'Access-Control-Allow-Origin': '*'
    },
    body: JSON.stringify(
      {
        message: 'Your function executed successfully!'
      },
      null,
      2
    ),
  };
};

The handler function above returns a JSON object with a message and a 200 status code whenever it is successfully invoked.

Terraform

Now we’ll start writing our Terraform code. Create a directory called terraform and, in that directory, a provider.tf file. This file declares version constraints for the providers we will use (such as the AWS provider), the version of the Terraform CLI we’re going to use, and the region in which we’re going to create our S3 bucket and Lambda function:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.21.0"
    }
    random = {
      source  = "hashicorp/random"
      version = "~> 3.3.0"
    }
    archive = {
      source  = "hashicorp/archive"
      version = "~> 2.2.0"
    }
  }

  required_version = "~> 1.0"
}

provider "aws" {
  region = "us-east-2"
}
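
One optional variation of my own, not from the original setup: if you’d rather not hard-code the region, pull it into a variable, and the provider block above becomes:

# Hypothetical variable; lets the same configuration target other regions.
variable "aws_region" {
  description = "Region for the S3 bucket, Lambda function, and API Gateway"
  type        = string
  default     = "us-east-2"
}

provider "aws" {
  region = var.aws_region
}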

Next, we’re going to create a zip archive of our Lambda function and upload it to S3. We do this because Lambda functions are deployed as packages of code, and those packages have to be uploaded to AWS. The example below goes in the handler-lambda-bucket.tf file:

resource "random_pet" "lambda_bucket_name" {
prefix = "lambda"
length = 2
}

resource "aws_s3_bucket" "lambda_bucket" {
bucket = random_pet.lambda_bucket_name.id
force_destroy = true
}

resource "aws_s3_bucket_public_access_block" "lambda_bucket" {
bucket = aws_s3_bucket.lambda_bucket.id

block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}

So now we have our new S3 bucket. I want to take a moment to go over aws_s3_bucket_public_access_block, which is what we used to lock down permissions on the bucket and ensure it is not publicly accessible. This Terraform resource applies access restrictions using the following parameters:

  • block_public_acls
  • block_public_policy
  • ignore_public_acls
  • restrict_public_buckets

Keep in mind that all access to the bucket is still controlled through AWS Identity and Access Management (IAM) policies, which can always be adjusted.
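
One optional addition here, not part of the original setup: enabling S3 versioning on the deployment bucket, so that previous function packages are retained and a bad deploy can be rolled back. A minimal sketch:

resource "aws_s3_bucket_versioning" "lambda_bucket" {
  bucket = aws_s3_bucket.lambda_bucket.id

  # Keep old copies of handler.zip around for rollbacks.
  versioning_configuration {
    status = "Enabled"
  }
}

Moving on, the next step is creating the Lambda function that runs in response to an event trigger. The example below goes in the handler-lambda.tf file: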

resource "aws_iam_role" "handler_lambda_exec" {
name = "handler-lambda"

assume_role_policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
POLICY
}

resource "aws_iam_role_policy_attachment" "handler_lambda_policy" {
role = aws_iam_role.handler_lambda_exec.name
policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_lambda_function" "handler" {
function_name = "handler"

s3_bucket = aws_s3_bucket.lambda_bucket.id
s3_key = aws_s3_object.handler.key

runtime = "nodejs16.x"
handler = "function.handler"

source_code_hash = data.archive_file.handler.output_base64sha256

role = aws_iam_role.handler_lambda_exec.arn
}


resource "aws_cloudwatch_log_group" "handler_lambda" {
name = "/aws/lambda/${aws_lambda_function.handler.function_name}"
}

data "archive_file" "handler" {
type = "zip"
source_dir = "../${path.module}/handler"
output_path = "../${path.module}/handler.zip"
}

resource "aws_s3_object" "handler" {
bucket = aws_s3_bucket.lambda_bucket.id
key = "handler.zip"
source = data.archive_file.handler.output_path
etag = filemd5(data.archive_file.handler.output_path)
}
  1. aws_iam_role: We start off by creating an AWS IAM role for the Lambda function, with a name and a trust policy that lets the Lambda service assume the role.
  2. aws_iam_role_policy_attachment: We attach the AWSLambdaBasicExecutionRole managed policy to that role, which grants the permissions the Lambda function needs to run, such as writing logs to CloudWatch Logs.
  3. aws_lambda_function: This resource creates a Lambda function named “handler” with a Node.js runtime, using the IAM role created above as its execution role. An important detail: the source_code_hash argument holds the hash of the zip file uploaded to the S3 bucket, so Terraform can detect when the zip file has changed and, if it has, update the Lambda function. And last but not least, the role argument is the ARN of the role Lambda assumes when the function is invoked.
  4. aws_cloudwatch_log_group: This resource creates a CloudWatch Logs log group for our Lambda function.
  5. archive_file: This data source builds the zip file of our handler code that will be uploaded to the S3 bucket created above.
  6. aws_s3_object: Uploads the zip archive of our source code to that bucket.
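
One more note before moving on: the aws_lambda_function resource above relies on defaults for memory, timeout, and environment variables. If you need to tune those, this is roughly what it looks like — a sketch only, with made-up values, layered onto the resource we already defined:

resource "aws_lambda_function" "handler" {
  # ... the arguments shown above stay as they are ...

  memory_size = 256 # MB; the default is 128
  timeout     = 10  # seconds; the default is 3

  environment {
    variables = {
      STAGE = "dev" # hypothetical example variable
    }
  }
}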

AWS API Gateway

The next step is to create an AWS API Gateway with Terraform. We’re going to use API Gateway version 2, which lets you create and manage HTTP (and WebSocket) APIs. Create a file called api-gateway.tf and add the following:

resource "aws_apigatewayv2_api" "main" {
name = "main"
protocol_type = "HTTP"
}

resource "aws_apigatewayv2_stage" "dev" {
api_id = aws_apigatewayv2_api.main.id

name = "dev"
auto_deploy = true

access_log_settings {
destination_arn = aws_cloudwatch_log_group.main_api_gw.arn

format = jsonencode({
requestId = "$context.requestId"
sourceIp = "$context.identity.sourceIp"
requestTime = "$context.requestTime"
protocol = "$context.protocol"
httpMethod = "$context.httpMethod"
resourcePath = "$context.resourcePath"
routeKey = "$context.routeKey"
status = "$context.status"
responseLength = "$context.responseLength"
integrationErrorMessage = "$context.integrationErrorMessage"
}
)
}
}

resource "aws_cloudwatch_log_group" "main_api_gw" {
name = "/aws/api-gw/${aws_apigatewayv2_api.main.name}"

retention_in_days = 30
}

So let’s take a look at this plan. As mentioned above, it sets up the basic infrastructure for a v2 API Gateway. You can add more configuration options and integrations with other AWS services if needed, but I’m going to keep it pretty basic for this demo:

  • aws_apigatewayv2_api: Our API is named main, which I’ll show you in the AWS console once we apply our configuration.
  • aws_apigatewayv2_stage: This resource represents a deployment stage of the API, which we have named dev. The stage is set to auto-deploy and has an access log setting that writes to a CloudWatch log group.
  • aws_cloudwatch_log_group: The log group the stage writes its access logs to, with a retention period of 30 days.
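
One thing I didn’t configure here, but which often comes up in practice, is throttling. The stage resource supports default route settings; here is a minimal sketch with made-up limits, layered onto the dev stage above:

resource "aws_apigatewayv2_stage" "dev" {
  # ... the arguments shown above stay as they are ...

  default_route_settings {
    throttling_burst_limit = 100 # hypothetical burst cap
    throttling_rate_limit  = 50  # hypothetical steady-state requests/second
  }
}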

Now, let’s write the Terraform for creating the API Gateway v2 route that invokes our Lambda function. The code will use the following three resources:

resource "aws_apigatewayv2_integration" "lambda_handler" {
api_id = aws_apigatewayv2_api.main.id

integration_type = "AWS_PROXY"
integration_uri = aws_lambda_function.handler.invoke_arn
}

resource "aws_apigatewayv2_route" "post_handler" {
api_id = aws_apigatewayv2_api.main.id
route_key = "POST /handler"

target = "integrations/${aws_apigatewayv2_integration.lambda_handler.id}"
}

resource "aws_lambda_permission" "api_gw" {
statement_id = "AllowExecutionFromAPIGateway"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.handler.function_name
principal = "apigateway.amazonaws.com"

source_arn = "${aws_apigatewayv2_api.main.execution_arn}/*/*"
}
  1. aws_apigatewayv2_integration: Specifies the integration between the API Gateway and the Lambda function.
  2. aws_apigatewayv2_route: Defines the route for the API Gateway. The route_key specifies the HTTP method and path, so this route matches a POST request to the /handler endpoint.
  3. aws_lambda_permission: Allows the API Gateway to invoke the Lambda function.
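
To avoid hunting for the endpoint in the console after applying, you can also have Terraform print it. This output block is my own small addition, using the stage’s invoke_url attribute:

output "invoke_url" {
  description = "Base URL of the dev stage; append /handler for the POST route"
  value       = aws_apigatewayv2_stage.dev.invoke_url
}

After an apply, the URL appears at the end of the command output.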

Before actually creating these resources, it’s good practice to preview the changes Terraform will make by generating an execution plan.

$ terraform plan

The output of the terraform plan command will include information such as:

  • The resources that Terraform will create, modify, or delete.
  • Any changes to resource attributes or dependencies.
  • Any errors or warnings that Terraform encounters while generating the plan.

Once you have reviewed and approved the plan, you can run the terraform apply command to execute it and create the AWS resources you defined in your code. To see your resources in the console, log in to your AWS account and navigate to the relevant services:

  1. API Gateway: you should be able to see the main API.
  2. AWS Lambda: you should be able to see the handler Lambda function created above.

To view an example of this code, you can clone my repo on GitHub.
