Jun 23, 2025
AWS Lambda: A Deep Dive in AWS Resources & Best Practices to Adopt
AWS Lambda was perhaps not the first serverless offering on the market, but it is one of the most popular offerings today - and it is what most people think about when they hear the term serverless.
In this blog post we will dive deep into what AWS Lambda is and how it works. We will see how you can provision and interact with Lambda functions using Terraform, and explore some of the best practices for the Lambda service.
What is AWS Lambda?
AWS Lambda is a function-as-a-service (FaaS) offering from AWS. The Lambda service allows you to run code (a function) in response to some type of event (e.g. a message received in an Amazon SQS queue). There is no server that you have to explicitly manage. This is why Lambda is called a serverless service.
You can write your Lambda function code in JavaScript/TypeScript, Python, Java, C#, Ruby, Rust, Go, or PowerShell, and you can also implement a custom runtime for other languages.
Lambda functions excel in event-driven architectures, where you can trigger different Lambda functions on different events. A few typical use cases for Lambda functions are:
Process logs that are ingested into an AWS Kinesis stream. You can enrich the logs with additional metadata, or mask sensitive values that appear in the logs before they are sent to a destination.
Create thumbnails of images uploaded to an Amazon S3 bucket.
Process messages sent to an Amazon SQS queue.
Trigger automation based on CloudWatch alerts to remediate an issue.
Lambda functions can also make up parts, or all, of your application backend logic. You can trigger Lambda functions based on HTTP requests. You could replace API endpoints in legacy applications with Lambda functions one by one using the strangler pattern: you place an AWS API Gateway in front of the application and switch each endpoint to go to a Lambda function instead of the legacy application.
AWS Lambda functions can be placed in a VPC. Be wary of the overhead during cold starts when the function is connected to an elastic network interface in the VPC. A cold start is the time from when a request or event triggers a Lambda function until the function is ready to handle the event. A cold start is normally very short but can be noticeable in some cases.
The pricing of Lambda functions can be difficult to calculate beforehand. It depends on the memory you give the function, the execution time, and how many times your function is invoked. You should also consider the cost of anything that the Lambda function interacts with (e.g. reading and writing messages in SQS, or storing data in a DynamoDB table).
Each AWS account has a free tier including 1,000,000 Lambda function invocations and 400,000 GB-seconds of compute time per month. The GB-second unit measures how much memory you allocate for how long: if your function is configured with 1 GB of memory and executes for 1 second, it consumes 1 GB-second of compute time.
Speaking of memory, you configure your Lambda function with an amount of memory from 128 MB up to 10,240 MB. CPU is allocated proportionally to the configured memory.
Generally, Lambda is a cheap service. However, once you reach a high steady rate of requests to your Lambda function you might be better off using some other type of compute service. The pricing for Lambda is hard to beat for sporadic invocations.
Managing AWS Lambda using Terraform
Managing AWS Lambda with Terraform involves more than creating the function itself. As with other AWS services, when you use the AWS Management Console a lot is set up for you under the hood. With Terraform you need to configure these things explicitly.
In essence, to create a working AWS Lambda function you need at least the following:
The Lambda function.
The event source that should trigger the function.
An IAM role with permissions to perform the actions the function should do.
In the following we will go through two different use cases for a Lambda function:
Process messages that arrive in an SQS queue.
Respond to HTTP requests.
You can build complete cloud architectures from AWS Lambda functions together with different data processing and storage components. So these two examples only scratch the surface of what is possible.
Process messages from an SQS queue
Lambda functions process events. Events can be almost anything that takes place in your AWS environment or comes from the outside (e.g. an HTTP request). In this example the event is the arrival of a message on an SQS queue.
Events come in the form of a JSON payload to your function code. Each event has a different payload, so it's important to know what these payloads look like. You can find this information in the documentation.
An abbreviated example of an event from an SQS queue is the following:
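For illustration, a reduced payload might look like this (the IDs, account number, and bodies are made up, and many attributes are omitted):

```json
{
  "Records": [
    {
      "messageId": "059f36b4-87a3-44ab-83d2-661975830a7d",
      "body": "Hello from SQS!",
      "eventSource": "aws:sqs",
      "eventSourceARN": "arn:aws:sqs:eu-west-1:123456789012:my-queue"
    },
    {
      "messageId": "2e1424d4-f796-459a-8184-9c92662be6da",
      "body": "Another message",
      "eventSource": "aws:sqs",
      "eventSourceARN": "arn:aws:sqs:eu-west-1:123456789012:my-queue"
    }
  ]
}
```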
I've purposefully removed a lot of the details from these messages to concentrate on the body property that our function will read. As you can see the event can contain multiple messages, so your code needs to be able to handle all messages.
A Lambda function written in Python that processes SQS messages could look like the following:
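A minimal sketch of such a handler (the function name lambda_handler is the default Lambda convention):

```python
def lambda_handler(event, context):
    # Each event can contain a batch of SQS messages in the "Records" list
    for record in event["Records"]:
        # Do something useful with the message; here we just print the body
        print(f"Processing message {record['messageId']}: {record['body']}")
```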
This function does the bare minimum: it prints each message it receives.
To provision the necessary resources using Terraform, start with the SQS queue. If you already have a queue you want to use, then you could replace the resource block with a data source block or alternatively import the queue resource into your Terraform state. Creating a new queue is easy:
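A minimal sketch could look like this (the queue name and timeout value are illustrative):

```hcl
resource "aws_sqs_queue" "this" {
  name = "lambda-demo-queue"

  # How long a consumed message stays hidden from other consumers;
  # AWS recommends at least six times the function timeout
  visibility_timeout_seconds = 120
}
```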
The Lambda function requires an IAM role to be able to interact with the AWS environment (e.g. to read messages from the SQS queue). Create an IAM role and an assume role policy that allows the Lambda service to use the role:
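A sketch of the role and its trust policy (the role name is illustrative):

```hcl
# Trust policy that lets the Lambda service assume the role
data "aws_iam_policy_document" "assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["lambda.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "lambda" {
  name               = "sqs-processor-role"
  assume_role_policy = data.aws_iam_policy_document.assume_role.json
}
```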
Now Lambda has permission to assume the role. However, the role is still not allowed to do anything useful.
There is an AWS managed policy that is perfect for this role called AWSLambdaSQSQueueExecutionRole. This policy allows the role to interact with SQS and publish logs to CloudWatch.
Attach the policy to the role:
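A sketch, assuming the role from the previous step is named aws_iam_role.lambda:

```hcl
resource "aws_iam_role_policy_attachment" "sqs_execution" {
  role       = aws_iam_role.lambda.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaSQSQueueExecutionRole"
}
```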
Moving on to the Lambda function now. We need to make the source code available in a convenient format, namely a zip file. You can either create the zip file through some other means before you run Terraform, or you can use the archive provider to create the zip file with Terraform.
Place the Python source code for the function in a directory named src and in a file named sqs.py. Then create the Lambda function using the aws_lambda_function resource type:
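A sketch using the archive provider to build the zip file; the function name, runtime version, and role reference are assumptions:

```hcl
data "archive_file" "sqs" {
  type        = "zip"
  source_file = "${path.module}/src/sqs.py"
  output_path = "${path.module}/sqs.zip"
}

resource "aws_lambda_function" "sqs_processor" {
  function_name = "sqs-processor"
  role          = aws_iam_role.lambda.arn
  runtime       = "python3.12"

  # "sqs" is the file name, "lambda_handler" the function inside it
  handler = "sqs.lambda_handler"

  filename         = data.archive_file.sqs.output_path
  source_code_hash = data.archive_file.sqs.output_base64sha256
}
```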
We are almost done. We still have not connected the SQS queue with the function. So there is nothing that says that a message sent to the queue should trigger the function.
You configure this using the aws_lambda_event_source_mapping resource type:
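A sketch wiring the queue to the function, assuming the resource names from the earlier steps:

```hcl
resource "aws_lambda_event_source_mapping" "sqs" {
  event_source_arn = aws_sqs_queue.this.arn
  function_name    = aws_lambda_function.sqs_processor.arn

  # Maximum number of messages Lambda reads from the queue per invocation
  batch_size = 10
}
```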
This is a complete example of a working Lambda function that processes messages from an SQS queue. To make the example more interesting you can extend the Lambda function source code to do something useful.
Respond to HTTP requests
A few years ago there was no way to directly trigger a Lambda function with an HTTP request. You needed to put something in front of the Lambda function, e.g. an API Gateway resource. This changed when Lambda function URLs appeared.
Provisioning this type of Lambda function with Terraform more or less follows the same steps as in the previous example, with a few exceptions.
Create a new function source file in the src directory named http.py with the following content:
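A minimal sketch of such a handler, using the response shape that function URLs expect (status code, headers, and body):

```python
def lambda_handler(event, context):
    # Return a static HTML page to the caller
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "text/html"},
        "body": "<html><body><h1>Hello from Lambda!</h1></body></html>",
    }
```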
This function returns a static HTML page.
In this case the Lambda function does not necessarily need any permissions to do things in the AWS environment. Of course this depends on what you want your function to do, but the previous example had a strict requirement to be able to read the SQS queue resource.
It's a good practice to allow the function to log to CloudWatch, so at least give the role the AWSLambdaBasicExecutionRole managed policy, which grants these permissions.
Next, create the zip archive and the Lambda function:
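A sketch following the same pattern as the SQS example; the function name and role reference are assumptions:

```hcl
data "archive_file" "http" {
  type        = "zip"
  source_file = "${path.module}/src/http.py"
  output_path = "${path.module}/http.zip"
}

resource "aws_lambda_function" "http" {
  function_name = "http-responder"
  role          = aws_iam_role.lambda.arn
  runtime       = "python3.12"
  handler       = "http.lambda_handler"

  filename         = data.archive_file.http.output_path
  source_code_hash = data.archive_file.http.output_base64sha256
}
```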
Finally, instead of connecting an event source to trigger the Lambda function we will instead give the function a publicly available URL:
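A sketch using the aws_lambda_function_url resource type, with an output so you can see the generated URL after terraform apply:

```hcl
resource "aws_lambda_function_url" "http" {
  function_name = aws_lambda_function.http.function_name

  # NONE means anyone who knows the URL can invoke the function
  authorization_type = "NONE"
}

output "function_url" {
  value = aws_lambda_function_url.http.function_url
}
```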
The authorization_type is set to NONE, meaning anyone who has the function URL can invoke the function. You could instead configure the AWS_IAM authorization type, which together with an IAM policy allows you to lock down who can invoke the function.
There is one more thing you can do with Lambda functions from Terraform: you can invoke the function.
This is a relatively new feature and builds upon ephemeral resources in newer versions of Terraform.
An example of invoking a function looks like this:
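A sketch, assuming Terraform 1.10 or later and a recent AWS provider that ships the aws_lambda_invocation ephemeral resource; the function name and payload shape are hypothetical:

```hcl
ephemeral "aws_lambda_invocation" "secrets" {
  function_name = "fetch-secrets"

  payload = jsonencode({
    secret_name = "database-password"
  })
}

locals {
  # The result is only available during this run and is
  # never written to the Terraform state file
  db_password = jsondecode(ephemeral.aws_lambda_invocation.secrets.result)["password"]
}
```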
Note how this resource is defined using an ephemeral block, not a resource block.
A use case for Lambda invocations from Terraform could be if you want to fetch sensitive data that Terraform does not have direct access to. Instead you invoke the function that in turn fetches sensitive data from a source and returns it to Terraform. The ephemeral nature of this resource means Terraform does not store the response in your Terraform state file, so it is safe to use for this use case.
Best practices for AWS Lambda
Lambda functions consist of code, and there are a number of best practices around code management that you should consider. However, in the following list of best practices we focus mostly on Lambda-specific practices that are outside of the pure code realm.
Find the optimal amount of memory for your function
Assigning more memory to your function also means you allocate more CPU.
Finding the right amount of memory for your function should not be based on a simple guess. More is not always better in this case.
You can invoke your function with different amounts of memory and see how it behaves. Create a memory vs. cost graph to find the most cost-effective memory configuration for your function. This can be a time-consuming effort and you will likely have to repeat the exercise regularly to check whether the result has changed.
You can automate this process using Terraform and your favorite scripting language, or you can use third-party tools that can help you find the optimal amount of memory for your functions.
Lambda function logging
As with any other type of application you should log important events from your Lambda functions.
Be wary of log costs as the number of invocations of your Lambda function increases. Each invocation comes with up to four boilerplate log events that record when the function was invoked, how much time the invocation took, and more.
This means that if you have one million invocations, you could have up to four million log events even if you didn't create any custom logs yourself. Log ingestion to AWS CloudWatch can quickly become the most expensive part of your Lambda function.
Avoid placing dynamic code outside of the main function
If you are new to Lambda functions this may come as a surprise.
Code placed outside of the main handler function in your source code runs once per lifetime of the Lambda execution environment, not once per invocation. Note that you can define other functions and invoke them from the handler, and this works as expected.
This can have some surprising effects. For instance, imagine your function reads data from a database during startup. It would then read a snapshot of data at that time. Any new data added to the database will not be read as long as the Lambda function instance is kept alive.
The same is true for connections to databases or other services that are set up outside of the main function (or outside of any function): the connection will not be renewed as long as the Lambda function instance is active.
A better practice is to wrap most code in functions to make sure the results are fresh each time. Note that this could be a bit dependent on what language you write your Lambda code in. An example of a pseudo Python function where you would have this problem is this:
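A sketch of the problem; read_config_from_database is a hypothetical stand-in for a real database query:

```python
import time

def read_config_from_database():
    # Hypothetical stand-in for a real database query
    return {"loaded_at": time.time()}

# Runs once per execution environment (at cold start),
# NOT once per invocation
CONFIG = read_config_from_database()

def lambda_handler(event, context):
    # CONFIG may be stale: changes in the database after the cold
    # start are never seen while this environment stays warm
    return CONFIG
```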
Use Lambda layers for common code
The Lambda service has a concept of layers that allow you to publish common code that multiple Lambda functions can use. When you create your Lambda function you select which layers should be included in your Lambda function.
In Terraform you do it like this:
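A sketch, assuming a pre-built logging-layer.zip and illustrative names; up to five layers can be attached to one function:

```hcl
resource "aws_lambda_layer_version" "logging" {
  layer_name          = "shared-logging"
  filename            = "${path.module}/logging-layer.zip"
  compatible_runtimes = ["python3.12"]
}

resource "aws_lambda_function" "with_layer" {
  function_name = "uses-shared-logging"
  role          = aws_iam_role.lambda.arn
  runtime       = "python3.12"
  handler       = "app.lambda_handler"
  filename      = "${path.module}/app.zip"

  # Attach the layer by its version ARN
  layers = [aws_lambda_layer_version.logging.arn]
}
```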
A common use case for Lambda layers is to standardize code for a logging framework that you want all functions to use. This allows you to avoid recreating the same code in each function.
Sharing code in this way creates coupling between your functions and requires thorough testing when releasing new layer versions.
Use Lambda function versioning and aliases
You can version your Lambda functions. This allows you to publish a stable version (e.g. "v1"), and keep iterating on your function code for future versions. You can create an alias (e.g. "prod") that points at a specific version of your function. Once you have published and tested a new version you can update your alias to point to the new version.
You should set up a strategy for how you promote a version between different aliases (e.g. dev, to staging, to prod).
Use code signing
You should ideally make sure that a trusted party signs any code that runs on Lambda. This is an extra layer of security.
You can use AWS Signer which is a fully managed code signing service from AWS. An example of how to set this up using Terraform:
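A sketch of the signing profile and code signing configuration (the profile name prefix is illustrative):

```hcl
resource "aws_signer_signing_profile" "lambda" {
  platform_id = "AWSLambda-SHA384-ECDSA"
  name_prefix = "my_signing_profile_"
}

resource "aws_lambda_code_signing_config" "this" {
  allowed_publishers {
    signing_profile_version_arns = [
      aws_signer_signing_profile.lambda.version_arn,
    ]
  }

  policies {
    # Reject deployments whose signature check fails
    untrusted_artifact_on_deployment = "Enforce"
  }
}
```

You then reference the configuration from the function via its code_signing_config_arn argument.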
Use Lambda extensions to integrate with third-party systems
Lambda extensions allow you to extend the functionality of your Lambda functions. You can use them to integrate with third-party systems or write your own custom extensions to integrate with your own systems.
There are a number of managed extensions available from AWS (e.g. HashiCorp Vault, DataDog, Splunk, Fastly, etc). The benefit of using a managed extension is that you do not have to write the code to interact with the third-party system yourself. This saves you time and complexity.
Do not rely on ephemeral storage
A Lambda function has access to ephemeral storage that has the same lifecycle as the Lambda function itself. You can store data in ephemeral storage during function invocation.
However, you should not rely on this storage because it can disappear at any moment. Only store data that your function will read during the same invocation, and be aware that if the function crashes mid-execution, the data might be gone before you have time to process it.
If you need persistent data storage you should use a different solution, such as a DynamoDB table or an S3 bucket.
Anyshift for AWS Lambda
It is common to end up with many Lambda functions. A good practice is to use one function for one purpose. This allows you to build small functions that are easy to debug. A caveat of having many functions is that it can be difficult to track how each function is configured.
Assuming you have configured the integrations in Anyshift with your code repository, AWS account, and Terraform state backends you can easily ask Annie the questions you need to answer. For instance, do I have any Lambda function with too broad IAM permissions?

As it turns out, in this case one of my functions has the AdministratorAccess policy attached to its role. This is typically something that is added during development and then forgotten. Annie provides a clear answer to your question, recommendations for how you should configure the IAM role for the Lambda function, as well as remediation steps and where in your codebase you have configured this problematic IAM role:

You are able to get this type of detailed response to your questions because Anyshift creates a digital twin of your AWS environment, taking the Terraform state files and Terraform configurations in your git repositories into account. This gives Anyshift the context required to know how configuration, code, and behavior are connected. This contextual insight is difficult to obtain in other ways without carefully analyzing all the moving pieces by hand.
Visit the documentation to get started with Anyshift to help you understand your AWS and Terraform context better.
Conclusions
AWS Lambda is a powerful function-as-a-service offering. It is perhaps the service that is the most synonymous with serverless. With relatively small efforts you can create powerful event-driven cloud architectures and applications.
Terraform offers full support for managing every aspect of Lambda functions, including packaging the code and creating the function in the same terraform apply.
There are a number of best practices to keep in mind when working with Lambda functions, among them using code signing to verify that the code comes from a trusted source, using versioning and aliases to safely deploy new versions of your Lambda functions, and optimizing the amount of memory you assign to your functions for performance and cost.