
Provisioned concurrency was launched in December 2019, and its primary purpose is to support workloads that require highly consistent latency. It does this by keeping a configurable number of AWS Lambda execution environments initialized and ready for your function to use at any point. This differs from on-demand usage, where your function experiences a cold start whenever its concurrency increases beyond its current value. This functionality and its performance benefits are presented in the AWS blog post Provisioned Concurrency for Lambda Functions.
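Provisioned concurrency is configured against a published version or an alias, not $LATEST. As a rough sketch of what that looks like with boto3 (the function and alias names here are illustrative, and this needs valid AWS credentials and an existing alias):

```python
import boto3

lambda_client = boto3.client("lambda")

# Keep 10 execution environments initialized and ready for the "live" alias.
# "my-function" and "live" are placeholder names for this example.
lambda_client.put_provisioned_concurrency_config(
    FunctionName="my-function",
    Qualifier="live",  # provisioned concurrency attaches to a version or alias
    ProvisionedConcurrentExecutions=10,
)
```

Requests beyond the configured amount still run, but fall back to on-demand behaviour, including cold starts.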

In this blog post we’ll look at a lesser-known benefit of provisioned concurrency: how it also saves you…


In this post I will cover what I consider the best practices for producing useful logs when deploying to AWS Lambda. I’ll explain how AWS Lambda Powertools, an open-source client library, can do a lot of the heavy lifting for you. This post is aimed at developers, DevOps engineers, and anyone else involved in the production or operation of systems using Lambda.
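The heart of what Powertools’ Logger does is emit each log entry as one JSON line, so CloudWatch Logs Insights can filter and aggregate on individual fields. As a stdlib-only sketch of that idea (the field names and static service value are illustrative, not Powertools’ exact schema):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line so CloudWatch Logs
    Insights can query individual fields rather than grepping free text."""
    def format(self, record):
        entry = {
            "level": record.levelname,
            "message": record.getMessage(),
            "service": "payment",  # illustrative static field
        }
        # Merge any structured fields passed via the `extra` argument.
        entry.update(getattr(record, "fields", {}))
        return json.dumps(entry)

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("order received", extra={"fields": {"order_id": "12345"}})
```

Powertools adds much more on top, such as correlation IDs and Lambda context injection, but structured output is the foundation.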

I started my tech career as a build engineer; I guess we’d call this a junior Ops role now. It was my responsibility to build other people’s code, get it onto the servers and understand if it…


By default, AWS Lambda publishes a number of metrics about your function, including errors, throttles, duration, concurrency, and many more. These can all be produced without knowing anything about what your function does. When you want to create your own custom metrics, you have a few choices. In this post I will talk about the various ways you can integrate with Amazon CloudWatch, including what I think is best practice when working with Lambda.
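One option worth knowing about is the CloudWatch Embedded Metric Format (EMF): you print a specially structured JSON log line and CloudWatch extracts the metric asynchronously, with no synchronous API call on the invocation path. A minimal sketch, with an illustrative namespace and dimension:

```python
import json
import time

def emf_metric(name, value, unit="Count", namespace="MyApp", service="checkout"):
    """Build a CloudWatch Embedded Metric Format (EMF) log entry.
    In Lambda, printing this JSON to stdout is enough for CloudWatch
    to extract a custom metric from the log stream asynchronously."""
    return {
        "_aws": {
            "Timestamp": int(time.time() * 1000),  # epoch milliseconds
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [["service"]],
                "Metrics": [{"Name": name, "Unit": unit}],
            }],
        },
        # Dimension members must also appear as top-level keys.
        "service": service,
        name: value,
    }

print(json.dumps(emf_metric("SuccessfulOrders", 1)))
```

Because the metric is derived from the log entry after the fact, there is no `PutMetricData` latency added to each invocation.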

Firstly, why would you want to create your own custom metrics? In most cases there is information about…


One of the most common questions I’m asked as an AWS Solutions Architect is how to avoid AWS Lambda cold starts, and whether to use some mechanism for warming Lambda functions. This post is aimed at developers and DevOps engineers new to Lambda. Hopefully, by the end of it, you’ll understand the Lambda concurrency model and why warming functions is, in virtually all cases, not required, or of little real benefit.

The Lambda concurrency model is probably very different from what you’re used to if you’ve come from containers, VMs, or bare metal. The latter handle multiple threads of…

At re:Invent 2020 a new packaging format for AWS Lambda functions was introduced: container images. With this functionality came a number of managed base images that customers can use. These are patched and maintained by AWS and include functionality to simplify running your functions in a container image.

So what is that functionality? It’s an implementation of the Lambda runtime API. This API isn’t new or exclusive to containers; the introduction of this feature has simply given it a lot more visibility. You could always use it to run other languages, such as PHP or R.
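The runtime API itself is a simple HTTP loop: long-poll for the next invocation event, run your handler, then POST the result back. A sketch of the core of a custom runtime (the handler and host values are placeholders; a real bootstrap also needs error reporting):

```python
import json
import os
import urllib.request

# The runtime API paths are versioned; 2018-06-01 is the published version.
RUNTIME_API_VERSION = "2018-06-01"

def next_invocation_url(host: str) -> str:
    """URL a custom runtime long-polls to receive the next event."""
    return f"http://{host}/{RUNTIME_API_VERSION}/runtime/invocation/next"

def response_url(host: str, request_id: str) -> str:
    """URL a custom runtime POSTs its handler result to."""
    return f"http://{host}/{RUNTIME_API_VERSION}/runtime/invocation/{request_id}/response"

def run_loop(handler):
    """Minimal processing loop for a custom runtime. Lambda supplies the
    API endpoint through the AWS_LAMBDA_RUNTIME_API environment variable."""
    host = os.environ["AWS_LAMBDA_RUNTIME_API"]
    while True:
        # Blocks until an invocation arrives.
        with urllib.request.urlopen(next_invocation_url(host)) as resp:
            request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
            event = json.load(resp)
        result = json.dumps(handler(event))
        req = urllib.request.Request(
            response_url(host, request_id),
            data=result.encode("utf-8"),
            method="POST",
        )
        urllib.request.urlopen(req).close()
```

A runtime for PHP, R, or anything else is, at its core, this same loop wrapped around an interpreter for that language.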


Mark Sailes

AWS Specialist Solutions Architect for Serverless. Working to create the best Java developer experience on AWS Lambda.
