A developer toolkit to implement Serverless best practices and increase developer velocity.
This patch release fixes a bug in the Event Handler utility, where using the compress option caused an error.
Huge thanks to @dacianf for reporting this!
@github-actions, @github-actions[bot] and @rubenfonseca
This release adds support for Data Validation and automatic OpenAPI generation in Event Handler.
Even better, it works with your existing resolver (API Gateway REST/HTTP, ALB, Lambda Function URL, VPC Lattice)!
Did you read that correctly? Yes, you did! Look at this:
Docs: Data validation
By adding enable_validation=True to your resolver constructor, you'll change the way the resolver works: incoming request data is parsed and validated against your type annotations, and your return values are serialized into the response for you.

This moves data validation responsibilities to Event Handler resolvers, reducing a ton of boilerplate code. You can now focus on just writing your business logic, and leave the validation to us!
from typing import List, Optional

import requests
from pydantic import BaseModel, Field

from aws_lambda_powertools import Logger, Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.logging import correlation_paths
from aws_lambda_powertools.utilities.typing import LambdaContext

tracer = Tracer()
logger = Logger()
app = APIGatewayRestResolver(enable_validation=True)


class Todo(BaseModel):
    userId: int
    id_: Optional[int] = Field(alias="id", default=None)
    title: str
    completed: bool


@app.post("/todos")
def create_todo(todo: Todo) -> str:
    response = requests.post("https://jsonplaceholder.typicode.com/todos", json=todo.dict(by_alias=True))
    response.raise_for_status()

    return response.json()["id"]


@app.get("/todos")
@tracer.capture_method
def get_todos() -> List[Todo]:
    todo = requests.get("https://jsonplaceholder.typicode.com/todos")
    todo.raise_for_status()

    return todo.json()


@logger.inject_lambda_context(correlation_id_path=correlation_paths.API_GATEWAY_REST)
@tracer.capture_lambda_handler
def lambda_handler(event: dict, context: LambdaContext) -> dict:
    return app.resolve(event, context)
Docs: OpenAPI generation
When you enable data validation, we inspect your API in a way that makes it possible to generate OpenAPI specifications automatically!
You can export the OpenAPI spec for customization, manipulation, merging micro-functions, etc., in two ways:

- app.get_openapi_schema()
- app.get_openapi_json_schema()
Here's one way to print the schema if you were to run your Python Lambda handler locally:
import requests

from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.event_handler.openapi.models import Contact, Server
from aws_lambda_powertools.utilities.typing import LambdaContext

app = APIGatewayRestResolver(enable_validation=True)


@app.get("/todos/<todo_id>")
def get_todo_title(todo_id: int) -> str:
    todo = requests.get(f"https://jsonplaceholder.typicode.com/todos/{todo_id}")
    todo.raise_for_status()

    return todo.json()["title"]


def lambda_handler(event: dict, context: LambdaContext) -> dict:
    return app.resolve(event, context)


if __name__ == "__main__":
    print(
        app.get_openapi_json_schema(
            title="TODO's API",
            version="1.21.3",
            summary="API to manage TODOs",
            description="This API implements all the CRUD operations for the TODO app",
            tags=["todos"],
            servers=[Server(url="https://stg.example.org/orders", description="Staging server")],
            contact=Contact(name="John Smith", email="[email protected]"),
        ),
    )
Can you see where this is going? Keep reading :)
Docs: Swagger UI
Last but not least... you can now enable an embedded Swagger UI to visualize and interact with your newly auto-documented API!
from typing import List, Optional

import requests
from pydantic import BaseModel, Field

from aws_lambda_powertools import Tracer
from aws_lambda_powertools.event_handler import APIGatewayRestResolver
from aws_lambda_powertools.utilities.typing import LambdaContext

tracer = Tracer()
app = APIGatewayRestResolver(enable_validation=True)
app.enable_swagger()  # by default, path="/swagger"


class Todo(BaseModel):
    userId: int
    id_: Optional[int] = Field(alias="id", default=None)
    title: str
    completed: bool


@app.get("/todos")
@tracer.capture_method
def get_todos() -> List[Todo]:
    todo = requests.get("https://jsonplaceholder.typicode.com/todos")
    todo.raise_for_status()

    return todo.json()


def lambda_handler(event: dict, context: LambdaContext) -> dict:
    return app.resolve(event, context)
The Swagger UI appears by default at the /swagger
path, but you can customize this to serve the documentation from another path, and specify the source for Swagger UI assets.
We can't wait for you to try these new features!
@dependabot, @dependabot[bot], @github-actions, @github-actions[bot], @heitorlessa, and @rubenfonseca
This patch release addresses issues in the following areas:

- Logger.addFilter/removeFilter
- wildcard (*/*) binary types when CORS is configured
- @logger.inject_lambda_context("powertools_json(body).my_field")

Big thanks to @rafrafek and @martinber for their critical eye in spotting some of these issues!
- 2c57e4d to fc42bac in /docs (#3375) by @dependabot
- f486dc9 to 2c57e4d in /docs (#3366) by @dependabot

@dependabot, @dependabot[bot], @github-actions, @github-actions[bot], @heitorlessa and @leandrodamascena
This minor release adds support for two new environment variables to configure the log level in Logger.

You can now configure the log level for Logger using two new environment variables: AWS_LAMBDA_LOG_LEVEL and POWERTOOLS_LOG_LEVEL. These work alongside the existing LOG_LEVEL variable, which is now considered legacy and will be removed in the future.
Setting the log level now follows this order:

1. AWS_LAMBDA_LOG_LEVEL environment variable
2. level constructor option, or calling the logger.setLevel() method
3. POWERTOOLS_LOG_LEVEL environment variable

@dependabot, @dependabot[bot], @github-actions, @github-actions[bot] and @leandrodamascena
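The resolution order described in this release can be sketched as a small helper. Note that resolve_log_level is a hypothetical function for illustration only, not part of the library:

```python
import os
from typing import Optional


def resolve_log_level(constructor_level: Optional[str] = None) -> str:
    # Hypothetical helper illustrating the documented precedence order
    for candidate in (
        os.environ.get("AWS_LAMBDA_LOG_LEVEL"),  # 1. highest precedence
        constructor_level,                       # 2. level option / logger.setLevel()
        os.environ.get("POWERTOOLS_LOG_LEVEL"),  # 3.
        os.environ.get("LOG_LEVEL"),             # legacy fallback
    ):
        if candidate:
            return candidate.upper()
    return "INFO"
```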
This patch release fixes a regression when using prefix stripping with middlewares on the event handler. It also fixes a mistyped field on the Kinesis Firehose event source, and a problem when getting multiple encrypted SSM parameters.
Huge thanks to @roger-zhangg and @sean-hernon for helping us identify and fix these issues.
- 772e14e to f486dc9 in /docs (#3299) by @dependabot
- df9409b to 772e14e in /docs (#3265) by @dependabot
- cb38dc2 to df9409b in /docs (#3216) by @dependabot

@dependabot, @dependabot[bot], @github-actions, @github-actions[bot], @jvnsg, @leandrodamascena, @roger-zhangg, @rubenfonseca and @sean-hernon
This release adds richer exception details to the logger utility, support for VPC Lattice Payload V2, smarter model inference in the parser utility, expanded ARM64 Lambda Layer support on additional regions, and fixes some bugs!
Huge thanks to our new contributors: @Tom01098, @stevrobu, and @pgrzesik!
Docs: logger
The logger utility now logs exceptions in a structured format to simplify debugging. Previously, exception tracebacks appeared as a single string containing the raw stack trace frames. Developers had to parse each frame manually to extract file names, line numbers, function names, etc.
With the new serialize_stacktrace
flag, the logger prints stack traces as structured JSON. This clearly surfaces exception details like filenames, lines, functions, and statements per frame. The structured output eliminates the need to parse traceback strings, improving observability and accelerating root cause analysis.
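As a rough illustration of the shape of that output, here is a standard-library-only sketch; the field names are approximate, not the logger's exact schema:

```python
import json
import traceback


def frames_as_dicts(exc: BaseException) -> list:
    # Turn each traceback frame into a dict, similar in spirit to the
    # structured output produced by the serialize_stacktrace flag
    return [
        {
            "file": frame.filename,
            "line": frame.lineno,
            "function": frame.name,
            "statement": frame.line,
        }
        for frame in traceback.extract_tb(exc.__traceback__)
    ]


try:
    1 / 0
except ZeroDivisionError as exc:
    print(json.dumps(frames_as_dicts(exc), indent=2))
```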
Docs: event handler, parser
Amazon VPC Lattice is a fully managed application networking service that you use to connect, secure, and monitor the services for your application across multiple accounts and virtual private clouds (VPC). You can register your Lambda functions as targets with a VPC Lattice target group, and configure a listener rule to forward requests to the target group for your Lambda function.
With this seamless integration, you can now leverage the performance benefits of Amazon VPC Lattice Payload V2 directly in your event handlers. The latest release enables handling Lattice events using the familiar event handler API you already know, including critical features like CORS support and response serialization.
Docs: parser
The event_parser decorator previously required you to duplicate the type when using type hints. Now, it can infer the event type directly from your handler signature, avoiding the need to redeclare the type in the decorator.
- a4cfa88 to cb38dc2 in /docs (#3189) by @dependabot
- cbfecae to a4cfa88 in /docs (#3175) by @dependabot
- e5f28aa to cbfecae in /docs (#3157) by @dependabot
- 06673a1 to e5f28aa in /docs (#3134) by @dependabot
- b41ba6d to 06673a1 in /docs (#3124) by @dependabot

@Tom01098, @dependabot, @dependabot[bot], @github-actions, @github-actions[bot], @heitorlessa, @leandrodamascena, @rubenfonseca, @pgrzesik, @roger-zhangg, @seshubaws, @stephenbawks and @stevrobu
This is a patch release addressing a bug in the @metrics.log_metrics decorator to support functions with arbitrary arguments/kwargs, and a minor typing fix in Logger for explicit None return types.

Huge thanks to two new contributors who reported and fixed both bugs: @FollowTheProcess and @thegeorgeliu!
- 4ff781e to b41ba6d in /docs (#3117) by @dependabot
- c4890ab to 4ff781e in /docs (#3110) by @dependabot

@FollowTheProcess, @dependabot, @dependabot[bot], @github-actions, @github-actions[bot] and @thegeorgeliu
This release simplifies data transformation with Amazon Kinesis Data Firehose, and handling secret rotation events from Amazon Secrets Manager.
Huge welcome to our new contributor @TonySherman. Tony documented how to use Event Handler with micro Lambda functions.
When using Kinesis Firehose, you can use a Lambda function to perform data transformation. For each transformed record, you can choose to mark it as Ok (transformed successfully), Dropped (intentionally removed from the stream), or ProcessingFailed (could not be transformed).

To make this process easier, you can now use KinesisFirehoseDataTransformationResponse and serialization functions to quickly encode payloads into base64 data for the stream.
Example where you might want to drop unwanted records from the stream.
from json import JSONDecodeError
from typing import Dict

from aws_lambda_powertools.utilities.data_classes import (
    KinesisFirehoseDataTransformationRecord,
    KinesisFirehoseDataTransformationResponse,
    KinesisFirehoseEvent,
    event_source,
)
from aws_lambda_powertools.utilities.serialization import base64_from_json
from aws_lambda_powertools.utilities.typing import LambdaContext


@event_source(data_class=KinesisFirehoseEvent)
def lambda_handler(event: KinesisFirehoseEvent, context: LambdaContext):
    result = KinesisFirehoseDataTransformationResponse()

    for record in event.records:
        try:
            payload: Dict = record.data_as_json  # decodes and deserializes the base64 JSON string

            # generate data to return
            transformed_data = {"tool_used": "powertools_dataclass", "original_payload": payload}
            processed_record = KinesisFirehoseDataTransformationRecord(
                record_id=record.record_id,
                data=base64_from_json(transformed_data),
            )
        except JSONDecodeError:
            # our producers ingest JSON payloads only; drop malformed records from the stream
            processed_record = KinesisFirehoseDataTransformationRecord(
                record_id=record.record_id,
                data=record.data,
                result="Dropped",
            )

        result.add_record(processed_record)

    # return transformed records
    return result.asdict()
When rotating secrets with Secrets Manager, it invokes your Lambda function in four potential steps:

1. createSecret: create a new version of the secret.
2. setSecret: change the credentials in the database or service.
3. testSecret: test the new secret version.
4. finishSecret: finish the rotation.

You can now use SecretsManagerEvent to access the event structure more easily, and combine it with the Parameters utility to retrieve secrets and perform rotation operations.
from aws_lambda_powertools.utilities import parameters
from aws_lambda_powertools.utilities.data_classes import SecretsManagerEvent, event_source

secrets_provider = parameters.SecretsProvider()


@event_source(data_class=SecretsManagerEvent)
def lambda_handler(event: SecretsManagerEvent, context):
    # Getting the secret value using the Parameters utility
    # See https://docs.powertools.aws.dev/lambda/python/latest/utilities/parameters/
    secret = secrets_provider.get(event.secret_id, VersionId=event.version_id, VersionStage="AWSCURRENT")

    if event.step == "setSecret":
        # Perform any secret rotation logic, e.g., change DB password
        # Check more examples: https://github.com/aws-samples/aws-secrets-manager-rotation-lambdas
        print("Rotating secret...")

    return secret
- dd1770c to c4890ab in /docs (#3078) by @dependabot

@TonySherman, @dependabot, @dependabot[bot], @github-actions, @github-actions[bot], @heitorlessa, @leandrodamascena, @roger-zhangg and @sthulb
This release brings custom serialization/deserialization to Idempotency, and Middleware support in Event Handler (API Gateway REST/HTTP, ALB, Lambda Function URL, VPC Lattice). Oh, didn't I mention some bug fixes too?

Big welcome to the new contributors: @adriantomas, @aradyaron, @nejcskofic, @waveFrontSet
Docs. Huge thanks to @aradyaron!
Previously, any function annotated with @idempotent_function would have its return value serialized as a plain JSON object. This was challenging for customers using Pydantic, Dataclasses, or any custom types.
You can now use output_serializer
to automatically serialize the return type for Dataclasses or Pydantic, and bring your own serializer/deserializer too!
from aws_lambda_powertools.utilities.idempotency import (
    DynamoDBPersistenceLayer,
    IdempotencyConfig,
    idempotent_function,
)
from aws_lambda_powertools.utilities.idempotency.serialization.pydantic import PydanticSerializer
from aws_lambda_powertools.utilities.parser import BaseModel
from aws_lambda_powertools.utilities.typing import LambdaContext

dynamodb = DynamoDBPersistenceLayer(table_name="IdempotencyTable")
config = IdempotencyConfig(event_key_jmespath="order_id")  # see Choosing a payload subset section


class OrderItem(BaseModel):
    sku: str
    description: str


class Order(BaseModel):
    item: OrderItem
    order_id: int


class OrderOutput(BaseModel):
    order_id: int


@idempotent_function(
    data_keyword_argument="order",
    config=config,
    persistence_store=dynamodb,
    output_serializer=PydanticSerializer,
)
def process_order(order: Order) -> OrderOutput:  # output type is inferred from the return annotation
    return OrderOutput(order_id=order.order_id)


def lambda_handler(event: dict, context: LambdaContext):
    config.register_lambda_context(context)  # see Lambda timeouts section

    order_item = OrderItem(sku="fake", description="sample")
    order = Order(item=order_item, order_id=1)

    # `order` parameter must be passed as a keyword argument to work
    process_order(order=order)
Docs. Huge thanks to @walmsles for the implementation and marvelous illustrations!
You can now bring your own middleware to run logic before or after requests when using Event Handler.
The goal continues to be providing built-in features over middlewares, so you don't have to own boilerplate code. That said, we recognize we can't cover virtually every use case; that's where middleware comes in!
Example using per-route and global middlewares
import middleware_global_middlewares_module
import requests

from aws_lambda_powertools import Logger
from aws_lambda_powertools.event_handler import APIGatewayRestResolver, Response

app = APIGatewayRestResolver()
logger = Logger()
app.use(middlewares=[middleware_global_middlewares_module.log_request_response])


@app.get("/todos", middlewares=[middleware_global_middlewares_module.inject_correlation_id])
def get_todos():
    todos: Response = requests.get("https://jsonplaceholder.typicode.com/todos")
    todos.raise_for_status()

    return {"todos": todos.json()[:10]}


@logger.inject_lambda_context
def lambda_handler(event, context):
    return app.resolve(event, context)
- f4764d1 to dd1770c in /docs (#3044) by @dependabot
- b1f7f94 to f4764d1 in /docs (#3031) by @dependabot
- 97da15b to b1f7f94 in /docs (#3021) by @dependabot

@adriantomas, @aradyaron, @dependabot, @dependabot[bot], @github-actions, @github-actions[bot], @nejcskofic, @walmsles and @waveFrontSet
This patch release primarily addresses a fix for customers who utilize default tags and metric-specific tags within the Datadog Metrics provider. Tags are now merged seamlessly, effectively resolving precedence conflicts that can arise when using tags with the same key.
The newly generated metric is now:
{
    "m": "SuccessfulBooking",
    "v": 1,
    "e": 1692736997,
    "t": [
        "product:ticket",
        "flight:AB123"
    ]
}
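The merge behavior can be sketched as follows; merge_tags is a hypothetical helper for illustration, not the provider's API:

```python
def merge_tags(default_tags: dict, metric_tags: dict) -> list:
    # Metric-specific tags take precedence over default tags with the same key,
    # so a duplicate key never appears twice in the emitted tag list
    merged = {**default_tags, **metric_tags}
    return [f"{key}:{value}" for key, value in merged.items()]


print(merge_tags({"product": "ticket", "env": "prod"}, {"flight": "AB123"}))
```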
:star2: Huge thanks to @ecokes for reporting and reproducing it.
- cd3a522 to 97da15b in /docs (#2987) by @dependabot

@dependabot, @dependabot[bot], @github-actions, @github-actions[bot], @leandrodamascena and @rubenfonseca