This skill helps you design and deploy production-ready AWS serverless apps using Lambda, API Gateway, DynamoDB, and event-driven patterns.
`npx playbooks add skill sickn33/antigravity-awesome-skills --skill aws-serverless`

Review the files below or copy the command above to add this skill to your agents.
---
name: aws-serverless
description: "Specialized skill for building production-ready serverless applications on AWS. Covers Lambda functions, API Gateway, DynamoDB, SQS/SNS event-driven patterns, SAM/CDK deployment, and cold start optimization."
source: vibeship-spawner-skills (Apache 2.0)
---
# AWS Serverless
## Patterns
### Lambda Handler Pattern
Proper Lambda function structure with error handling
**When to use**:
- Any Lambda function implementation
- API handlers, event processors, scheduled tasks
```javascript
// Node.js Lambda Handler
// handler.js
// Initialize outside handler (reused across invocations)
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, GetCommand } = require('@aws-sdk/lib-dynamodb');
const client = new DynamoDBClient({});
const docClient = DynamoDBDocumentClient.from(client);
// Handler function
exports.handler = async (event, context) => {
// Optional: Don't wait for event loop to clear (Node.js)
context.callbackWaitsForEmptyEventLoop = false;
try {
// Parse input based on event source
const body = typeof event.body === 'string'
? JSON.parse(event.body)
: event.body;
// Business logic
const result = await processRequest(body);
// Return API Gateway compatible response
return {
statusCode: 200,
headers: {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*'
},
body: JSON.stringify(result)
};
} catch (error) {
console.error('Error:', JSON.stringify({
error: error.message,
stack: error.stack,
requestId: context.awsRequestId
}));
return {
statusCode: error.statusCode || 500,
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
error: error.message || 'Internal server error'
})
};
}
};
async function processRequest(data) {
// Your business logic here
const result = await docClient.send(new GetCommand({
TableName: process.env.TABLE_NAME,
Key: { id: data.id }
}));
return result.Item;
}
```
```python
# Python Lambda Handler
# handler.py
import json
import os
import logging
import boto3
from botocore.exceptions import ClientError
# Initialize outside handler (reused across invocations)
logger = logging.getLogger()
logger.setLevel(logging.INFO)
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['TABLE_NAME'])
def handler(event, context):
    try:
        # Parse input based on event source
        body = json.loads(event['body']) if isinstance(event.get('body'), str) else (event.get('body') or {})
        result = table.get_item(Key={'id': body['id']})
        # Return API Gateway compatible response (default=str handles DynamoDB Decimal values)
        return {
            'statusCode': 200,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps(result.get('Item'), default=str)
        }
    except (ClientError, KeyError) as error:
        logger.error('Error: %s (requestId=%s)', error, context.aws_request_id)
        return {
            'statusCode': 500,
            'headers': {'Content-Type': 'application/json'},
            'body': json.dumps({'error': 'Internal server error'})
        }
```
### API Gateway Integration Pattern
REST API and HTTP API integration with Lambda
**When to use**:
- Building REST APIs backed by Lambda
- Need HTTP endpoints for functions
```yaml
# template.yaml (SAM)
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
Function:
Runtime: nodejs20.x
Timeout: 30
MemorySize: 256
Environment:
Variables:
TABLE_NAME: !Ref ItemsTable
Resources:
# HTTP API (recommended for simple use cases)
HttpApi:
Type: AWS::Serverless::HttpApi
Properties:
StageName: prod
CorsConfiguration:
AllowOrigins:
- "*"
AllowMethods:
- GET
- POST
- DELETE
AllowHeaders:
- "*"
# Lambda Functions
GetItemFunction:
Type: AWS::Serverless::Function
Properties:
Handler: src/handlers/get.handler
Events:
GetItem:
Type: HttpApi
Properties:
ApiId: !Ref HttpApi
Path: /items/{id}
Method: GET
Policies:
- DynamoDBReadPolicy:
TableName: !Ref ItemsTable
CreateItemFunction:
Type: AWS::Serverless::Function
Properties:
Handler: src/handlers/create.handler
Events:
CreateItem:
Type: HttpApi
Properties:
ApiId: !Ref HttpApi
Path: /items
Method: POST
Policies:
- DynamoDBCrudPolicy:
TableName: !Ref ItemsTable
# DynamoDB Table
ItemsTable:
Type: AWS::DynamoDB::Table
Properties:
AttributeDefinitions:
- AttributeName: id
AttributeType: S
KeySchema:
- AttributeName: id
KeyType: HASH
BillingMode: PAY_PER_REQUEST
Outputs:
ApiUrl:
Value: !Sub "https://${HttpApi}.execute-api.${AWS::Region}.amazonaws.com/prod"
```
```javascript
// src/handlers/get.js
const { getItem } = require('../lib/dynamodb');
exports.handler = async (event) => {
const id = event.pathParameters?.id;
if (!id) {
return {
statusCode: 400,
body: JSON.stringify({ error: 'Missing id parameter' })
};
}
  const item = await getItem(id);
  if (!item) {
    return { statusCode: 404, body: JSON.stringify({ error: 'Item not found' }) };
  }
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(item)
  };
};
```
### Event-Driven SQS Pattern
Lambda triggered by SQS for reliable async processing
**When to use**:
- Decoupled, asynchronous processing
- Need retry logic and DLQ
- Processing messages in batches
```yaml
# template.yaml
Resources:
ProcessorFunction:
Type: AWS::Serverless::Function
Properties:
Handler: src/handlers/processor.handler
Events:
SQSEvent:
Type: SQS
Properties:
Queue: !GetAtt ProcessingQueue.Arn
BatchSize: 10
FunctionResponseTypes:
- ReportBatchItemFailures # Partial batch failure handling
ProcessingQueue:
Type: AWS::SQS::Queue
Properties:
VisibilityTimeout: 180 # 6x Lambda timeout
RedrivePolicy:
deadLetterTargetArn: !GetAtt DeadLetterQueue.Arn
maxReceiveCount: 3
DeadLetterQueue:
Type: AWS::SQS::Queue
Properties:
MessageRetentionPeriod: 1209600 # 14 days
```
```javascript
// src/handlers/processor.js
exports.handler = async (event) => {
const batchItemFailures = [];
for (const record of event.Records) {
try {
const body = JSON.parse(record.body);
await processMessage(body);
} catch (error) {
console.error(`Failed to process message ${record.messageId}:`, error);
// Report this item as failed (will be retried)
batchItemFailures.push({
itemIdentifier: record.messageId
});
}
}
// Return failed items for retry
return { batchItemFailures };
};
async function processMessage(message) {
// Your processing logic
console.log('Processing:', message);
// Simulate work
await saveToDatabase(message);
}
```
```python
# Python version
import json
import logging
logger = logging.getLogger()
def handler(event, context):
batch_item_failures = []
for record in event['Records']:
try:
body = json.loads(record['body'])
process_message(body)
except Exception as e:
logger.error(f"Failed to process {record['messageId']}: {e}")
batch_item_failures.append({
'itemIdentifier': record['messageId']
})
    return {'batchItemFailures': batch_item_failures}
```
## Anti-Patterns
### ❌ Monolithic Lambda
**Why bad**:
- Large deployment packages cause slow cold starts.
- Individual operations are hard to scale independently.
- A single update affects the entire system.
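A minimal sketch of the contrast (handler and helper names here are hypothetical, and the route keys assume the HTTP API v2 payload): a router-style monolith loads everything on every cold start, while single-purpose handlers like those wired up in the SAM template above stay small and scale independently.
```javascript
// Hypothetical per-operation helpers; real versions would call DynamoDB.
const getItem = async (event) => ({
  statusCode: 200,
  body: JSON.stringify({ id: event.pathParameters?.id })
});
const createItem = async () => ({ statusCode: 201, body: '{}' });

// Anti-pattern: one Lambda routes every operation internally, so the whole
// package (and every dependency) loads on each cold start.
exports.monolithHandler = async (event) => {
  switch (event.routeKey) {                 // HTTP API payload v2 route key
    case 'GET /items/{id}':
      return getItem(event);
    case 'POST /items':
      return createItem(event);
    default:
      return { statusCode: 404, body: JSON.stringify({ error: 'Not found' }) };
  }
};

// Preferred: one small handler per operation, as in the API Gateway pattern
// above, each tuned and deployed independently.
exports.getItemHandler = async (event) => getItem(event);
```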
### ❌ Large Dependencies
**Why bad**:
- Increases deployment package size.
- Slows down cold starts significantly.
- Most of the SDK/library may be unused.
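One common mitigation, sketched below assuming the function only needs DynamoDB reads: import the modular AWS SDK v3 clients instead of the monolithic v2 SDK, so only the code you actually use ships in the package.
```javascript
// Heavy: pulls in the entire AWS SDK v2, most of which this function never uses.
// const AWS = require('aws-sdk');
// const dynamo = new AWS.DynamoDB.DocumentClient();

// Lean: AWS SDK v3 is modular, so only the DynamoDB client gets bundled.
const { DynamoDBClient } = require('@aws-sdk/client-dynamodb');
const { DynamoDBDocumentClient, GetCommand } = require('@aws-sdk/lib-dynamodb');

const docClient = DynamoDBDocumentClient.from(new DynamoDBClient({}));

exports.handler = async (event) => {
  const result = await docClient.send(new GetCommand({
    TableName: process.env.TABLE_NAME,
    Key: { id: event.pathParameters?.id },
  }));
  return { statusCode: 200, body: JSON.stringify(result.Item ?? null) };
};
```
Pairing modular imports with a tree-shaking bundler such as esbuild shrinks the package further.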
### ❌ Synchronous Calls in VPC
**Why bad**:
- VPC-attached Lambdas incur ENI setup overhead.
- Blocking DNS lookups or connection setup during init worsens cold starts.
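If a VPC-only dependency must be reached, one way to keep it off the cold-start path is to open the connection lazily on first use rather than at module load. A rough sketch, assuming a `mysql2` dependency and hypothetical `DB_*` environment variables:
```javascript
const mysql = require('mysql2/promise');   // assumed dependency

let connectionPromise = null;

function getConnection() {
  // Lazily create (then reuse) one connection across warm invocations,
  // so INIT is not extended by DNS lookups and TCP/TLS handshakes.
  if (!connectionPromise) {
    connectionPromise = mysql.createConnection({
      host: process.env.DB_HOST,           // hypothetical configuration
      user: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
    });
  }
  return connectionPromise;
}

exports.handler = async () => {
  const connection = await getConnection();
  const [rows] = await connection.execute('SELECT 1');
  return { statusCode: 200, body: JSON.stringify(rows) };
};
```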
## ⚠️ Sharp Edges
| Issue | Severity | Solution |
|-------|----------|----------|
| Slow cold starts | high | Measure your INIT phase |
| Function timeouts | high | Set an appropriate timeout |
| CPU-bound work runs slowly | high | Increase memory allocation (CPU scales with memory) |
| VPC networking delays or failures | medium | Verify VPC configuration (subnets, security groups, NAT) |
| Handler hangs waiting on open connections | medium | Tell Lambda not to wait for the event loop (`callbackWaitsForEmptyEventLoop = false`) |
| Payload size limits | medium | For large file uploads, upload directly to S3 rather than through the function |
| Recursive trigger loops (e.g., S3 events) | high | Use different buckets/prefixes for input and output |
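For the first row, a rough way to observe cold starts from inside the function (the authoritative number is the `Init Duration` field in the REPORT log line Lambda emits): record a timestamp at module load and log the gap on the first invocation.
```javascript
// Rough cold-start probe: module scope runs once per cold start.
const initStartedAt = Date.now();
let coldStart = true;

exports.handler = async () => {
  if (coldStart) {
    // Approximates time between module load and first invoke;
    // compare against "Init Duration" in the REPORT log line.
    console.log(JSON.stringify({ coldStart: true, initToInvokeMs: Date.now() - initStartedAt }));
    coldStart = false;
  }
  return { statusCode: 200, body: JSON.stringify({ ok: true }) };
};
```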
This skill specializes in building production-ready serverless applications on AWS. It focuses on Lambda function structure, API Gateway integration, DynamoDB access patterns, and event-driven processing with SQS/SNS. It also covers deployment with SAM/CDK and strategies to reduce cold starts and operational risk.
The skill provides patterns, example handlers, and infrastructure templates that show how to initialize resources outside the handler, parse events, handle errors, and return API Gateway-compatible responses. It includes SAM templates for HTTP APIs, DynamoDB table configuration, and SQS-triggered Lambda patterns with partial batch failure handling. Guidance on anti-patterns and "sharp edges" helps you tune timeouts, memory, VPC settings, and retry/DLQ behavior.
**How do I reduce Lambda cold start time?**
Initialize heavy clients outside the handler, minimize package size, avoid attaching to a VPC unless necessary, and consider increasing memory (CPU scales with memory) or using provisioned concurrency for critical endpoints.
**When should I use SQS vs SNS?**
Use SNS for fan-out push notifications to multiple subscribers. Use SQS for durable, decoupled, retryable message processing where consumers poll and process batches (use FIFO queues if ordering matters).
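A small sketch of both calls with the AWS SDK v3 (the `TOPIC_ARN` and `QUEUE_URL` environment variables are assumptions, not part of the templates above):
```javascript
const { SNSClient, PublishCommand } = require('@aws-sdk/client-sns');
const { SQSClient, SendMessageCommand } = require('@aws-sdk/client-sqs');

const sns = new SNSClient({});
const sqs = new SQSClient({});

// Fan-out: every subscriber of the topic (queues, Lambdas, email, ...) gets a copy.
async function publishEvent(detail) {
  await sns.send(new PublishCommand({
    TopicArn: process.env.TOPIC_ARN,        // assumed environment variable
    Message: JSON.stringify(detail),
  }));
}

// Point-to-point: one consumer polls the queue and processes the message (with retries/DLQ).
async function enqueueJob(job) {
  await sqs.send(new SendMessageCommand({
    QueueUrl: process.env.QUEUE_URL,        // assumed environment variable
    MessageBody: JSON.stringify(job),
  }));
}

module.exports = { publishEvent, enqueueJob };
```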
**How do I handle partial failures when processing SQS batches?**
Enable ReportBatchItemFailures and return the identifiers of failed records so only those items are retried. Log failure context and route unrecoverable messages to a DLQ for later inspection.
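For that later inspection of the DLQ, a rough sketch using the AWS SDK v3 (the `DLQ_URL` environment variable is an assumption):
```javascript
const { SQSClient, ReceiveMessageCommand, DeleteMessageCommand } = require('@aws-sdk/client-sqs');

const sqs = new SQSClient({});

// Pull a few messages off the DLQ, inspect them, and delete the ones that are
// safe to discard; anything else can be replayed or fixed by hand.
async function inspectDeadLetters() {
  const { Messages = [] } = await sqs.send(new ReceiveMessageCommand({
    QueueUrl: process.env.DLQ_URL,          // assumed environment variable
    MaxNumberOfMessages: 10,
    WaitTimeSeconds: 5,
  }));

  for (const message of Messages) {
    console.log('Dead-lettered message:', message.MessageId, message.Body);
    // Delete only after the failure has been diagnosed or the message replayed.
    await sqs.send(new DeleteMessageCommand({
      QueueUrl: process.env.DLQ_URL,
      ReceiptHandle: message.ReceiptHandle,
    }));
  }
}

inspectDeadLetters().catch(console.error);
```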