
This skill helps you build AI agents on Cloudflare Workers by integrating tools, MCP, and multiple LLM providers for scalable automation.

npx playbooks add skill secondsky/claude-skills --skill cloudflare-agents


Files (23)

SKILL.md (2.7 KB)
---
name: cloudflare-agents
description: Build AI agents on Cloudflare Workers with MCP integration, tool use, and LLM providers.
license: MIT
---

# Cloudflare Agents

**Last Updated**: 2025-11-21

## Quick Start

```typescript
export default {
  async fetch(request, env, ctx) {
    const agent = {
      tools: [
        // Each tool pairs a name with an async handler the model can invoke
        { name: 'getTodo', handler: async ({ id }) => ({ id, title: 'Task' }) }
      ],
      async run(input) {
        // processWithLLM is a placeholder for the provider call that lets
        // the model select and execute the registered tools
        return await processWithLLM(input, this.tools);
      }
    };

    return Response.json(await agent.run(await request.text()));
  }
};
```

## Core Features

- **Tool Integration**: Register and execute tools
- **LLM Providers**: OpenAI, Anthropic, Google Gemini
- **MCP Protocol**: Model Context Protocol support
- **Cloudflare Bindings**: D1, KV, R2, Durable Objects
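Tool integration boils down to a registry that maps tool names to handlers and dispatches calls by name. A minimal sketch of that idea (the `Tool` type and `makeRegistry` helper are illustrative assumptions, not the skill's actual API):

```typescript
// A tool is a named async handler the model is allowed to invoke.
type Tool = {
  name: string;
  handler: (args: Record<string, unknown>) => Promise<unknown>;
};

// Build a registry that resolves tool calls by name and rejects unknown tools.
function makeRegistry(tools: Tool[]) {
  const byName = new Map(tools.map((t) => [t.name, t] as const));
  return {
    async call(name: string, args: Record<string, unknown>): Promise<unknown> {
      const tool = byName.get(name);
      if (!tool) throw new Error(`Unknown tool: ${name}`);
      return tool.handler(args);
    },
  };
}
```

Failing fast on unknown tool names keeps a misbehaving model from silently invoking nothing.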

## Agent Pattern

```typescript
const agent = {
  tools: [...],
  systemPrompt: 'You are a helpful assistant',
  model: 'gpt-4o',
  async run(input) {
    // Process with LLM
  }
};
```
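The `run` method above typically implements a tool-call loop: ask the model, execute any tool it requests, feed the result back, and repeat until it produces a final answer. A provider-agnostic sketch (the `LlmStep` shape and `runAgent` helper are assumptions for illustration, not the skill's real API):

```typescript
// One step of model output: either a tool invocation or a final answer.
type LlmStep =
  | { type: 'tool_call'; name: string; args: Record<string, unknown> }
  | { type: 'final'; text: string };

async function runAgent(
  input: string,
  tools: Record<string, (args: Record<string, unknown>) => Promise<unknown>>,
  llm: (messages: string[]) => Promise<LlmStep>,
): Promise<string> {
  const transcript = [input];
  for (let i = 0; i < 10; i++) {          // cap iterations to avoid runaway loops
    const step = await llm(transcript);
    if (step.type === 'final') return step.text;
    const tool = tools[step.name];
    if (!tool) throw new Error(`Model requested unknown tool: ${step.name}`);
    const result = await tool(step.args); // execute the requested tool
    transcript.push(JSON.stringify(result)); // feed the result back to the model
  }
  throw new Error('Agent exceeded max tool-call steps');
}
```

The iteration cap is a deliberate guardrail: an LLM that keeps requesting tools would otherwise loop until the Worker hits its CPU limit.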

## Resources

### Core Documentation
- `references/patterns-concepts.md` (317 lines) - What is Cloudflare Agents, patterns & concepts, critical rules, known issues prevention
- `references/configuration-guide.md` (152 lines) - Complete configuration deep dive
- `references/agent-api.md` (115 lines) - Complete Agent Class API reference

### Integration Guides
- `references/http-sse-guide.md` (74 lines) - HTTP & Server-Sent Events
- `references/websockets-guide.md` (110 lines) - WebSocket integration
- `references/state-management.md` (388 lines) - State management, scheduled tasks, workflows
- `references/mcp-integration.md` (130 lines) - Model Context Protocol integration

### Advanced Features
- `references/advanced-features.md` (637 lines) - Browser automation, RAG, AI model integration, calling agents, client APIs

### Error Reference
- `references/error-catalog.md` (10 lines) - Common errors and solutions

### Templates
- `templates/basic-agent.ts` - Basic agent setup
- `templates/browser-agent.ts` - Browser automation
- `templates/calling-agents-worker.ts` - Calling other agents
- `templates/chat-agent-streaming.ts` - Streaming chat agent
- `templates/hitl-agent.ts` - Human-in-the-loop
- `templates/mcp-server-basic.ts` - MCP server integration
- `templates/rag-agent.ts` - RAG implementation
- `templates/react-useagent-client.tsx` - React client integration
- `templates/scheduled-agent.ts` - Scheduled tasks
- `templates/state-sync-agent.ts` - State synchronization
- `templates/websocket-agent.ts` - WebSocket agent
- `templates/workflow-agent.ts` - Workflows integration
- `templates/wrangler-agents-config.jsonc` - Wrangler configuration

**Official Docs**: https://developers.cloudflare.com/cloudflare-for-platforms/cloudflare-agents/

Overview

This skill packages production-ready patterns for building AI agents on Cloudflare Workers with MCP integration, tool orchestration, and multiple LLM providers. It includes templates, integrations, and opinionated defaults to accelerate deploying agents that use Cloudflare bindings like D1, KV, R2, and Durable Objects. The goal is fast, secure, and scalable agent deployments for web and edge use cases.

How this skill works

The skill defines an agent pattern with a tools registry, system prompt, and model selection. Agents run inside Cloudflare Workers and call configured LLM providers (OpenAI, Anthropic, Google Gemini) while optionally exposing a Model Context Protocol (MCP) server. Templates and helper modules handle tool invocation, streaming, SSE/WebSocket I/O, and state synchronization via Cloudflare bindings. You drop an agent into a Worker, register tools and bindings, and the runtime orchestrates LLM calls, tool execution, and context management.
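The state-synchronization piece can be pictured as a store that notifies subscribers on every change; in the real runtime Durable Objects persist this state, but a toy in-memory sketch (all names here are illustrative) captures the shape:

```typescript
// In-memory stand-in for synced agent state: Durable Objects would persist
// this and fan updates out to connected clients in the actual runtime.
function createStateStore<T>(initial: T) {
  let state = initial;
  const listeners = new Set<(s: T) => void>();
  return {
    get: () => state,
    set(next: T) {
      state = next;
      listeners.forEach((fn) => fn(state)); // notify every subscriber
    },
    subscribe(fn: (s: T) => void) {
      listeners.add(fn);
      return () => listeners.delete(fn);    // unsubscribe handle
    },
  };
}
```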

When to use it

  • Deploy edge-native AI agents that need low-latency inference and proximity to users.
  • Build multi-tool agents that call internal APIs, databases, or object stores (D1, R2, KV).
  • Integrate MCP for standardized model context and multi-model workflows.
  • Implement streaming chat, server-sent events, or WebSocket-based agent UIs.
  • Create scheduled workflows, human-in-the-loop tasks, or RAG pipelines at the edge.
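For the streaming and SSE cases above, the wire format is fixed by the Server-Sent Events spec: each event is one or more `field: value` lines terminated by a blank line. A small framing helper (the function name is illustrative) might look like:

```typescript
// Frame one Server-Sent Event. An optional `event:` line names the event
// type; the payload is JSON-encoded on a `data:` line, and the trailing
// blank line terminates the event per the SSE spec.
function sseEvent(data: unknown, event?: string): string {
  const lines = event ? [`event: ${event}`] : [];
  lines.push(`data: ${JSON.stringify(data)}`);
  return lines.join('\n') + '\n\n';
}
```

In a Worker, strings produced this way would be enqueued on a `ReadableStream` returned with a `text/event-stream` content type.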

Best practices

  • Define a minimal, strict systemPrompt and explicit tool interfaces to limit model actions.
  • Use Cloudflare bindings for persistent state and large-object storage; avoid storing secrets in code.
  • Enable MCP when sharing context between agents or models to keep traces reproducible.
  • Prefer streaming templates for chat interfaces to improve perceived latency.
  • Start with provided templates (basic, rag, streaming) and adapt incrementally for production stability.
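The first practice, explicit tool interfaces, usually means validating the model's arguments against a declared schema before the handler runs. A minimal sketch of that gate (the `ArgSpec` shape is an assumption; production code would typically use a full schema validator):

```typescript
// Declare each tool argument's expected primitive type.
type ArgSpec = Record<string, 'string' | 'number' | 'boolean'>;

// Return a list of validation errors; an empty list means the call is safe
// to dispatch to the tool handler.
function validateArgs(spec: ArgSpec, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [key, type] of Object.entries(spec)) {
    if (typeof args[key] !== type) errors.push(`${key}: expected ${type}`);
  }
  return errors;
}
```

Rejecting malformed calls before execution limits what a confused or adversarial prompt can make the agent do.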

Example use cases

  • Edge customer support bot that queries D1 for account data and R2 for attachments.
  • RAG agent that indexes documents into KV/R2 and answers queries with contextual retrieval.
  • Scheduled agent that runs nightly workflows and writes results to D1 for reporting.
  • Browser automation agent that controls headless tasks via Workers and streams progress to clients.
  • Agent orchestration where Workers call other agents and share context via the MCP server.
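The RAG use case above reduces, at retrieval time, to ranking stored chunks by similarity to a query embedding. Embedding generation is provider-specific, but the ranking step can be sketched directly (function names here are illustrative):

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the texts of the k chunks most similar to the query embedding.
function topK(
  query: number[],
  chunks: { text: string; vec: number[] }[],
  k: number,
): string[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.vec) - cosine(query, x.vec))
    .slice(0, k)
    .map((c) => c.text);
}
```

At the edge, the chunk vectors would live in KV/R2 (or a vector store) and the winning texts would be stuffed into the model's context before answering.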

FAQ

Which LLM providers are supported?

Built-in support exists for OpenAI, Anthropic, and Google Gemini; provider adapters are modular for custom integrations.

Can I use Cloudflare data bindings?

Yes. The skill includes patterns and templates for D1, KV, R2, and Durable Objects for state, caching, and storage.