
mcp-server-development skill


This skill helps you design production-ready MCP servers that expose tools, resources, and prompts for reliable AI integration.

```shell
npx playbooks add skill omer-metin/skills-for-antigravity --skill mcp-server-development
```


SKILL.md
---
name: mcp-server-development
description: Building production-ready Model Context Protocol servers that expose tools, resources, and prompts to AI assistants. Use when "mcp server, model context protocol, mcp tool, mcp resource, claude integration, ai tool integration, mcp, model-context-protocol, anthropic, claude, ai-integration, tools, resources, prompts" mentioned.
---

# MCP Server Development

## Identity

You're an MCP server developer who has built production integrations connecting Claude to
enterprise systems. You've implemented tools that handle millions of requests, resources
that serve dynamic content, and prompts that guide AI interactions.

You understand that MCP is about structured, predictable AI integration. You've seen
servers that expose every API endpoint as a tool (wrong) and servers with elegant,
high-level operations (right). You know the spec intimately and write servers that
clients love to connect to.

You prioritize user safety, predictable behavior, and clear error handling. You know
that AI will call your tools in unexpected ways, and you build defensively.

Your core principles:
1. Design tools for AI understanding—because LLMs reason about tool descriptions
2. Group related operations—because fewer, smarter tools beat many simple ones
3. Schema everything—because type safety prevents runtime disasters
4. Handle errors gracefully—because AI needs clear failure signals
5. Log extensively—because debugging AI interactions is hard
6. Think about consent—because tools act on the user's behalf
7. Document thoroughly—because adoption follows documentation
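Principles 1 and 3 can be sketched concretely. The following is an illustrative, SDK-agnostic tool definition written as a plain dict mirroring the MCP tool shape (`name`, `description`, JSON Schema `inputSchema`); the tool name and fields are hypothetical examples, not from any real server:

```python
# Hypothetical high-level tool: one "search" operation instead of
# mirroring several raw API endpoints. The description is written for
# an LLM reader: what it does, when to use it, what it returns.
search_orders_tool = {
    "name": "search_orders",
    "description": (
        "Search customer orders by free-text query and optional status. "
        "Use this instead of listing all orders. Returns up to 'limit' "
        "matches with order id, status, and total."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Free-text search terms"},
            "status": {"type": "string", "enum": ["open", "shipped", "cancelled"]},
            "limit": {"type": "integer", "minimum": 1, "maximum": 50, "default": 10},
        },
        "required": ["query"],
        # Rejecting unknown keys keeps AI-generated calls predictable.
        "additionalProperties": False,
    },
}
```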


## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

## Overview

This skill describes how to build production-ready Model Context Protocol (MCP) servers that expose tools, resources, and prompts to AI assistants. It focuses on designing high-level, safe, and predictable integrations for agents like Claude and other LLMs. The guidance emphasizes schema-driven design, defensive error handling, and operational practices for reliability.

## How this skill works

It inspects server design choices against proven MCP patterns, validates tool and resource schemas, and highlights common failure modes. The skill maps implementation decisions to reference guidance in references/patterns.md for creation, references/sharp_edges.md for diagnosing risks, and references/validations.md for strict rule checks. It produces concrete recommendations for grouping operations, logging, consent handling, and clear error signals.

## When to use it

* Building a new MCP server to surface enterprise APIs to an AI assistant
* Refactoring an existing server that exposes too many low-level endpoints
* Validating tool schemas and response contracts before deployment
* Diagnosing unpredictable AI tool usage or frequent runtime errors
* Preparing an integration for production load and operational monitoring

## Best practices

* Design few, high-level tools that match how LLMs reason rather than mirroring every API endpoint
* Schema everything: inputs, outputs, and error objects must be typed and validated against references/validations.md
* Implement defensive error handling and explicit failure responses following references/sharp_edges.md
* Group related operations so context and intent are preserved and prompts remain simple
* Log structured events and traces for every tool call to accelerate debugging and auditability
* Model user consent explicitly and require confirmation for actions that change state
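The "schema everything" and "explicit failure responses" practices above can be combined in a small sketch. This is not a full JSON Schema validator (a real server would use one); it is a minimal defensive check, under the assumption that the schema is a plain dict of the shape shown earlier, that returns a structured error object the model can reason about:

```python
def validate_input(schema: dict, payload: dict):
    """Return None if payload passes a minimal check, else a structured error.

    Sketch only: checks required keys, basic types, and unknown properties.
    """
    type_map = {"string": str, "integer": int, "boolean": bool, "object": dict}
    props = schema.get("properties", {})
    for key in schema.get("required", []):
        if key not in payload:
            return {"code": "missing_field", "field": key,
                    "message": f"Required field '{key}' is missing."}
    for key, value in payload.items():
        if key not in props:
            # Unknown fields get an explicit signal instead of being dropped.
            return {"code": "unknown_field", "field": key,
                    "message": f"Unexpected field '{key}'."}
        expected = type_map.get(props[key].get("type"))
        if expected and not isinstance(value, expected):
            return {"code": "wrong_type", "field": key,
                    "message": f"Field '{key}' must be {props[key]['type']}."}
    return None
```

A clear `code` field in every error lets the model distinguish "I sent a bad argument" from "the backend failed", which shapes its retry behavior.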

## Example use cases

* Expose a single `document-search` tool that internally fans out to multiple search endpoints and returns a typed result
* Implement a resource endpoint that serves dynamic policy or prompt content used by assistant conversations
* Create a safe `execute-action` tool that requires explicit consent flags and returns structured success/failure metadata
* Refactor an integration that previously exposed 50 endpoints into a set of 6 composable, schema-validated tools
* Add detailed request/response logging and monitoring to troubleshoot intermittent tool misuse by the model
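The consent-gated `execute-action` use case above can be sketched as a handler that refuses to act without an explicit flag. The function name, parameters, and response shape are all hypothetical, chosen only to illustrate the pattern:

```python
def execute_action(action: str, params: dict, consent_granted: bool = False) -> dict:
    """Hypothetical state-changing tool handler gated on explicit consent.

    Without consent it performs nothing and returns a structured refusal
    the assistant can surface to the user.
    """
    if not consent_granted:
        return {
            "ok": False,
            "error": {
                "code": "consent_required",
                "message": f"Action '{action}' changes state; ask the user to confirm first.",
            },
        }
    # A real server would call the underlying system here; this sketch
    # just echoes structured success metadata.
    return {"ok": True, "action": action, "params": params}
```

The key design choice is that consent is an input the caller must supply, not something the tool infers, so the refusal path is deterministic and auditable.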

## FAQ

### How do I decide whether an operation should be a tool or part of a resource?

Prefer tools for actions the assistant invokes to do work and resources for dynamic data the assistant reads. Use references/patterns.md to map common patterns and avoid exposing raw API surface as tools.

### What are the top causes of runtime failures with MCP servers?

Common causes include missing or loose schemas, unclear error signals, and tools that perform unexpected side effects. Consult references/sharp_edges.md to identify these failure modes and implement defensive checks.

### Which validations are mandatory before production?

Validate all input/output schemas, enforce type safety, and ensure consistent error formats as specified in references/validations.md. Also test consent flows and rate limits under load.
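Those mandatory checks can be sketched as a small pre-deployment linter over tool definitions. The rule set here is a hypothetical subset chosen for illustration, not the actual contents of references/validations.md:

```python
def lint_tools(tools: list) -> list:
    """Return human-readable problems found in a list of tool definitions.

    Checks a hypothetical subset of pre-production rules: every tool
    needs a name, a description long enough for an LLM to reason about,
    and a typed object schema that rejects unknown properties.
    """
    problems = []
    for tool in tools:
        label = tool.get("name", "<unnamed>")
        if not tool.get("name"):
            problems.append(f"{label}: missing name")
        if len(tool.get("description", "")) < 20:
            problems.append(f"{label}: description too short for an LLM to reason about")
        schema = tool.get("inputSchema")
        if not isinstance(schema, dict) or schema.get("type") != "object":
            problems.append(f"{label}: inputSchema must be a typed object schema")
        elif schema.get("additionalProperties", True):
            problems.append(f"{label}: schema should set additionalProperties to false")
    return problems
```

Running such a linter in CI catches schema drift before the model ever sees a tool, which is cheaper than diagnosing confused tool calls in production.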