
agent-tool-builder skill

/skills/agent-tool-builder

This skill helps design crystal-clear tool schemas and error handling to prevent hallucinations and improve LLM tool interactions.

This is most likely a fork of the agent-tool-builder skill from xfstudio.
npx playbooks add skill sickn33/antigravity-awesome-skills --skill agent-tool-builder

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
1.6 KB
---
name: agent-tool-builder
description: "Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently, or costs 10x more tokens than necessary. This skill covers tool design from schema to error handling: JSON Schema best practices, description writing that actually helps the LLM, validation, and the emerging MCP standard that's becoming the lingua franca for AI tools. Key insight: Tool descriptions are more important than tool implementations."
source: vibeship-spawner-skills (Apache 2.0)
---

# Agent Tool Builder

You are an expert in the interface between LLMs and the outside world.
You've seen tools that work beautifully and tools that cause agents to
hallucinate, loop, or fail silently. The difference is almost always
in the design, not the implementation.

Your core insight: The LLM never sees your code. It only sees the schema
and description. A perfectly implemented tool with a vague description
will fail. A simple tool with crystal-clear documentation will succeed.

You push for explicit error handling: a tool should return structured errors
that tell the LLM what went wrong and how to recover, never fail silently.

## Capabilities

- agent-tools
- function-calling
- tool-schema-design
- mcp-tools
- tool-validation
- tool-error-handling

## Patterns

### Tool Schema Design

Creating clear, unambiguous JSON Schema for tools
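A minimal sketch of this pattern in Python; the `get_weather` tool and its fields are hypothetical, chosen only to illustrate typed properties, enums, required fields, and closed schemas:

```python
# Hypothetical weather tool: every field is typed, constrained, and described.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get current weather for a city. Returns temperature and conditions.",
    "parameters": {
        "type": "object",
        "properties": {
            "city": {
                "type": "string",
                "description": "City name, e.g. 'Berlin'. Not a country or region.",
            },
            "units": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],  # enum removes ambiguity
                "description": "Temperature units. Defaults to celsius.",
            },
        },
        "required": ["city"],            # the LLM must always supply a city
        "additionalProperties": False,   # reject invented parameters
    },
}
```

The `enum` and `additionalProperties: false` constraints do most of the work: they shrink the space of calls the model can produce to exactly the calls the tool accepts.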

### Tool with Input Examples

Using examples to guide LLM tool usage
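A sketch of the examples pattern, using JSON Schema's `examples` keyword on both a single field and the whole parameter object (the `search_orders` tool is illustrative):

```python
# Hypothetical order-search tool: examples show the LLM a known-good call shape.
search_orders_tool = {
    "name": "search_orders",
    "description": "Search customer orders by status and date range.",
    "parameters": {
        "type": "object",
        "properties": {
            "status": {
                "type": "string",
                "enum": ["pending", "shipped", "delivered"],
            },
            "since": {
                "type": "string",
                "description": "ISO 8601 date, inclusive lower bound.",
                "examples": ["2024-01-01"],  # field-level example pins the format
            },
        },
        "required": ["status"],
        # Object-level example shows a complete, valid invocation.
        "examples": [{"status": "shipped", "since": "2024-01-01"}],
    },
}
```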

### Tool Error Handling

Returning errors that help the LLM recover
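One way to sketch this in Python: a wrapper that converts exceptions into structured results the LLM can act on instead of letting the call fail silently (the field names `error_code` and `hint` are illustrative):

```python
import json

def run_tool(handler, args):
    """Wrap a tool call so the LLM always receives a structured, actionable result."""
    try:
        return json.dumps({"ok": True, "result": handler(**args)})
    except TypeError as exc:  # missing or unexpected arguments
        return json.dumps({
            "ok": False,
            "error_code": "missing_argument",
            "message": str(exc),
            "hint": "Call the tool again supplying every required argument.",
        })
    except ValueError as exc:  # argument present but semantically invalid
        return json.dumps({
            "ok": False,
            "error_code": "invalid_argument",
            "message": str(exc),
            "hint": "Check argument types and allowed values, then retry.",
        })
```

The `hint` field is the key design choice: it tells the model what to do next, turning an error into a recovery step rather than a dead end.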

## Anti-Patterns

### ❌ Vague Descriptions

A description like "searches stuff" forces the LLM to guess, and guesses become hallucinated arguments.

### ❌ Silent Failures

Returning empty results or swallowing exceptions leaves the agent unable to tell success from failure, so it loops or invents data.

### ❌ Too Many Tools

Large flat tool lists bloat the prompt and make similar tools easy to confuse; prefer a few well-documented primitives.

## Related Skills

Works well with: `multi-agent-orchestration`, `api-designer`, `llm-architect`, `backend`

Overview

This skill teaches how to design robust tools that let AI agents interact reliably with external systems. It emphasizes clear JSON Schema, helpful descriptions, validation, and explicit error handling so agents avoid hallucination, silent failure, and excessive token use. The focus is on the interface and documentation that the LLM actually sees, not on backend implementation.

How this skill works

The skill inspects tool schemas, descriptions, and examples to ensure they are unambiguous and LLM-friendly. It audits JSON Schema for required fields, types, and constraints, checks examples for representativeness, and evaluates error responses for actionable recovery guidance. It also introduces MCP-compatible patterns to standardize tool metadata and improve interoperability.

When to use it

  • Designing new agent tools or function-call APIs intended for LLM consumption
  • Auditing existing tools that cause hallucinations, loops, or silent failures
  • Standardizing tool interfaces across teams using MCP or JSON Schema
  • Improving tool descriptions to reduce token costs and increase success rates
  • Implementing robust error handling and recovery strategies for agents

Best practices

  • Write concise, explicit descriptions that state purpose, inputs, outputs, and failure modes
  • Prefer narrow, well-typed JSON Schema over permissive schemas to avoid ambiguous LLM behavior
  • Include representative input/output examples to guide the model toward correct usage
  • Return structured, machine-readable errors with codes, causes, and suggested recovery steps
  • Limit the number of sibling tools; compose complex behavior from a few well-documented primitives

Example use cases

  • Designing a search tool schema with typed filters, required pagination, and example queries
  • Converting legacy endpoints into MCP-compatible tools with clear metadata and examples
  • Adding validation middleware that rejects malformed inputs and returns actionable error objects
  • Creating a tool set for a shopping agent where descriptions prevent price-lookup hallucinations
  • Auditing a tool collection to remove redundant tools and consolidate functionality
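The validation-middleware use case above can be sketched with a minimal hand-rolled checker; this covers only required fields and basic types, and the schema shape mirrors the JSON Schema fragments elsewhere on this page:

```python
def validate_input(schema, args):
    """Minimal pre-flight validator: reject malformed inputs before the tool runs,
    returning a list of actionable error objects (empty list means valid)."""
    type_map = {"string": str, "integer": int, "number": (int, float), "boolean": bool}
    errors = []
    # Every required field must be present.
    for field in schema.get("required", []):
        if field not in args:
            errors.append({"field": field, "error": "missing required field"})
    # Every supplied field must be known and correctly typed.
    for field, value in args.items():
        spec = schema.get("properties", {}).get(field)
        if spec is None:
            errors.append({"field": field, "error": "unknown field"})
        elif "type" in spec and not isinstance(value, type_map[spec["type"]]):
            errors.append({"field": field, "error": f"expected {spec['type']}"})
    return errors
```

In production a full validator library would replace this sketch, but the shape of the return value is the point: per-field error objects the agent can read and correct.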

FAQ

Why focus on schema and descriptions rather than implementation?

The LLM never executes your code; it only reads the schema and description. Clear interface text guides the model to produce valid calls and reduces misinterpretation.

How should I structure errors for agents?

Return structured errors with an error_code, human_message, machine_hint, and optional remediation steps so the agent can decide whether to retry, ask for clarification, or fallback.
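A small sketch of that error shape, using the field names from the answer above (the `rate_limited` example values are illustrative):

```python
def make_error(error_code, human_message, machine_hint, remediation=None):
    """Build a structured tool error with code, message, hint, and optional remediation."""
    error = {
        "error_code": error_code,        # stable identifier the agent can branch on
        "human_message": human_message,  # readable explanation for logs and users
        "machine_hint": machine_hint,    # e.g. "retryable" vs "fatal"
    }
    if remediation:
        error["remediation"] = remediation  # concrete next step for the agent
    return error

err = make_error(
    "rate_limited",
    "The API rejected the request due to rate limiting.",
    "retryable",
    remediation="Wait 30 seconds, then retry with the same arguments.",
)
```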