Provides a token-optimized JSON converter and context manager to reduce token usage in MCP workflows.
Configuration
```json
{
  "mcpServers": {
    "aj-geddes-toon-context-mcp": {
      "command": "python",
      "args": [
        "-m",
        "src.server"
      ]
    }
  }
}
```

You can run the TOON-MCP server to automatically compress verbose JSON into Token-Optimized Object Notation (TOON), reducing token usage in AI-assisted workflows while preserving round-trip accuracy. Smaller payloads to MCP clients and engines mean faster conversations and better token efficiency across your projects.
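To get a feel for the kind of savings compact encodings target, the sketch below compares a pretty-printed JSON payload with a minified one, using character counts as a rough proxy for tokens. The payload and the proxy are illustrative assumptions; the actual TOON encoding (not shown here) typically compacts further by factoring out repeated keys.

```python
import json

# Illustrative payload: a uniform list of records, the shape that
# TOON-style encodings compress best because keys repeat per object.
payload = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
    {"id": 3, "name": "Carol", "role": "user"},
]

pretty = json.dumps(payload, indent=2)                 # verbose form
minified = json.dumps(payload, separators=(",", ":"))  # compact form

# Character counts as a rough proxy for token counts.
saved = 1 - len(minified) / len(pretty)
print(f"pretty: {len(pretty)} chars, minified: {len(minified)} chars, "
      f"~{saved:.0%} smaller")
```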
Set up the TOON-MCP server and connect it to your MCP client: start the local server and point your client at it, so JSON responses are automatically converted into TOON for transmission and back to JSON when the client needs the original structure. You can also use the server to monitor token usage, analyze potential savings, and proactively optimize tool outputs during conversations.
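The round-trip guarantee can be illustrated with a toy tabular encoding for uniform record lists. This is a simplified stand-in, not the actual TOON grammar: shared keys are written once as a header, and decoding restores the original objects exactly.

```python
import json

def encode_records(records):
    """Toy tabular encoding: one header of shared keys, one row per record.
    Values are JSON-encoded so the transform is lossless. Illustrative only;
    assumes all records share the same keys."""
    keys = list(records[0].keys())
    lines = ["\t".join(keys)]
    for rec in records:
        lines.append("\t".join(json.dumps(rec[k]) for k in keys))
    return "\n".join(lines)

def decode_records(text):
    """Inverse of encode_records: rebuild the original list of dicts."""
    header, *rows = text.split("\n")
    keys = header.split("\t")
    return [
        {k: json.loads(v) for k, v in zip(keys, row.split("\t"))}
        for row in rows
    ]

records = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
packed = encode_records(records)
assert decode_records(packed) == records  # lossless round-trip
```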
Typical usage patterns include: converting API responses to TOON before sending them to the MCP client, analyzing token usage across conversations to identify optimization opportunities, and enabling auto-conversion for tool outputs to maximize efficiency. You can also configure pre-commit-style checks that suggest TOON conversions for JSON files in your codebase.
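Under the hood, MCP clients talk to stdio servers with JSON-RPC 2.0 messages. The sketch below builds the kind of `tools/call` request a client might send; the tool name `json_to_toon` and its arguments are hypothetical placeholders, since the real names are defined by the server (query them with `tools/list`).

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 request for MCP's tools/call method."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool name and arguments -- check the server's actual
# tool list for the real names and schemas.
request = make_tool_call(1, "json_to_toon", {"data": {"id": 1, "name": "Alice"}})
print(json.dumps(request))
```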
Prerequisites: You need Python 3.10 or higher and the pip package manager.
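A quick way to confirm both prerequisites before installing (assumes `python3` and `pip` are on your PATH):

```shell
# Exits non-zero if the interpreter is older than 3.10.
python3 -c 'import sys; sys.exit(0 if sys.version_info >= (3, 10) else 1)' \
  && echo "Python 3.10+ found" \
  || echo "Python 3.10+ required"

# Confirm pip is available.
pip --version
```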
Step 1: Clone the project repository.

```shell
git clone https://github.com/aj-geddes/toon-context-mcp.git
```
Step 2: Open a terminal and navigate to the MCP server folder.

```shell
cd toon-context-mcp/mcp-server-toon
```
Step 3: Install the TOON-MCP package in editable mode.

```shell
pip install -e .
```

Step 4: Run the MCP server locally using Python.

```shell
python -m src.server
```

Optional: Build or run in Docker for easier deployment. Build the image and run a container, or start with Docker Compose as needed.
```shell
# Build the Docker image
docker build -t toon-mcp-server:latest .

# Run the container
docker run -i toon-mcp-server:latest

# Or start with Docker Compose
docker-compose up -d
```

Configure your MCP client to connect to the local TOON-MCP server. If you are using Claude Desktop or a similar MCP client, specify a stdio-based server with the Python runtime and the module path that hosts the server.
```json
{
  "mcpServers": {
    "toon": {
      "command": "python",
      "args": ["-m", "src.server"],
      "cwd": "/path/to/toon-context-mcp/mcp-server-toon"
    }
  }
}
```

The server provides tools to convert between JSON and TOON, analyze optimization patterns, and calculate token savings. It also supports token monitoring to help you understand usage across conversations and suggests optimization opportunities.
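If the client rejects the configuration, a small check like the one below can catch structural mistakes before you restart it. The required keys (`command`, `args`) mirror the configuration snippet above; treating `cwd` as optional is an assumption of this sketch.

```python
import json

CONFIG = """
{
  "mcpServers": {
    "toon": {
      "command": "python",
      "args": ["-m", "src.server"],
      "cwd": "/path/to/toon-context-mcp/mcp-server-toon"
    }
  }
}
"""

def check_mcp_config(text):
    """Return a list of problems found in an mcpServers config snippet."""
    problems = []
    try:
        cfg = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for name, server in cfg.get("mcpServers", {}).items():
        if "command" not in server:
            problems.append(f"{name}: missing 'command'")
        if not isinstance(server.get("args", []), list):
            problems.append(f"{name}: 'args' must be a list")
    return problems

print(check_mcp_config(CONFIG))  # [] means the snippet parses cleanly
```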
Keep the server up to date with security patches and monitor token usage to avoid leaking sensitive payloads through TOON reductions. Regularly review optimization rules to ensure they do not alter meaning or critical fields.
If the server fails to start, verify that Python 3.10+ is installed and that you are in the correct directory. Check that the server module path matches the runtime command and that the dependencies are installed with pip. For Docker, ensure the container has network access and that the correct image tag is used.
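The first two checks can be automated; the sketch below mirrors the troubleshooting list (Python version, module resolvability). The module name `src.server` comes from the run command above; everything else is illustrative.

```python
import importlib.util
import sys

def diagnose(module="src.server"):
    """Return a list of likely reasons the server would fail to start."""
    problems = []
    if sys.version_info < (3, 10):
        problems.append("Python 3.10+ is required")
    try:
        found = importlib.util.find_spec(module) is not None
    except ModuleNotFoundError:
        # The parent package itself is missing, e.g. wrong working directory.
        found = False
    if not found:
        problems.append(
            f"module '{module}' not importable; run from the mcp-server-toon "
            "directory and reinstall with 'pip install -e .'"
        )
    return problems

print(diagnose())
```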
The server exposes the following tools:

- Converts standard JSON structures into TOON format to reduce token usage in MCP communications.
- Converts TOON back to JSON, enabling lossless round-trips between representations.
- Detects optimization patterns in JSON data to identify opportunities for TOON compression.
- Suggests an optimal TOON compression approach based on data characteristics.
- Calculates token savings achievable by applying TOON conversion on given data.
- Converts multiple JSON objects to TOON in a batch operation.
- Monitors token usage across conversations and surfaces metrics and alerts.
- Automatically optimizes tool outputs to maximize token efficiency.
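As a rough sketch of what batch conversion with savings reporting might look like: the tabular packing here is a simplified stand-in for TOON, and character counts are an assumed proxy for tokens, not the server's actual accounting.

```python
import json

def pack(records):
    """Toy compact encoding for a uniform record list (illustrative only)."""
    keys = list(records[0].keys())
    rows = [",".join(json.dumps(r[k]) for k in keys) for r in records]
    return "|".join(keys) + "\n" + "\n".join(rows)

def batch_report(batches):
    """Convert several JSON arrays and report per-batch size reduction."""
    report = []
    for batch in batches:
        before = len(json.dumps(batch, separators=(",", ":")))
        after = len(pack(batch))
        report.append({"before": before, "after": after,
                       "saved_pct": round(100 * (1 - after / before), 1)})
    return report

batches = [
    [{"id": i, "ok": True} for i in range(5)],
    [{"city": c} for c in ("Oslo", "Lima", "Kyoto")],
]
for row in batch_report(batches):
    print(row)
```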