Multi-provider AI translation tool for JSON i18n files. Features: Google Gemini & Ollama support, incremental caching, multi-file deduplication, MCP server, batch processing, and cross-platform compatibility. Ideal for developers managing multilingual applications.
You can run translator-ai as an MCP server to empower AI assistants to translate JSON files directly. By exposing a simple local process interface, you gain fast, cache-aware translations with multi-provider support, while keeping your JSON structure intact and minimizing API usage.
To use translator-ai as an MCP server, configure your MCP client to connect via a local stdio server. The server runs as a child process that you invoke from your MCP client, translating JSON content and returning results while preserving the original structure.
From your MCP client, you can initiate translation tasks by sending a request that specifies input files or patterns, a target language, and an output destination. The server handles deduplication across multiple files, caches previous translations, and can translate to multiple languages in a single run. You can also run in dry-run mode to preview translations without calling any providers.
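Under the hood, such a request travels over stdio as a JSON-RPC tools/call message. The sketch below shows the general shape of that message only; the tool name translate_json and the argument names are illustrative assumptions, not the package's documented schema.

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "translate_json",
    "arguments": {
      "inputFiles": ["locales/en.json"],
      "targetLanguage": "es",
      "outputPath": "locales/es.json",
      "dryRun": false
    }
  }
}
```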
Typical workflows include translating a single file, translating a batch of files with deduplication, or integrating with a publishing pipeline where content is translated as part of a build step. The server supports auto-detecting the source language, preserving URLs and other formats, and optionally emitting metadata to help track translations.
The only system prerequisites are Node.js and npm. Depending on which translation provider you use, you will also need provider configuration: an API key for a cloud provider such as Google Gemini, or a locally running provider such as Ollama.
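As a quick sanity check before wiring up an MCP client, you can confirm both tools are on your PATH:

```shell
# Report whether Node.js and npm are available; translator-ai-mcp is
# launched via npx, which ships with npm.
for tool in node npm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```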
Install the MCP server package globally so you can invoke it from any working directory.
Install the MCP server locally in your project if you prefer project-scoped usage.
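After installing either way, you can confirm where the binary resolved from. The package and binary name translator-ai-mcp is assumed here from the npx invocation used in the MCP configuration.

```shell
# Check for a global install (binary on PATH) or a project-local one
# (binary under node_modules/.bin).
if command -v translator-ai-mcp >/dev/null 2>&1; then
  echo "translator-ai-mcp: on PATH (global install)"
elif [ -x node_modules/.bin/translator-ai-mcp ]; then
  echo "translator-ai-mcp: project-local install"
else
  echo "translator-ai-mcp: not installed yet"
fi
```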
To run translator-ai as an MCP server, the following configuration is provided as an example. It uses a local stdio command that the MCP client will spawn and communicate with.
{
"mcpServers": {
"translator_ai": {
"type": "stdio",
"name": "translator_ai",
"command": "npx",
"args": ["-y", "translator-ai-mcp"],
"env": {
"GEMINI_API_KEY": "YOUR_GEMINI_API_KEY_HERE",
"TRANSLATOR_PROVIDER": "ollama"
}
}
}
}
From the MCP client side you can trigger these core actions exposed by translator-ai via the MCP interface: translating a single JSON file, translating multiple files with deduplication, and translating to multiple languages in one call. You can enable metadata in outputs, run in dry-run mode to preview translations, and verify that all source keys exist in the translated output.
When using cloud providers, keep your API keys secure. Do not hard-code secrets in source files. Use environment variable management practices and restrict access to cache files that store translation mappings.
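One common pattern is to keep the key in a git-ignored env file that the shell exports before spawning the MCP client; the file names below are illustrative, not anything translator-ai requires.

```shell
# Demo in a scratch directory; in a real project run this at the repo root.
cd "$(mktemp -d)"

# Keep the key in a git-ignored env file rather than hard-coded in the MCP config.
printf 'GEMINI_API_KEY=%s\n' 'YOUR_GEMINI_API_KEY_HERE' > .env
echo '.env' >> .gitignore
chmod 600 .env                  # restrict access, as you should for cache files

set -a; . ./.env; set +a        # export the variables for the MCP client spawn
[ -n "$GEMINI_API_KEY" ] && echo 'GEMINI_API_KEY loaded'   # prints "GEMINI_API_KEY loaded"
```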
Translate a single JSON file, given a target language and an output path; this is the core translation operation within MCP workflows.
Translate multiple files with deduplication, supporting a common target language and an output pattern for organized results.
Preview translations without calling translation providers to estimate work and cache impact.
Optionally attach translation metadata to output files to help track source, target, and timing.
Verify that all source keys exist in the translated output to ensure completeness.
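The same completeness check can be reproduced outside the server with a small Node one-liner that flattens both files to dotted key paths and diffs them. The file names and sample content below are examples only; this is not translator-ai's own verification code.

```shell
# Scratch directory with tiny sample locale files; point at your real files in practice.
cd "$(mktemp -d)"
printf '%s' '{"title":"Hello","nav":{"home":"Home"}}' > en.json
printf '%s' '{"title":"Hola","nav":{"home":"Inicio"}}' > es.json

# Flatten each JSON tree to "a.b.c" key paths and report any source keys
# missing from the translation; exits non-zero if the output is incomplete.
node -e '
const flat = (obj, prefix = []) =>
  Object.entries(obj).flatMap(([k, v]) =>
    v && typeof v === "object" ? flat(v, [...prefix, k]) : [[...prefix, k].join(".")]);
const src = flat(require("./en.json"));
const dst = new Set(flat(require("./es.json")));
const missing = src.filter((k) => !dst.has(k));
if (missing.length) { console.error("missing keys:", missing.join(", ")); process.exit(1); }
console.log("all source keys present");
'
```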