translator-ai is a powerful JSON i18n translation tool that supports multiple AI providers including Google Gemini, OpenAI, and Ollama/DeepSeek. It features intelligent caching, multi-file deduplication, and batch processing to optimize translation efficiency.
## Installation

```bash
# Install globally
npm install -g translator-ai

# Or install locally in your project
npm install translator-ai
```
### Google Gemini (default)

Create a `.env` file in your project root or set the environment variable:

```bash
GEMINI_API_KEY=your_gemini_api_key_here
```

Get your API key from Google AI Studio.
### OpenAI

Create a `.env` file in your project root or set the environment variable:

```bash
OPENAI_API_KEY=your_openai_api_key_here
```

Get your API key from OpenAI Platform.
### Ollama (local)

For completely local translation without API costs:

```bash
ollama pull deepseek-r1:latest
```

Then pass the `--provider ollama` flag:

```bash
translator-ai source.json -l es -o spanish.json --provider ollama
```
## Quick Start

```bash
# Translate a single file
translator-ai source.json -l es -o spanish.json

# Translate multiple files with deduplication
translator-ai src/locales/en/*.json -l es -o "{dir}/{name}.{lang}.json"

# Use glob patterns
translator-ai "src/**/*.en.json" -l fr -o "{dir}/{name}.fr.json"
```
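The output patterns above use `{dir}`, `{name}`, and `{lang}` placeholders, each resolved per input file. A minimal illustrative sketch of how such an expansion presumably works (this is not the tool's actual implementation):

```python
from pathlib import Path

def expand_pattern(pattern: str, input_file: str, lang: str) -> str:
    """Illustrative expansion of the {dir}/{name}/{lang} placeholders."""
    p = Path(input_file)
    return (pattern
            .replace("{dir}", str(p.parent))   # directory of the input file
            .replace("{name}", p.stem)         # filename without extension
            .replace("{lang}", lang))          # target language code

print(expand_pattern("{dir}/{name}.{lang}.json", "src/locales/en/common.json", "es"))
# -> src/locales/en/common.es.json
```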
## Command Reference

```
translator-ai <inputFiles...> [options]

Arguments:
  inputFiles                Path(s) to source JSON file(s) or glob patterns

Options:
  -l, --lang <langCodes>    Target language code(s), comma-separated for multiple
  -o, --output <pattern>    Output file path or pattern
  --stdout                  Output to stdout instead of a file
  --stats                   Show detailed performance statistics
  --no-cache                Disable incremental translation cache
  --cache-file <path>       Custom cache file path
  --provider <type>         Translation provider: gemini, openai, or ollama (default: gemini)
  --ollama-url <url>        Ollama API URL (default: http://localhost:11434)
  --ollama-model <model>    Ollama model name (default: deepseek-r1:latest)
  --gemini-model <model>    Gemini model name (default: gemini-2.0-flash-lite)
  --openai-model <model>    OpenAI model name (default: gpt-4o-mini)
  --list-providers          List available translation providers
  --verbose                 Enable verbose output for debugging
  --detect-source           Auto-detect source language instead of assuming English
  --dry-run                 Preview what would be translated without making API calls
  --preserve-formats        Preserve URLs, emails, numbers, dates, and other formats
  --metadata                Add translation metadata to output files (may break some i18n parsers)
  --sort-keys               Sort output JSON keys alphabetically
  --check-keys              Verify all source keys exist in output (exit with error if keys are missing)
```
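How `--preserve-formats` works internally isn't documented here; a common approach to this kind of protection is to mask non-translatable tokens with placeholders before sending text to the model and restore them afterwards. A hedged sketch of that idea (the regexes and placeholder scheme are illustrative, not the tool's actual code):

```python
import re

# Patterns for tokens that should survive translation untouched (illustrative).
PATTERNS = re.compile(
    r"(https?://[\w./-]+"     # URLs
    r"|[\w.+-]+@[\w-]+\.\w+"  # emails
    r"|\{\{?\w+\}?\})"        # template variables like {name} or {{count}}
)

def mask(text: str):
    """Replace protected tokens with numbered placeholders."""
    tokens = []
    def repl(m):
        tokens.append(m.group(0))
        return f"__TOK{len(tokens) - 1}__"
    return PATTERNS.sub(repl, text), tokens

def unmask(text: str, tokens):
    """Restore the original tokens after translation."""
    for i, tok in enumerate(tokens):
        text = text.replace(f"__TOK{i}__", tok)
    return text

masked, toks = mask("Contact support@example.com or visit https://example.com, {name}!")
print(masked)  # Contact __TOK0__ or visit __TOK1__, __TOK2__!
```

Only the masked text would go to the translation provider; the placeholders are swapped back into the translated result.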
## Examples

```bash
# Translate a single file
translator-ai en.json -l es -o es.json

# All JSON files in a directory
translator-ai locales/en/*.json -l es -o "locales/es/{name}.json"

# Recursive glob pattern
translator-ai "src/**/en.json" -l fr -o "{dir}/fr.json"

# Show statistics, including how many API calls were saved
translator-ai src/i18n/*.json -l ja -o "{dir}/{name}.{lang}.json" --stats

# Write the result to stdout
translator-ai en.json -l de --stdout > de.json

# Disable the incremental cache
translator-ai en.json -l ja -o ja.json --no-cache

# Use a custom cache file
translator-ai en.json -l ko -o ko.json --cache-file /path/to/cache.json
```
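The incremental cache's on-disk format isn't documented here, but the idea behind `--no-cache` and `--cache-file` can be sketched: record a hash of each already-translated source string per target language, and on later runs only send strings whose hash is new or changed. A hypothetical illustration:

```python
import hashlib

def cache_key(text: str, lang: str) -> str:
    """Stable fingerprint of a source string for a target language."""
    return hashlib.sha256(f"{lang}:{text}".encode()).hexdigest()

def strings_to_translate(source: dict, cache: dict, lang: str) -> dict:
    """Return only the entries not already present in the cache (illustrative)."""
    return {k: v for k, v in source.items()
            if cache_key(v, lang) not in cache}

cache = {}
source = {"greeting": "Hello", "farewell": "Goodbye"}
pending = strings_to_translate(source, cache, "es")  # first run: everything is new
# ...translate `pending`, then record the results:
for k, v in pending.items():
    cache[cache_key(v, "es")] = "<translated>"
assert strings_to_translate(source, cache, "es") == {}  # second run: nothing to do
```

This is why re-running the same command is nearly free, and why `--stats` can report how many API calls were saved.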
```bash
# Basic usage with Ollama
translator-ai en.json -l es -o es.json --provider ollama

# Use a different Ollama model
translator-ai en.json -l fr -o fr.json --provider ollama --ollama-model llama2:latest

# Detect the source language automatically
translator-ai content.json -l es -o spanish.json --detect-source

# Translate to multiple languages at once
translator-ai en.json -l es,fr,de,ja -o "translations/{lang}.json"

# Dry run: see what would be translated without making API calls
translator-ai en.json -l es -o es.json --dry-run

# Preserve formats (URLs, emails, dates, numbers, template variables)
translator-ai app.json -l fr -o app-fr.json --preserve-formats

# Include translation metadata
translator-ai en.json -l fr -o fr.json --metadata

# Sort keys alphabetically for consistent output
translator-ai en.json -l fr -o fr.json --sort-keys

# Verify all keys are present in the translation
translator-ai en.json -l fr -o fr.json --check-keys
```
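The `--check-keys` verification can be thought of as a recursive comparison of key paths between the source and output JSON. A sketch of the idea (not the tool's actual code):

```python
def key_paths(obj: dict, prefix: str = "") -> set:
    """Collect dotted paths of all leaf keys in a nested JSON object."""
    paths = set()
    for k, v in obj.items():
        path = f"{prefix}.{k}" if prefix else k
        if isinstance(v, dict):
            paths |= key_paths(v, path)  # recurse into nested objects
        else:
            paths.add(path)
    return paths

source = {"menu": {"open": "Open", "close": "Close"}, "title": "App"}
output = {"menu": {"open": "Abrir"}, "title": "Aplicación"}
missing = key_paths(source) - key_paths(output)
print(sorted(missing))  # ['menu.close']
```

If `missing` is non-empty, the CLI exits with an error, which makes the flag useful as a CI gate for incomplete translations.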
The `--gemini-model` option lets you choose among Gemini models:

- `gemini-2.0-flash-lite` (default) - Fast and efficient for most translations
- `gemini-2.5-flash` - Enhanced performance with newer capabilities
- `gemini-pro` - More sophisticated understanding for complex translations

Example:

```bash
translator-ai en.json -l es -o es.json --gemini-model gemini-2.5-flash
```
The `--openai-model` option lets you choose among OpenAI models:

- `gpt-4o-mini` (default) - Cost-effective and fast for most translations
- `gpt-4o` - Most capable model with advanced understanding
- `gpt-3.5-turbo` - Fast and efficient for simpler translations

Example:

```bash
translator-ai en.json -l ja -o ja.json --provider openai --openai-model gpt-4o
```
## MCP Server

translator-ai can be used as an MCP (Model Context Protocol) server, allowing AI assistants like Claude Desktop to translate files directly.

Add to your Claude Desktop configuration:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "translator-ai": {
      "command": "npx",
      "args": ["-y", "translator-ai-mcp"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
        // Or for Ollama:
        // "TRANSLATOR_PROVIDER": "ollama"
      }
    }
  }
}
```
Once configured, you can ask Claude to translate files:

```
Human: Can you translate my English locale file to Spanish?

Claude: I'll translate your English locale file to Spanish using translator-ai.

<use_tool name="translate_json">
{
  "inputFile": "locales/en.json",
  "targetLanguage": "es",
  "outputFile": "locales/es.json"
}
</use_tool>

Successfully translated! The file has been saved to locales/es.json.
```
To add this MCP server to Claude Code, run this command in your terminal:

```bash
claude mcp add-json "translator-ai" '{"command":"npx","args":["-y","translator-ai-mcp"],"env":{"GEMINI_API_KEY":"your-gemini-api-key-here"}}'
```
See the official Claude Code MCP documentation for more details.
There are two ways to add an MCP server to Cursor. The most common is to add it globally in the `~/.cursor/mcp.json` file so that it is available in all of your projects. If you only need the server in a single project, you can instead create or edit that project's `.cursor/mcp.json` file.

To add a global MCP server, go to Cursor Settings > Tools & Integrations and click "New MCP Server". This opens the `~/.cursor/mcp.json` file, where you can add your server like this:
```json
{
  "mcpServers": {
    "translator-ai": {
      "command": "npx",
      "args": ["-y", "translator-ai-mcp"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      }
    }
  }
}
```
To add an MCP server to a single project, create a new `.cursor/mcp.json` file or add the entry to the existing one; the format is exactly the same as the global example above.

Once the server is installed, you may need to go back to Settings > MCP and click the refresh button. The Cursor agent will then see the tools the MCP server exposes and call them when needed. You can also explicitly ask the agent to use a tool by mentioning its name and describing what it does.
To add this MCP server to Claude Desktop:

1. Find your configuration file:
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - Linux: `~/.config/Claude/claude_desktop_config.json`
2. Add this to your configuration file:
```json
{
  "mcpServers": {
    "translator-ai": {
      "command": "npx",
      "args": ["-y", "translator-ai-mcp"],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
      }
    }
  }
}
```
3. Restart Claude Desktop for the changes to take effect.