AI Translation MCP server

Translates JSON internationalization files using multiple translation providers (Google Gemini, OpenAI, Ollama/DeepSeek). Intelligent caching, cross-file deduplication, and format preservation minimize API costs while keeping the exact JSON structure intact and producing consistent results across target languages.
Provider: DatanoiseTV
Release date: Jun 21, 2025
Language: Go
Stats: 3 stars

translator-ai is a command-line JSON i18n translation tool that supports multiple AI providers: Google Gemini, OpenAI, and Ollama/DeepSeek. It uses intelligent caching, multi-file deduplication, and batch processing to cut redundant API calls.

Installation

Global Installation (Recommended)

npm install -g translator-ai

Local Installation

npm install translator-ai
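
With a local install the binary lands in node_modules/.bin, so it is usually run through an npm script. A minimal sketch (the script name and file paths are illustrative, not part of translator-ai):

```json
{
  "scripts": {
    "translate:es": "translator-ai src/locales/en.json -l es -o src/locales/es.json"
  }
}
```

You would then run it with `npm run translate:es`.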

Configuration

Option 1: Google Gemini API (Cloud)

Create a .env file in your project root or set the environment variable:

GEMINI_API_KEY=your_gemini_api_key_here

Get your API key from Google AI Studio.

Option 2: OpenAI API (Cloud)

Create a .env file in your project root or set the environment variable:

OPENAI_API_KEY=your_openai_api_key_here

Get your API key from OpenAI Platform.

Option 3: Ollama with DeepSeek-R1 (Local)

For completely local translation without API costs:

  1. Install Ollama
  2. Pull the DeepSeek-R1 model:
    ollama pull deepseek-r1:latest
    
  3. Use the --provider ollama flag:
    translator-ai source.json -l es -o spanish.json --provider ollama
    

Usage

Basic Usage

# Translate a single file
translator-ai source.json -l es -o spanish.json

# Translate multiple files with deduplication
translator-ai src/locales/en/*.json -l es -o "{dir}/{name}.{lang}.json"

# Use glob patterns
translator-ai "src/**/*.en.json" -l fr -o "{dir}/{name}.fr.json"
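
The `{dir}`, `{name}`, and `{lang}` placeholders appear to map to the input file's directory, its basename without extension, and the target language code. A plain-shell sketch of that assumed expansion for one input file:

```shell
# Assumed expansion of the output pattern "{dir}/{name}.{lang}.json"
# for a single input file (semantics inferred from the examples above).
input="src/locales/en/common.json"
lang="fr"
dir=$(dirname "$input")             # -> src/locales/en
name=$(basename "$input" .json)     # -> common
echo "${dir}/${name}.${lang}.json"  # -> src/locales/en/common.fr.json
```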

Command Line Options

translator-ai <inputFiles...> [options]

Arguments:
  inputFiles                   Path(s) to source JSON file(s) or glob patterns

Options:
  -l, --lang <langCodes>      Target language code(s), comma-separated for multiple
  -o, --output <pattern>      Output file path or pattern
  --stdout                    Output to stdout instead of file
  --stats                     Show detailed performance statistics
  --no-cache                  Disable incremental translation cache
  --cache-file <path>         Custom cache file path
  --provider <type>           Translation provider: gemini, openai, or ollama (default: gemini)
  --ollama-url <url>          Ollama API URL (default: http://localhost:11434)
  --ollama-model <model>      Ollama model name (default: deepseek-r1:latest)
  --gemini-model <model>      Gemini model name (default: gemini-2.0-flash-lite)
  --openai-model <model>      OpenAI model name (default: gpt-4o-mini)
  --list-providers            List available translation providers
  --verbose                   Enable verbose output for debugging
  --detect-source             Auto-detect source language instead of assuming English
  --dry-run                   Preview what would be translated without making API calls
  --preserve-formats          Preserve URLs, emails, numbers, dates, and other formats
  --metadata                  Add translation metadata to output files (may break some i18n parsers)
  --sort-keys                 Sort output JSON keys alphabetically
  --check-keys                Verify all source keys exist in output (exit with error if keys are missing)

Examples

Translate a single file

translator-ai en.json -l es -o es.json

Translate multiple files with pattern

# All JSON files in a directory
translator-ai locales/en/*.json -l es -o "locales/es/{name}.json"

# Recursive glob pattern
translator-ai "src/**/en.json" -l fr -o "{dir}/fr.json"

Translate with deduplication savings

# Shows statistics including how many API calls were saved
translator-ai src/i18n/*.json -l ja -o "{dir}/{name}.{lang}.json" --stats
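
The saving comes from translating each distinct string once, no matter how many files contain it. A rough illustration with plain shell (the file contents are made up):

```shell
# Two locale files that share two strings: without deduplication that
# would be 6 translation requests; with it, only the 4 unique strings.
printf '%s\n' "Save" "Cancel" "Delete" > a.txt   # strings from file A
printf '%s\n' "Save" "Cancel" "Export" > b.txt   # strings from file B
sort -u a.txt b.txt | wc -l                      # unique strings: 4
```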

Output to stdout (useful for piping)

translator-ai en.json -l de --stdout > de.json

Disable caching for fresh translation

translator-ai en.json -l ja -o ja.json --no-cache

Use custom cache location

translator-ai en.json -l ko -o ko.json --cache-file /path/to/cache.json

Use Ollama for local translation

# Basic usage with Ollama
translator-ai en.json -l es -o es.json --provider ollama

# Use a different Ollama model
translator-ai en.json -l fr -o fr.json --provider ollama --ollama-model llama2:latest

Advanced Features

# Detect source language automatically
translator-ai content.json -l es -o spanish.json --detect-source

# Translate to multiple languages at once
translator-ai en.json -l es,fr,de,ja -o translations/{lang}.json

# Dry run - see what would be translated without making API calls
translator-ai en.json -l es -o es.json --dry-run

# Preserve formats (URLs, emails, dates, numbers, template variables)
translator-ai app.json -l fr -o app-fr.json --preserve-formats

# Include translation metadata
translator-ai en.json -l fr -o fr.json --metadata

# Sort keys alphabetically for consistent output
translator-ai en.json -l fr -o fr.json --sort-keys

# Verify all keys are present in the translation
translator-ai en.json -l fr -o fr.json --check-keys
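
What `--check-keys` guards against can be sketched with standard tools. The snippet below (flat JSON and a naive key extraction are assumed for brevity) lists source keys that are missing from the translated file:

```shell
# Minimal illustration of the --check-keys idea: every key in the
# source file must also exist in the translated file.
cat > en.json <<'EOF'
{ "title": "Hello", "body": "World" }
EOF
cat > fr.json <<'EOF'
{ "title": "Bonjour" }
EOF
keys() { grep -o '"[^"]*" *:' "$1" | tr -d '": ' | sort; }
keys en.json > en.keys
keys fr.json > fr.keys
comm -23 en.keys fr.keys   # keys missing from fr.json: body
```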

Model Options

Gemini Models

The --gemini-model option allows you to choose from various Gemini models:

  • gemini-2.0-flash-lite (default) - Fast and efficient for most translations
  • gemini-2.5-flash - Enhanced performance with newer capabilities
  • gemini-pro - More sophisticated understanding for complex translations

Example:

translator-ai en.json -l es -o es.json --gemini-model gemini-2.5-flash

OpenAI Models

The --openai-model option allows you to choose from various OpenAI models:

  • gpt-4o-mini (default) - Cost-effective and fast for most translations
  • gpt-4o - Most capable model with advanced understanding
  • gpt-3.5-turbo - Fast and efficient for simpler translations

Example:

translator-ai en.json -l ja -o ja.json --provider openai --openai-model gpt-4o

Using with Model Context Protocol (MCP)

translator-ai can be used as an MCP server, allowing AI assistants like Claude Desktop to translate files directly.

MCP Configuration

Add to your Claude Desktop configuration:

macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

{
  "mcpServers": {
    "translator-ai": {
      "command": "npx",
      "args": [
        "-y",
        "translator-ai-mcp"
      ],
      "env": {
        "GEMINI_API_KEY": "your-gemini-api-key-here"
        // Or for Ollama:
        // "TRANSLATOR_PROVIDER": "ollama"
      }
    }
  }
}

Note: JSON does not allow comments; remove the `//` lines before saving your actual config.

MCP Usage Examples

Once configured, you can ask Claude to translate files:

Human: Can you translate my English locale file to Spanish?

Claude: I'll translate your English locale file to Spanish using translator-ai.

<use_tool name="translate_json">
{
  "inputFile": "locales/en.json",
  "targetLanguage": "es",
  "outputFile": "locales/es.json"
}
</use_tool>

Successfully translated! The file has been saved to locales/es.json.

How to install this MCP server

For Claude Code

To add this MCP server to Claude Code, run this command in your terminal:

claude mcp add-json "translator-ai" '{"command":"npx","args":["-y","translator-ai-mcp"],"env":{"GEMINI_API_KEY":"your-gemini-api-key-here"}}'

See the official Claude Code MCP documentation for more details.

For Cursor

There are two ways to add an MCP server to Cursor. The most common way is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to the project instead by creating or adding it to the .cursor/mcp.json file.

Adding an MCP server to Cursor globally

To add a global MCP server, go to Cursor Settings > Tools & Integrations and click "New MCP Server".

This opens the ~/.cursor/mcp.json file, where you can add your server like this:

{
    "mcpServers": {
        "translator-ai": {
            "command": "npx",
            "args": [
                "-y",
                "translator-ai-mcp"
            ],
            "env": {
                "GEMINI_API_KEY": "your-gemini-api-key-here"
            }
        }
    }
}

Adding an MCP server to a project

To add an MCP server to a project you can create a new .cursor/mcp.json file or add it to the existing one. This will look exactly the same as the global MCP server example above.

How to use the MCP server

Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.

The Cursor agent will then see the tools the MCP server exposes and call them when it needs to.

You can also explicitly ask the agent to use the tool by mentioning the tool name and describing what the function does.

For Claude Desktop

To add this MCP server to Claude Desktop:

1. Find your configuration file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json
  • Linux: ~/.config/Claude/claude_desktop_config.json

2. Add this to your configuration file:

{
    "mcpServers": {
        "translator-ai": {
            "command": "npx",
            "args": [
                "-y",
                "translator-ai-mcp"
            ],
            "env": {
                "GEMINI_API_KEY": "your-gemini-api-key-here"
            }
        }
    }
}

3. Restart Claude Desktop for the changes to take effect
