gemini-mcp-server
Configuration
{
  "mcpServers": {
    "lucky-dersan-gemini-mcp-server": {
      "command": "docker",
      "args": [
        "run",
        "--rm",
        "-i",
        "--network",
        "host",
        "-e",
        "GEMINI_API_KEY",
        "-e",
        "GEMINI_MODEL",
        "-e",
        "GEMINI_BASE_URL",
        "-e",
        "HTTP_PROXY",
        "-e",
        "HTTPS_PROXY",
        "gemini-mcp-server:latest"
      ],
      "env": {
        "HTTP_PROXY": "http://127.0.0.1:17890",
        "HTTPS_PROXY": "http://127.0.0.1:17890",
        "GEMINI_MODEL": "gemini-2.5-flash",
        "GEMINI_API_KEY": "your_api_key_here",
        "GEMINI_BASE_URL": "https://generativelanguage.googleapis.com/v1beta/openai/"
      }
    }
  }
}

You run a Gemini-compatible Model Context Protocol (MCP) server to connect your Gemini integration with remote model providers. The server is written in Python and uses FastMCP to handle MCP requests, letting you process prompts and generate responses through a configurable, container-friendly workflow.
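To make that architecture concrete, here is a minimal sketch of what a FastMCP-based server along these lines might look like. It is illustrative rather than the project's actual source: it assumes the MCP Python SDK's FastMCP class, the openai client pointed at the OpenAI-compatible base URL from the configuration above, and a hypothetical generate tool.

# Illustrative sketch only -- not the project's actual source code.
import os

from mcp.server.fastmcp import FastMCP  # FastMCP from the MCP Python SDK
from openai import OpenAI               # talks to Gemini's OpenAI-compatible endpoint

mcp = FastMCP("gemini")

# Client configured from the same environment variables the container receives.
client = OpenAI(
    api_key=os.environ["GEMINI_API_KEY"],
    base_url=os.environ.get(
        "GEMINI_BASE_URL",
        "https://generativelanguage.googleapis.com/v1beta/openai/",
    ),
)

@mcp.tool()
def generate(prompt: str) -> str:
    """Send a prompt to the configured Gemini model and return the response text."""
    completion = client.chat.completions.create(
        model=os.environ.get("GEMINI_MODEL", "gemini-2.5-flash"),
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    mcp.run()  # serves MCP over stdio, matching the Docker client configuration above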
To use this MCP server, run it as a stdio-based service and connect to it from an MCP client. Start the server through your preferred orchestration layer (for example, Docker) and configure your MCP client to send requests to the running container. Adjust the environment variables to set your Gemini API key, model, and base URL, and use Docker networking options (the configuration above uses --network host) so the container can reach your local proxy and the Gemini endpoint.
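If you want to verify that the container starts and the environment variables are wired correctly before attaching a client, you can run the same command the client configuration issues directly from a shell. The values below mirror the example configuration; substitute your own API key and drop the proxy variables if you do not route traffic through a local proxy.

docker run --rm -i \
  --network host \
  -e GEMINI_API_KEY=your_api_key_here \
  -e GEMINI_MODEL=gemini-2.5-flash \
  -e GEMINI_BASE_URL=https://generativelanguage.googleapis.com/v1beta/openai/ \
  -e HTTP_PROXY=http://127.0.0.1:17890 \
  -e HTTPS_PROXY=http://127.0.0.1:17890 \
  gemini-mcp-server:latest

Because the server speaks MCP over stdio, the container should sit waiting for JSON-RPC messages on stdin; if it launches without errors, the image and environment are set up correctly.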
Prerequisites: Docker installed on your host machine, with permission to build Docker images and run containers.
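A quick way to confirm both is to check that the Docker daemon is reachable and that you can run a container:

docker version               # reports client and daemon versions; fails if the daemon is unreachable
docker run --rm hello-world  # confirms you can pull images and run containers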
Build the MCP server image with Docker:
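For example, assuming the project's Dockerfile sits at the repository root (an assumption, not confirmed here), a build that matches the image tag used in the configuration above would be:

docker build -t gemini-mcp-server:latest .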
The image provides a Docker-based runtime that launches the Python MCP server in an isolated container and accepts MCP requests from clients over stdio.