MCP Gemini Server
Provides an MCP interface to Gemini by exposing actions for generate_text, analyze_text, and chat, with local configuration via a Python-based server.
Configuration
```json
{
  "mcpServers": {
    "amitsh06-mcp-server": {
      "url": "http://localhost:5000/mcp",
      "headers": {
        "GEMINI_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}
```

You can run a Python-based MCP server that exposes a standard MCP interface to Google's Gemini API, allowing clients to request text generation, analysis, and chat interactions through a consistent, secure protocol.
You connect an MCP client to the local Gemini MCP server to perform actions such as generate_text, analyze_text, and chat. Start a client request to the server’s MCP endpoint to receive structured results or error messages. The server runs a simple HTTP interface on localhost and expects MCP-style payloads, returning results in a consistent format.
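The request flow above can be sketched with the standard library. This is a minimal sketch, not the server's confirmed API: the endpoint URL comes from the configuration shown earlier, but the payload field names (`action`, `parameters`) are assumptions about the MCP-style body the server expects.

```python
import json
import urllib.request

# URL taken from the sample configuration above; adjust host/port if
# you run the server elsewhere.
MCP_URL = "http://localhost:5000/mcp"


def build_request(action, parameters):
    """Encode an MCP-style JSON payload. The "action"/"parameters"
    field names are an assumption, not confirmed by the server docs."""
    return json.dumps({"action": action, "parameters": parameters}).encode()


def call_mcp(action, parameters):
    """POST the payload to the /mcp endpoint and decode the JSON reply."""
    req = urllib.request.Request(
        MCP_URL,
        data=build_request(action, parameters),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

With the server running, `call_mcp("generate_text", {"prompt": "Hello"})` would return the decoded response body for inspection.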
Prerequisites you need before running the server:
- Python 3 and pip installed
- A Google Gemini API key
Follow these steps to set up and run the server locally:
```
# Step 1: Set up a project directory (in a new or existing location)

# Step 2: Create and activate a virtual environment
python -m venv venv
venv\Scripts\activate        # Windows
source venv/bin/activate     # macOS/Linux

# Step 3: Install dependencies
pip install -r requirements.txt

# Step 4: Create a .env file in the project root containing your Gemini API key
GEMINI_API_KEY=your_api_key_here

# Step 5: Run the server
python server.py
```

The server runs on the default host and port and exposes an MCP endpoint at /mcp. It also provides a health check and a model-listing endpoint to help you verify the running state and the available Gemini models.
For quick testing, you can send MCP requests to the /mcp endpoint using your preferred HTTP client. Validate responses by checking the result field for successful operations or the error field for any issues.
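The result/error convention described above can be captured in a small helper. This is a sketch of the client-side check only; the response strings used in the comments are stand-ins, not output captured from a live server.

```python
import json


def handle_response(raw):
    """Decode an MCP response body, raising on the error field and
    returning the result field on success (per the convention above)."""
    data = json.loads(raw)
    if "error" in data:
        raise RuntimeError(f"MCP request failed: {data['error']}")
    return data["result"]
```

For example, a body like `{"result": "..."}` yields the result value, while `{"error": "..."}` raises with the server's message.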
Store sensitive credentials such as the Gemini API key in a secure environment file (for example, a .env file) and avoid committing it to version control. The server reads GEMINI_API_KEY from the environment to authenticate with Gemini.
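The environment lookup the server performs can be mirrored like this, failing fast when the key is absent rather than sending unauthenticated requests to Gemini. The helper name is illustrative; only the GEMINI_API_KEY variable name comes from the source.

```python
import os


def get_api_key():
    """Read GEMINI_API_KEY from the environment, as the server does,
    and fail loudly if it is missing or empty."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set; add it to your .env file or environment"
        )
    return key
```

Pairing this with a loader such as python-dotenv lets the .env file populate the environment before the lookup runs.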
- Generate text content by sending a prompt and optional parameters to Gemini via the MCP server.
- Analyze input text with options for sentiment, summary, keywords, or general analysis.
- Engage in a chat with Gemini by providing a sequence of messages and optional temperature control.
- Check server health and verify that the MCP endpoint is responsive.
- List available Gemini models exposed by the server.
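A chat request, for instance, might be assembled as below. The `messages`/`temperature` parameter names follow the description above but are assumptions about the exact payload shape, not confirmed field names.

```python
import json

# Hypothetical chat payload for the MCP server's chat action; the
# "messages" and "temperature" keys are assumed, not documented.
chat_payload = {
    "action": "chat",
    "parameters": {
        "messages": [
            {"role": "user", "content": "Hello, Gemini!"},
        ],
        "temperature": 0.7,
    },
}

# Serialize for an HTTP POST to the /mcp endpoint.
encoded = json.dumps(chat_payload)
```

The same pattern applies to generate_text and analyze_text, swapping the action name and parameter set.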