This MCP (Model Context Protocol) server implementation provides a standardized way to interact with AI models. It acts as an MCP-compatible server for MCP-enabled clients, letting you deploy and interact with various AI models through a unified API.
You can install the MCP server using pip:
pip install nuke-mcp
For development or to get the latest version, you can install directly from the GitHub repository:
pip install git+https://github.com/TheNukeGame/nuke-mcp-2.git
To start the MCP server with default settings:
python -m mcp.server
The server listens on port 8000 by default. You can specify a different port with the --port argument:
python -m mcp.server --port 8080
You can configure the server through a YAML configuration file. Create a file named config.yaml with the following structure:
models:
  - id: my-model
    path: /path/to/your/model
    type: llama
    options:
      max_tokens: 2048
      temperature: 0.7
  - id: another-model
    path: /path/to/another/model
    type: gpt
    options:
      max_tokens: 4096
Then start the server with the configuration:
python -m mcp.server --config config.yaml
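Before launching, it can help to sanity-check the configuration file. The sketch below is an illustrative helper, not part of nuke-mcp; the field names (id, path, type, options) come from the example above, and a plain dict stands in for the parsed YAML so the sketch stays self-contained (in practice you would parse config.yaml with a library such as PyYAML).

```python
# Hypothetical helper: verify a parsed config has the expected shape.
def validate_config(config: dict) -> list[str]:
    """Return a list of problems found; an empty list means the config looks OK."""
    problems = []
    models = config.get("models")
    if not isinstance(models, list) or not models:
        return ["config must contain a non-empty 'models' list"]
    for i, model in enumerate(models):
        for field in ("id", "path", "type"):
            if field not in model:
                problems.append(f"models[{i}] is missing required field '{field}'")
        if not isinstance(model.get("options", {}), dict):
            problems.append(f"models[{i}].options must be a mapping")
    return problems

# Mirrors the example config.yaml above.
config = {
    "models": [
        {"id": "my-model", "path": "/path/to/your/model", "type": "llama",
         "options": {"max_tokens": 2048, "temperature": 0.7}},
        {"id": "another-model", "path": "/path/to/another/model", "type": "gpt",
         "options": {"max_tokens": 4096}},
    ]
}
print(validate_config(config))  # empty list: the example config is well-formed
```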
The server exposes an OpenAI-compatible HTTP API; the primary endpoint is /v1/chat/completions for chat completion requests.
Here's an example of making a chat completion request:
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "my-model",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello, how are you?"}
    ],
    "temperature": 0.7,
    "max_tokens": 100
  }'
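The same request can be issued from Python without any client library. This is a minimal sketch using only the standard library; the payload mirrors the curl example, and the URL assumes the default port from above. The actual send is left commented out because it requires a running server.

```python
import json
import urllib.request

def build_chat_request(base_url: str, model: str, messages: list, **options):
    """Build a POST request for the /v1/chat/completions endpoint."""
    payload = {"model": model, "messages": messages, **options}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "http://localhost:8000",
    "my-model",
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello, how are you?"},
    ],
    temperature=0.7,
    max_tokens=100,
)

# With a server running, send it like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```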
You can interact with the MCP server using any OpenAI-compatible client library by setting the base URL to your MCP server address:
import openai

client = openai.OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="dummy-key"  # MCP doesn't require authentication by default
)

response = client.chat.completions.create(
    model="my-model",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about programming."}
    ]
)

print(response.choices[0].message.content)
There are two ways to add an MCP server to Cursor. The most common is to add the server globally in the ~/.cursor/mcp.json file so that it is available in all of your projects.
If you only need the server in a single project, you can instead add it to that project by creating (or editing) the project's .cursor/mcp.json file.
To add a global MCP server, go to Cursor Settings > MCP and click "Add new global MCP server". This opens the ~/.cursor/mcp.json file, where you can add your server like this:
{
  "mcpServers": {
    "cursor-rules-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "cursor-rules-mcp"
      ]
    }
  }
}
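If you script your setup, the same entry can be merged into ~/.cursor/mcp.json programmatically. The sketch below is an illustration, not an official Cursor tool; it only assumes the mcpServers structure shown above and preserves any entries already in the file.

```python
import json

def add_mcp_server(config_text: str, name: str, command: str, args: list) -> str:
    """Merge a server entry into an mcp.json document, keeping existing entries."""
    config = json.loads(config_text) if config_text.strip() else {}
    config.setdefault("mcpServers", {})[name] = {"command": command, "args": args}
    return json.dumps(config, indent=2)

# Start from an empty file and add the example server from above.
updated = add_mcp_server("", "cursor-rules-mcp", "npx", ["-y", "cursor-rules-mcp"])
print(updated)
```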
To add an MCP server to a project, create a new .cursor/mcp.json file or add the server to the existing one. The format is exactly the same as the global example above.
Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.
The Cursor agent will then see the tools the added MCP server exposes and call them when it needs to.
You can also explicitly ask the agent to use a tool by mentioning its name and describing what it does.