Pocket MCP Manager provides a flexible system for managing Model Context Protocol (MCP) servers through a central interface. This system enables users to add multiple MCP servers, launch them selectively, generate API keys, and connect to them through a single proxy server in AI tools like Claude or Cursor.
To install the server component:

```bash
# Clone the repository
git clone git@github.com:dailydaniel/pocket-mcp.git
cd pocket-mcp/server

# Install dependencies
npm install
```
Add the following configuration to your Claude Desktop settings:

```json
{
  "mcpServers": {
    "mcp-proxy": {
      "command": "node",
      "args": ["/full/path/to/pocket-mcp/server/build/index.js"],
      "env": {
        "MCP_API_KEY": "api_key_from_client",
        "CLIENT_API_URL": "http://localhost:<port>/api"
      }
    }
  }
}
```
Make sure to replace:

- `/full/path/to/pocket-mcp/server/build/index.js` with the absolute path to your server's `build/index.js` file
- `api_key_from_client` with the API key generated from the client UI
- `<port>` with the port shown in the API server logs (typically 8000)

The client provides a web-based UI for managing your MCP servers:
```bash
# Navigate to the client directory
cd pocket-mcp/client

# Create and activate a virtual environment
python -m venv .venv --prompt "mcp-venv"
source .venv/bin/activate

# Install requirements
pip install -r requirements.txt

# Copy the example config
cp servers_config_example.json servers_config.json

# Edit the configuration with your MCP servers
vim servers_config.json

# Run the client
streamlit run app.py
```
Create a `servers_config.json` file in the client directory with your MCP servers:
```json
{
  "mcpServers": {
    "jetbrains": {
      "command": "npx",
      "args": ["-y", "@jetbrains/mcp-proxy"]
    },
    "logseq": {
      "command": "uvx",
      "args": ["mcp-server-logseq"],
      "env": {
        "LOGSEQ_API_TOKEN": "API_KEY",
        "LOGSEQ_API_URL": "http://127.0.0.1:<port>"
      }
    },
    "brave-search": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-brave-search"
      ],
      "env": {
        "BRAVE_API_KEY": "API_KEY"
      }
    }
  }
}
```
Replace `API_KEY` and `<port>` with your actual values.
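As an optional sanity check before starting the client, a short script like the one below can confirm that `servers_config.json` parses and that each entry has the fields the examples above use. The validation rules here are an assumption inferred from those examples, not a schema defined by the project itself:

```python
import json

def validate_servers_config(path):
    """Check that a servers_config.json file has the expected shape:
    a top-level "mcpServers" object whose entries each define a
    "command" string and an "args" list (an "env" object is optional).
    Returns the sorted server names on success."""
    with open(path) as f:
        config = json.load(f)
    servers = config.get("mcpServers")
    if not isinstance(servers, dict) or not servers:
        raise ValueError('missing or empty "mcpServers" object')
    for name, spec in servers.items():
        if not isinstance(spec.get("command"), str):
            raise ValueError(f'server "{name}" needs a "command" string')
        if not isinstance(spec.get("args"), list):
            raise ValueError(f'server "{name}" needs an "args" list')
    return sorted(servers)

# Example: list the configured server names
# validate_servers_config("servers_config.json")
```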
The repository includes an example client for chatting with LLMs using the OpenAI API and MCP servers.
```bash
# Navigate to the example client directory
cd pocket-mcp/example_llm_mcp

# Copy the example environment file
cp .env.example .env

# Edit the .env file and add your OpenAI API key
vim .env

# Copy the example server configuration
cp servers_config_example.json servers_config.json

# Add the API key from the client
vim servers_config.json

# If not already in a virtual environment, you can
# reuse the one created for the client
source ../client/.venv/bin/activate

# Install requirements
pip install -r requirements.txt

# Run the client
python3 main.py
```
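Before running `main.py`, a small preflight check can catch missing files. This is a standalone sketch, not part of the repository, and the `OPENAI_API_KEY` variable name is an assumption based on the `.env` step above:

```python
import json
import os

def preflight(directory="."):
    """Verify the example client's prerequisites: a .env file that
    sets OPENAI_API_KEY (assumed variable name) and a parseable
    servers_config.json. Returns a list of problems; empty means
    ready to run."""
    problems = []
    env_path = os.path.join(directory, ".env")
    if not os.path.exists(env_path):
        problems.append(".env is missing (copy .env.example)")
    else:
        with open(env_path) as f:
            if not any(line.strip().startswith("OPENAI_API_KEY=") for line in f):
                problems.append(".env does not set OPENAI_API_KEY")
    cfg_path = os.path.join(directory, "servers_config.json")
    try:
        with open(cfg_path) as f:
            json.load(f)
    except FileNotFoundError:
        problems.append("servers_config.json is missing")
    except json.JSONDecodeError as exc:
        problems.append(f"servers_config.json is not valid JSON: {exc}")
    return problems
```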
The example client will connect to your running MCP servers and allow you to chat with an LLM while utilizing MCP capabilities.
There are two ways to add an MCP server to Cursor. The most common is to add the server globally in the `~/.cursor/mcp.json` file so that it is available in all of your projects.

If you only need the server in a single project, you can add it to that project instead by creating or adding it to the project's `.cursor/mcp.json` file.
To add a global MCP server, go to Cursor Settings > MCP and click "Add new global MCP server". This opens the `~/.cursor/mcp.json` file, where you can add your server like this:
```json
{
  "mcpServers": {
    "cursor-rules-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "cursor-rules-mcp"
      ]
    }
  }
}
```
To add an MCP server to a project, create a new `.cursor/mcp.json` file or add the entry to the existing one. The format is exactly the same as the global MCP server example above.
Once the server is installed, you might need to head back to Settings > MCP and click the refresh button.
The Cursor agent will then see the tools the added MCP server exposes and call them when it needs to.
You can also explicitly ask the agent to use a tool by mentioning its name and describing what it does.
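The "create a new file or add to the existing one" step can also be scripted. The sketch below merges one server entry into an `mcp.json` without clobbering servers already listed; the server name and package in the usage comment are illustrative, taken from the example above:

```python
import json
import os

def add_mcp_server(config_path, name, command, args):
    """Insert or update one server entry in a Cursor mcp.json file,
    creating the file (and its "mcpServers" object) if needed while
    preserving any servers that are already configured."""
    config = {}
    if os.path.exists(config_path):
        with open(config_path) as f:
            config = json.load(f)
    config.setdefault("mcpServers", {})[name] = {
        "command": command,
        "args": args,
    }
    with open(config_path, "w") as f:
        json.dump(config, f, indent=2)

# Example (names are illustrative):
# add_mcp_server(".cursor/mcp.json", "cursor-rules-mcp",
#                "npx", ["-y", "cursor-rules-mcp"])
```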