An MCP server that gives Claude the ability to use OpenAI's GPT assistants.
Configuration
{
  "mcpServers": {
    "andybrandt-mcp-simple-openai-assistant": {
      "command": "python",
      "args": [
        "-m",
        "mcp_simple_openai_assistant"
      ],
      "env": {
        "OPENAI_API_KEY": "YOUR_API_KEY_HERE"
      }
    }
  }
}

You can deploy MCP Simple OpenAI Assistant to interact with OpenAI assistants through the Model Context Protocol. The server lets you create, manage, and converse with assistants in real time, and persists conversation threads locally so they can be reused across sessions.
After the server is running, you interact with it through your MCP client using the available tools. The typical workflow is to either start a new named conversation or locate an existing thread you want to continue, then ask questions of your chosen assistant within that thread. Real-time streaming updates keep you informed as the assistant generates responses, which helps during long or complex interactions.
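As an illustration, the workflow above might map to a sequence of MCP tool calls like the following. The tool and parameter names here are hypothetical placeholders, not the server's exact schema:

```json
[
  {
    "tool": "create_thread",
    "arguments": {
      "name": "project-planning",
      "description": "Ongoing planning discussion"
    }
  },
  {
    "tool": "ask_assistant_in_thread",
    "arguments": {
      "thread_name": "project-planning",
      "assistant_id": "asst_example123",
      "message": "Summarize the open questions from our last session."
    }
  }
]
```

Because the thread is stored locally under its name, a later session can reference "project-planning" again without re-creating it.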
Prerequisites: Python 3.8 or newer, an OpenAI API key, and a running MCP client capable of interfacing with HTTP or stdio MCP servers.
Option A: Install via Smithery for automatic client integration with Claude Desktop.

npx -y @smithery/cli install mcp-simple-openai-assistant --client claude

Option B: Install manually with pip.
pip install mcp-simple-openai-assistant

You must provide an OpenAI API key so the server can call OpenAI's models. The configurations below show environment setups for macOS and Windows; use the one that matches your operating system.
{
  "mcpServers": {
    "openai-assistant": {
      "command": "python",
      "args": ["-m", "mcp_simple_openai_assistant"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here"
      }
    }
  }
}

The Windows configuration example uses an explicit path to the Python executable. Adjust the path to match your system's Python installation.
{
  "mcpServers": {
    "openai-assistant": {
      "command": "C:\\Users\\YOUR_USERNAME\\AppData\\Local\\Programs\\Python\\Python311\\python.exe",
      "args": ["-m", "mcp_simple_openai_assistant"],
      "env": {
        "OPENAI_API_KEY": "your-api-key-here"
      }
    }
  }
}

The server streams the main interaction to provide real-time progress updates and avoid client timeouts. It also persists conversation threads locally in a SQLite database so you can reuse threads across sessions: you can create a new thread, list existing threads, and delete threads as needed.
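Conceptually, the local thread store is a small SQLite table mapping a user-chosen name to an OpenAI thread ID. The server's actual schema is not documented here; the following is an illustrative sketch with assumed table and column names:

```python
import sqlite3

# Illustrative sketch only: the real server's schema may differ.
def open_thread_store(path: str = "threads.db") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS threads (
               name        TEXT PRIMARY KEY,  -- user-defined thread name
               thread_id   TEXT NOT NULL,     -- OpenAI thread ID
               description TEXT
           )"""
    )
    return conn

def save_thread(conn, name, thread_id, description=""):
    """Insert or update the locally stored record for a named thread."""
    conn.execute(
        "INSERT OR REPLACE INTO threads (name, thread_id, description) "
        "VALUES (?, ?, ?)",
        (name, thread_id, description),
    )
    conn.commit()

def list_threads(conn):
    """Return all (name, thread_id, description) rows."""
    return conn.execute(
        "SELECT name, thread_id, description FROM threads"
    ).fetchall()
```

Keying on the thread name is what lets a client continue a conversation in a later session by name alone, without remembering the OpenAI thread ID.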
If the server fails to start, verify that Python is installed and that OPENAI_API_KEY is accessible in your environment; on Windows, make sure you are using the correct Python executable path. If you experience connection problems, confirm that your MCP client is configured with the correct command and arguments for the stdio server.
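A short diagnostic script along these lines (an illustrative sketch, not shipped with the package) can rule out the most common startup problems in one pass:

```python
import importlib.util
import os
import shutil
import sys

def diagnose() -> list:
    """Check for common causes of server startup failures."""
    issues = []
    if sys.version_info < (3, 8):
        issues.append("Python 3.8+ is required")
    if shutil.which("python") is None and shutil.which("python3") is None:
        issues.append("No python executable found on PATH")
    if importlib.util.find_spec("mcp_simple_openai_assistant") is None:
        issues.append("Package mcp_simple_openai_assistant is not installed")
    if not os.environ.get("OPENAI_API_KEY"):
        issues.append("OPENAI_API_KEY is not set in the environment")
    return issues

if __name__ == "__main__":
    for issue in diagnose():
        print("Possible problem:", issue)
```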
The server exposes the following tools:

Create a new OpenAI assistant with a name, instructions, and model specification.
List all assistants associated with your API key.
Get detailed information about a specific assistant by its ID.
Modify an existing assistant's name, instructions, or model.
Start a new persistent conversation thread with a user-defined name and description.
List the conversation threads stored in the local SQLite database.
Delete a conversation thread from both OpenAI servers and the local database.
Send a message to an assistant within a thread and stream the response in real time.
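For example, creating an assistant through the corresponding tool might take arguments like these. The tool and parameter names shown are illustrative assumptions, not the server's exact schema:

```json
{
  "tool": "create_assistant",
  "arguments": {
    "name": "Code Reviewer",
    "instructions": "You review Python code for correctness and style.",
    "model": "gpt-4o"
  }
}
```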