Claims MCP Server
Provides a local MCP server that extracts and verifies factual claims from text using a multi-stage pipeline and structured outputs.
Configuration

```json
{
  "mcpServers": {
    "adamgustavsson-claimsmcp": {
      "command": "/path/to/your/claimify-env/bin/python",
      "args": [
        "/path/to/your/project/claimify_server.py"
      ],
      "env": {
        "LOG_FILE": "claimify_llm.log",
        "LLM_MODEL": "gpt-4o-2024-08-06",
        "LOG_OUTPUT": "stderr",
        "LOG_LLM_CALLS": "true",
        "OPENAI_API_KEY": "YOUR_OPENAI_API_KEY"
      }
    }
  }
}
```

You can run Claimify as a local MCP server to automatically extract verifiable factual claims from text, then organize and verify them in a structured, repeatable workflow. With MCP integration, you can connect the server to your preferred MCP client and run extraction, verification prompts, and claim reports directly within your workspace.
You will run the Claimify MCP server locally and connect it to your MCP client (such as Cursor) to extract, verify, and decompose factual claims. The server exposes two core capabilities: it can extract a set of decontextualized factual claims from input text, and it can generate a Claims Report skeleton (CLAIMS.md) ready for verification. Use these workflows to build trustworthy, auditable claim sets from source text.
- Start by loading your input text into the MCP client and selecting the Claimify server as the extraction tool. The server splits the text into sentences, selects verifiable propositions, resolves ambiguities, and decomposes sentences into atomic claims.
- Use the Verify Single Claim prompt to validate each extracted claim against authoritative sources.
- Use Create Claims Report to generate a TODO-labeled template that you fill in during verification.
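The four pipeline stages above can be sketched as follows. This is a toy illustration with naive heuristics, not the actual Claimify implementation — the real stages are LLM-driven with structured outputs:

```python
import re

def split_sentences(text: str) -> list[str]:
    """Stage 1: naive sentence splitting (a stand-in for a real tokenizer)."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def select_verifiable(sentences: list[str]) -> list[str]:
    """Stage 2: keep sentences that look like factual propositions.
    Toy filter: drop questions; the real stage uses the LLM."""
    return [s for s in sentences if not s.endswith("?")]

def disambiguate(sentence: str, context: str) -> str:
    """Stage 3: resolve ambiguous references using surrounding context.
    Placeholder: the real stage rewrites pronouns via the LLM."""
    return sentence

def decompose(sentence: str) -> list[str]:
    """Stage 4: split a sentence into atomic claims.
    Toy heuristic: split on ' and '; the real stage is LLM-driven."""
    return [part.strip().rstrip(".") + "." for part in sentence.split(" and ")]

def extract_claims(text: str) -> list[str]:
    claims = []
    for sentence in select_verifiable(split_sentences(text)):
        claims.extend(decompose(disambiguate(sentence, text)))
    return claims
```

The point of the sketch is the staged shape of the pipeline — each stage narrows and refines the previous stage's output before claims are emitted.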
Before installing Claimify you need Python 3.10 or newer, and an OpenAI API key if you plan to use structured outputs. A virtual environment tool is optional but recommended. Ensure you can run Python commands from your shell.
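A quick way to confirm these prerequisites is a small stdlib check script. This is illustrative and not part of the Claimify codebase:

```python
import os
import sys

def check_prerequisites() -> list[str]:
    """Return a list of human-readable problems; empty means ready to go."""
    problems = []
    if sys.version_info < (3, 10):
        problems.append(f"Python 3.10+ required, found {sys.version.split()[0]}")
    if not os.environ.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set (needed for structured outputs)")
    return problems

if __name__ == "__main__":
    for problem in check_prerequisites():
        print("WARNING:", problem)
```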
1. Clone the project directory to your workspace and enter it.
2. Create and activate a Python virtual environment to isolate dependencies.
3. Install the required Python dependencies.
4. Set up environment variables for your OpenAI API and logging as needed.
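Steps 2–4 might look like the following. Paths are placeholders, and the `requirements.txt` file name is an assumption about the project layout:

```shell
# Illustrative setup; replace the placeholder paths with your own.
cd /path/to/your/project

# 2. Create and activate a virtual environment
python3 -m venv /path/to/your/claimify-env
source /path/to/your/claimify-env/bin/activate

# 3. Install dependencies (assumes the project ships a requirements.txt)
pip install -r requirements.txt

# 4. Configure environment variables
export OPENAI_API_KEY="your-openai-api-key-here"
export LLM_MODEL="gpt-4o-2024-08-06"
```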
You configure and run Claimify as an MCP stdio server. The recommended local runtime uses a Python virtual environment and the claimify_server.py script. The command, as shown in the configuration example, launches the Python interpreter from your virtual environment and runs the server script.
```json
{
  "type": "stdio",
  "name": "claimify",
  "command": "/path/to/your/claimify-env/bin/python",
  "args": [
    "/path/to/your/project/claimify_server.py"
  ]
}
```

Set the following environment variables in your runtime environment to configure access and logging. Replace the placeholders with your actual values.
```shell
OPENAI_API_KEY="your-openai-api-key-here"
LLM_MODEL="gpt-4o-2024-08-06"
LOG_LLM_CALLS="true"
LOG_OUTPUT="stderr"
LOG_FILE="claimify_llm.log"
```

The server includes two primary tools for practical usage:
- Verify Single Claim: tests a decontextualized claim against authoritative sources and returns a status such as VERIFIED, UNCERTAIN, or DISPUTED with supporting evidence.
- Create Claims Report: generates an initial CLAIMS.md file with all extracted claims marked as TODO, ready for incremental verification.
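The generated report might look like the following skeleton. This is illustrative only — the actual CLAIMS.md layout produced by the server may differ:

```markdown
# Claims Report

- [ ] TODO — "The Eiffel Tower was completed in 1889."
- [ ] TODO — "It is located on the Champ de Mars in Paris."

<!-- Replace each TODO with VERIFIED, UNCERTAIN, or DISPUTED plus citations. -->
```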
The system stores extraction results as MCP resources for easy retrieval and supports multi-language input by preserving the original language while extracting claims. It includes robust logging of all pipeline stages and LLM calls for traceability and debugging.
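Environment-driven settings like the model and logging options would typically be loaded with simple environment lookups. A hedged sketch — `load_config` is a hypothetical helper, not the actual code in claimify_server.py:

```python
import os

def load_config(env: dict[str, str]) -> dict:
    """Read Claimify-style settings from an environment mapping.
    Defaults here are illustrative assumptions."""
    return {
        "api_key": env.get("OPENAI_API_KEY", ""),
        "model": env.get("LLM_MODEL", "gpt-4o-2024-08-06"),
        "log_llm_calls": env.get("LOG_LLM_CALLS", "false").lower() == "true",
        "log_output": env.get("LOG_OUTPUT", "stderr"),  # "stderr" or a file
        "log_file": env.get("LOG_FILE", "claimify_llm.log"),
    }

config = load_config(dict(os.environ))
```

Taking the environment as a plain mapping rather than reading `os.environ` directly keeps the loader easy to test.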
Common issues include a model that does not support structured outputs, a missing API key, missing NLTK data, and MCP client connection problems. Use a compatible model, set OPENAI_API_KEY, download the required NLTK data, and confirm that the paths to the Python interpreter and server script in your MCP client configuration are correct.
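For the connection-problem case, a small stdlib check can confirm that the paths in your MCP client configuration actually exist. This is an illustrative helper, not part of Claimify:

```python
import json
import os

def check_server_paths(config_path: str, server_name: str) -> list[str]:
    """Return any file paths referenced by an MCP server entry that are missing."""
    with open(config_path) as f:
        server = json.load(f)["mcpServers"][server_name]
    candidates = [server["command"], *server.get("args", [])]
    return [p for p in candidates if not os.path.exists(p)]
```

An empty result means the interpreter and script paths resolve; anything returned is a path to fix before the client can launch the server.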
For extending or adapting the server, you can add or modify structured models, prompts, and pipeline stages. Use the existing modular design to plug in new verification prompts or decomposition strategies and test changes with the built-in logging.
- Verify Single Claim: prompts the LLM to verify a single factual claim against authoritative sources, returning a status (VERIFIED, UNCERTAIN, DISPUTED) and citations.
- Create Claims Report: parses the extraction resource and generates a CLAIMS.md file with all claims labeled as TODO for incremental verification.