Ontology MCP Server
Provides SPARQL querying, model control, and OpenAI/Gemini integrations for ontology data.
Configuration
{
"mcpServers": {
"bigdata-coss-agent_mcp": {
"command": "node",
"args": [
"E:\\codes\\a2a_mcp\\build"
],
"env": {
"GEMINI_API_KEY": "YOUR_API_KEY",
"OPENAI_API_KEY": "YOUR_API_KEY",
"SPARQL_ENDPOINT": "http://localhost:7200"
}
}
}
}

Ontology MCP connects a GraphDB SPARQL endpoint with Ollama models to enable querying and manipulating ontology data using Claude and other AI models. It exposes a set of MCP endpoints for SPARQL operations, model control, and various AI services, empowering you to build rich ontology-driven AI workflows.
You interact with Ontology MCP through an MCP client that can call the available endpoints. Start the MCP server, connect your client to the configured MCP URL, and begin issuing operations to run SPARQL queries, manage Ollama models, and perform AI tasks through OpenAI, Google Gemini, or other supported services. Use the provided environment variables to set API keys and the SPARQL endpoint so your client can access the GraphDB instance and external AI providers.
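As a sketch of what that interaction looks like on the wire, an MCP client invokes a server tool by sending a JSON-RPC `tools/call` request over the configured transport. The tool name `sparql-query` and its arguments below are illustrative assumptions, not confirmed names from this server:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "sparql-query",
    "arguments": {
      "query": "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"
    }
  }
}
```

The server responds with a JSON-RPC result whose content your client renders or passes on to the model.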
Before installation you need Node.js and Docker, plus access to a GraphDB instance (the steps below start one via Docker). Follow these steps to set up and run Ontology MCP.
# Prerequisites
# Install Node.js (LTS) and Docker on your system prior to these steps
# 1. Clone the MCP project
git clone https://github.com/bigdata-coss/agent_mcp.git
cd agent_mcp
# 2. Start GraphDB via Docker
# This uses a docker-compose setup that exposes GraphDB on port 7200
docker-compose up -d
# 3. Install dependencies for the MCP server
npm install
# 4. Build the MCP server for production or testing
npm run build
# 5. Run the MCP server locally (for testing)
node build/index.js

The server relies on a SPARQL endpoint and on API keys for external AI services. Make sure your MCP runtime receives the required environment variables (SPARQL_ENDPOINT, OPENAI_API_KEY, and GEMINI_API_KEY) so it can reach the GraphDB instance and the external AI providers. When running the server locally, pass these variables through the process environment or through the configuration mechanism your MCP client uses.
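For local testing, the variables can be set in the shell before starting the server. A minimal sketch, with placeholder values; the optional health check assumes the GraphDB container from step 2 is running and uses GraphDB's REST repository listing:

```shell
# Placeholder values -- substitute your real keys and endpoint.
export SPARQL_ENDPOINT="http://localhost:7200"
export OPENAI_API_KEY="your-api-key"
export GEMINI_API_KEY="your-api-key"

# Optional: confirm GraphDB is reachable before starting the server
# (GraphDB lists its repositories at /rest/repositories).
curl -sf "$SPARQL_ENDPOINT/rest/repositories" > /dev/null \
  || echo "Warning: GraphDB not reachable at $SPARQL_ENDPOINT"

# Then start the MCP server with these variables in its environment:
# node build/index.js
```

The same three variables appear in the `env` block of the MCP client configuration, so setting them in the shell is only needed when you launch the server by hand.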
If you need to customize how the MCP server is started in a local development environment, you can use a configuration block like the following, which defines a single MCP connection via a local Node process and passes necessary environment values.
{
"mcpServers": {
"a2a-ontology-mcp": {
"command": "node",
"args": ["E:\\codes\\a2a_mcp\\build"],
"env": {
"SPARQL_ENDPOINT": "http://localhost:7200",
"OPENAI_API_KEY": "your-api-key",
"GEMINI_API_KEY" : "your-api-key"
},
"disabled": false,
"autoApprove": []
}
}
}

The server exposes the following MCP tools:

- Executes a SPARQL query against the GraphDB endpoint and returns results to your client.
- Runs SPARQL update operations to modify ontology data in GraphDB.
- Lists available SPARQL repositories in the GraphDB instance.
- Retrieves a list of graphs within a repository.
- Fetches metadata about a specific RDF resource.
- Launches an Ollama model instance for local inference.
- Displays information about a loaded Ollama model.
- Downloads a model into Ollama for local use.
- Lists available Ollama models.
- Removes a model from Ollama.
- Generates chat-based completions using an Ollama model.
- Checks the status of Ollama containers and models.
- Performs chat-style interactions with OpenAI models.
- Requests image generation from OpenAI models.
- Converts text to speech using OpenAI capabilities.
- Transcribes audio to text using OpenAI services.
- Creates text embeddings via OpenAI APIs.
- Generates text using Gemini models.
- Produces chat-style responses with Gemini models.
- Lists Gemini-supported models available for use.
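The SPARQL tools above sit on top of GraphDB's standard RDF4J-style REST interface, which you can also exercise directly to sanity-check your endpoint. A minimal sketch with curl, assuming GraphDB is running on port 7200; the repository name `test` is an assumption, since repository names depend on your GraphDB setup:

```shell
# Hypothetical repository name; list real ones at /rest/repositories.
REPO="test"
QUERY='SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10'

# Send the query; GraphDB returns SPARQL JSON results on success.
curl -sG "http://localhost:7200/repositories/$REPO" \
  -H "Accept: application/sparql-results+json" \
  --data-urlencode "query=$QUERY" \
  || echo "GraphDB not reachable on localhost:7200"
```

If this returns results, the MCP server's SPARQL tools should work against the same endpoint once SPARQL_ENDPOINT points at it.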