
Ontology MCP Server

Provides SPARQL querying, model control, and OpenAI/Gemini integrations for ontology data.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "bigdata-coss-agent_mcp": {
      "command": "node",
      "args": [
        "E:\\codes\\a2a_mcp\\build"
      ],
      "env": {
        "GEMINI_API_KEY": "YOUR_API_KEY",
        "OPENAI_API_KEY": "YOUR_API_KEY",
        "SPARQL_ENDPOINT": "http://localhost:7200"
      }
    }
  }
}

Ontology MCP connects a GraphDB SPARQL endpoint with Ollama models to enable querying and manipulating ontology data using Claude and other AI models. It exposes a set of MCP endpoints for SPARQL operations, model control, and various AI services, empowering you to build rich ontology-driven AI workflows.

How to use

You interact with Ontology MCP through an MCP client that can call the available endpoints. Start the MCP server, connect your client to the configured MCP URL, and begin issuing operations to run SPARQL queries, manage Ollama models, and perform AI tasks through OpenAI, Google Gemini, or other supported services. Use the provided environment variables to set API keys and the SPARQL endpoint so your client can access the GraphDB instance and external AI providers.
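As a sketch of what such an operation looks like on the wire, an MCP client sends a JSON-RPC `tools/call` request naming one of the tools listed below. The argument names (`repository`, `query`) and the repository name are illustrative assumptions and may differ from the server's actual schema:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "mcp_sparql_execute_query",
    "arguments": {
      "repository": "my-repo",
      "query": "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10"
    }
  }
}
```

The server responds with a `tools/call` result whose content carries the query's bindings back to the client.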

How to install

Before installing, make sure you have Node.js and Docker installed, plus access to a GraphDB instance. Then follow these steps to set up and run Ontology MCP.

# Prerequisites
# Install Node.js (LTS) and Docker on your system prior to these steps

# 1. Clone the MCP project
git clone https://github.com/bigdata-coss/agent_mcp.git
cd agent_mcp

# 2. Start GraphDB via Docker
# This uses a docker-compose setup that exposes GraphDB on port 7200
docker-compose up -d

# 3. Install dependencies for the MCP server
npm install

# 4. Build the MCP server for production or testing
npm run build

# 5. Run the MCP server locally (for testing)
node build/index.js

Configuration and usage notes

The server relies on a SPARQL endpoint and API keys for external AI services. Ensure you provide the required environment variables to your MCP runtime so it can access these services. Example variables include SPARQL_ENDPOINT, OPENAI_API_KEY, and GEMINI_API_KEY. If you run the server locally, you may want to pass these variables in your process environment or through a dedicated config mechanism used by your MCP client.
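When running locally without an MCP client config, one way to supply these variables is to export them in the shell before starting the server. This is a minimal sketch; the variable names come from the Configuration section above, and the values are placeholders:

```shell
# Set the variables the server reads at startup (placeholder values).
export SPARQL_ENDPOINT="http://localhost:7200"
export OPENAI_API_KEY="your-api-key"
export GEMINI_API_KEY="your-api-key"

# Then start the server from the project root:
# node build/index.js
```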

If you need to customize how the MCP server is started in a local development environment, you can use a configuration block like the following, which defines a single MCP connection via a local Node process and passes necessary environment values.

{
  "mcpServers": {
    "a2a-ontology-mcp": {
      "command": "node",
      "args": ["E:\\codes\\a2a_mcp\\build"],
      "env": {
        "SPARQL_ENDPOINT": "http://localhost:7200",
        "OPENAI_API_KEY": "your-api-key",
        "GEMINI_API_KEY": "your-api-key"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}

Available tools

mcp_sparql_execute_query

Executes a SPARQL query against the GraphDB endpoint and returns results to your client.

mcp_sparql_update

Runs SPARQL update operations to modify ontology data in GraphDB.

mcp_sparql_list_repositories

Lists available SPARQL repositories in the GraphDB instance.

mcp_sparql_list_graphs

Retrieves a list of graphs within a repository.

mcp_sparql_get_resource_info

Fetches metadata about a specific RDF resource.

mcp_ollama_run

Launches an Ollama model instance for local inference.

mcp_ollama_show

Displays information about a loaded Ollama model.

mcp_ollama_pull

Downloads a model into Ollama for local use.

mcp_ollama_list

Lists available Ollama models.

mcp_ollama_rm

Removes a model from Ollama.

mcp_ollama_chat_completion

Generates chat-based completions using an Ollama model.

mcp_ollama_status

Checks the status of Ollama containers and models.

mcp_openai_chat

Performs chat-style interactions with OpenAI models.

mcp_openai_image

Requests image generation from OpenAI models.

mcp_openai_tts

Converts text to speech using OpenAI capabilities.

mcp_openai_transcribe

Transcribes audio to text using OpenAI services.

mcp_openai_embedding

Creates text embeddings via OpenAI APIs.

mcp_gemini_generate_text

Generates text using Gemini models.

mcp_gemini_chat_completion

Produces chat-style responses with Gemini models.

mcp_gemini_list_models

Lists Gemini-supported models available for use.
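As a quick illustration of the SPARQL tools above, a query like the following could be passed to mcp_sparql_execute_query to inspect a repository's contents. The graph IRI shown is a placeholder, not one defined by this project:

```sparql
# List ten triples from a named graph (graph IRI is a placeholder)
SELECT ?s ?p ?o
FROM <http://example.org/my-graph>
WHERE { ?s ?p ?o }
LIMIT 10
```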