
Chat MCP Server

Provides a unified interface to test and run multiple MCP-backed LLM servers via HTTP endpoints and local stdio-based runtimes.

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "ai-ql-chat-mcp": {
      "url": "https://api.aiql.com/mcp"
    }
  }
}

You can use this MCP-based desktop experience to connect to multiple large language models (LLMs) through a unified MCP interface. It lets you test and run different backends from a single client, manage multiple connections, and adapt the UI for web or desktop use, all while keeping MCP principles clear and approachable.

How to use

Launch the client and configure multiple MCP servers to connect to different LLM backends. You can switch between servers and presets to compare responses, prompts, and configurations. Load a configuration that combines remote HTTP MCP servers with local, filesystem-based MCP servers for testing and development. Use the multi-client setup to run several backends side by side and evaluate performance, latency, and feature support across providers.

How to install

Before installing, make sure Node.js and npm are available, along with internet access to fetch dependencies.

# 1) Check Node.js and npm versions
node -v
npm -v

# 2) Install dependencies for the project
npm install

# 3) Start the application (or build first if required by your setup)
npm start

Configuration and usage notes

Set up MCP server connections by defining HTTP (remote) servers and any local or stdio-based servers you want to run. The HTTP servers provide remote MCP endpoints, while stdio servers run locally using node-based MCP runtimes. Include any required environment variables for authentication or API access.

{
  "mcpServers": {
    "gpt": {
      "type": "http",
      "name": "gpt",
      "url": "https://api.aiql.com/mcp",
      "args": []
    },
    "qwen": {
      "type": "http",
      "name": "qwen",
      "url": "https://dashscope.aliyuncs.com/compatible-mode",
      "args": []
    },
    "deepinfra": {
      "type": "http",
      "name": "deepinfra",
      "url": "https://api.deepinfra.com",
      "args": []
    },
    "filesystem": {
      "type": "stdio",
      "name": "filesystem",
      "command": "node",
      "args": [
        "node_modules/@modelcontextprotocol/server-filesystem/dist/index.js",
        "D:/Github/mcp-test"
      ],
      "env": [
        {"name": "apiKey", "value": ""}
      ]
    }
  }
}

Troubleshooting and tips

If local MCP servers fail to start, verify that the configured paths and commands are correct, using absolute paths where possible. Ensure that environment variables required by remote APIs are present and correctly named. If a server stalls on startup, check network access to the HTTP MCP endpoints and confirm that any required API keys are supplied in your environment or configuration.
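As a quick sanity check, the snippet below probes one of the remote endpoints from the example configuration and verifies that the local filesystem server script is present. The URL and path are taken from the sample config above and may differ in your setup.

```shell
# Check that a remote MCP endpoint is reachable (prints the HTTP status code,
# or "unreachable" if the request fails or times out)
curl -sS --max-time 5 -o /dev/null -w "%{http_code}\n" https://api.aiql.com/mcp \
  || echo "unreachable"

# Check that the stdio server script referenced in the config exists locally
if [ -f node_modules/@modelcontextprotocol/server-filesystem/dist/index.js ]; then
  echo "filesystem server script found"
else
  echo "filesystem server script missing; run npm install"
fi
```

A non-2xx status code or "unreachable" points to a network or endpoint problem; a missing script points to an incomplete dependency install.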

Security and best practices

Use strong API keys, limit access to your local MCP servers, and keep dependencies up to date. When testing multiple backends, isolate test data and prompts to prevent cross-provider leakage.

Notes and examples

The configuration supports a mix of HTTP MCP endpoints and local stdio servers. You can modify the JSON to add or remove servers as needed and reuse preset models across backends.
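For instance, to add another local server alongside the existing ones, you could append an entry that launches the filesystem server through npx rather than a direct node path. This is an illustrative sketch: the "filesystem-npx" key and the npx invocation are assumptions, not part of the original configuration.

```json
{
  "mcpServers": {
    "filesystem-npx": {
      "type": "stdio",
      "name": "filesystem-npx",
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "D:/Github/mcp-test"
      ],
      "env": []
    }
  }
}
```

Launching via npx avoids hard-coding the path into node_modules, at the cost of a slower first start while the package is resolved.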

Available tools

Dynamic LLM Config

Configurable selection of LLM backends and model presets to test across MCP-enabled servers.

Multi-Client Manager

Configure and manage multiple MCP client connections to different servers from a single interface.

UI Adaptability

Extractable UI components for web and desktop use to maintain consistent interaction logic.

MCP Testing Toolkit

Tools to evaluate prompts, latency, and feature support across backends.