
XiYan MCP Server

A Model Context Protocol (MCP) server that enables natural language queries to databases

Installation
Add the following to your MCP client configuration file.

Configuration

{
  "mcpServers": {
    "xgenerationlab-xiyan_mcp_server": {
      "command": "python",
      "args": [
        "-m",
        "xiyan_mcp_server"
      ],
      "env": {
        "YML": "path/to/config.yml"
      }
    }
  }
}

You can run XiYan MCP Server to enable natural language queries against your databases. This server bridges language models with SQL databases, letting you fetch data and read table contents through intuitive prompts. It supports multiple LLM backends, including local and remote models, and offers secure local operation as an option.

How to use

Start the MCP server locally and point your MCP client at the command shown above. The server accepts queries and returns SQL results: send a natural language question like “Show me customers from last month” from your client, and the server translates it to SQL, executes it against your configured database, and returns the results.

How to install

Prerequisites: ensure you have Python 3.11 or newer installed.

Install the server from PyPI to get the latest release by running the following command in your terminal:

Install the server from PyPI

pip install xiyan-mcp-server

Install from source

If you prefer installing from source, clone the repository and install from there.

Launch the server

Prepare a YAML configuration file (for example, config.yml) with your MCP settings and database connection details, then run the server using the Python module. The following command starts the server and serves the MCP endpoint locally via standard input/output.

Start the server with standard I/O transport

YML=path/to/config.yml python -m xiyan_mcp_server

Start the server with SSE transport

If you prefer server-sent events (SSE) transport, configure the transport as sse in the config and run the same command. The server will expose the SSE endpoint at the configured port.

Configuration

The server is configured through a YAML file. A default configuration example is shown below; update the values to match your environment and database details.

Configuration example

mcp:
  transport: "stdio"          # or "sse"
model:
  name: "XGenerationLab/XiYanSQL-QwenCoder-32B-2412"
  key: ""
  url: "https://api-inference.modelscope.cn/v1/"
database:
  host: "localhost"
  port: 3306
  user: "root"
  password: ""
  database: ""

MCP configuration details

Set the transport protocol to stdio or sse in the mcp section. The supported options and their defaults are shown below.

MCP transport options

mcp:
  transport: "stdio"  # default; suited to simple local development

SSE configuration

mcp:
  transport: "sse"
  port: 8000
  log_level: "INFO"

LLM configuration

Configure the large language model (LLM) used to translate natural language to SQL. You may use a general LLM, a state-of-the-art model via Modelscope or Dashscope, or run a local LLM. Each option requires its own settings, including model name, API key, and endpoint URL.

General LLMs

model:
  name: "gpt-3.5-turbo"
  key: "YOUR KEY"
  url: "https://api.openai.com/v1"

Text-to-SQL SOTA model

XiYanSQL-QwenCoder-32B-2412 is a SOTA text-to-SQL model. You can use it via Modelscope or Dashscope, with the appropriate API keys.
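As a sketch, a Dashscope-backed configuration might look like the following. The endpoint shown is Dashscope's OpenAI-compatible base URL; the model identifier on Dashscope may differ from the Modelscope name shown in the default example, so check your provider console for the exact values:

```yaml
model:
  name: "XGenerationLab/XiYanSQL-QwenCoder-32B-2412"  # may differ on Dashscope
  key: "YOUR_DASHSCOPE_API_KEY"
  url: "https://dashscope.aliyuncs.com/compatible-mode/v1"
```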

Model options overview

Three main paths exist: remote general LLMs, specialized SOTA models via Modelscope or Dashscope, and an optional local LLM. Each path requires different configuration fields, including model name, API key, and endpoint URL.

Database configuration

Configure the database connection details. The MCP server currently supports MySQL and PostgreSQL. Provide host, port, user, password, and database name.

MySQL example

database:
  host: "localhost"
  port: 3306
  user: "root"
  password: ""
  database: ""

PostgreSQL example

database:
  dialect: "postgresql"
  host: "localhost"
  port: 5432
  user: ""
  password: ""
  database: ""
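The database section maps directly onto a standard connection URL. The server builds its own connection internally; the helper below is only an illustrative sketch of how the fields combine:

```python
def connection_url(cfg: dict) -> str:
    # Map the YAML "database" section onto a SQLAlchemy-style URL.
    # The "dialect" key is optional and defaults to MySQL, matching the examples.
    dialect = cfg.get("dialect", "mysql")
    return (f"{dialect}://{cfg['user']}:{cfg['password']}"
            f"@{cfg['host']}:{cfg['port']}/{cfg['database']}")

mysql_cfg = {"host": "localhost", "port": 3306,
             "user": "root", "password": "secret", "database": "shop"}
print(connection_url(mysql_cfg))  # mysql://root:secret@localhost:3306/shop
```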

Launch notes and client setup

When you launch the server, you can connect clients such as Claude Desktop, Cline, Goose, Cursor, or Witsy by configuring each one with the MCP server details. Follow each client’s setup instructions to point it at the SSE URL or at the stdio command together with your YAML config.

Tools you can use with MCP

The MCP server exposes tools that help you query the database and sample data. The key tools are get_data for translating natural language into SQL and data-resource endpoints like {dialect}://{table_name} to preview sample data and list database names.

Troubleshooting and notes

If you experience issues starting the server, verify that the YAML file path is correct and that the Python environment has the necessary dependencies installed. Ensure the database credentials are correct and that the database is reachable from the server host.
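A quick sanity check on the parsed configuration can catch missing sections before the server starts. A minimal sketch, assuming the YAML has already been parsed into a dict (e.g. with PyYAML); the helper name is ours:

```python
# Required settings per section, following the configuration example above.
REQUIRED = {
    "model": ["name", "url"],
    "database": ["host", "port", "user", "password", "database"],
}

def missing_keys(config: dict) -> list[str]:
    # Return dotted paths for any required setting that is absent.
    problems = []
    for section, keys in REQUIRED.items():
        if section not in config:
            problems.append(section)
            continue
        problems.extend(f"{section}.{k}" for k in keys if k not in config[section])
    return problems

cfg = {"model": {"name": "gpt-3.5-turbo", "url": "https://api.openai.com/v1"},
       "database": {"host": "localhost", "port": 3306, "user": "root"}}
print(missing_keys(cfg))  # ['database.password', 'database.database']
```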

Security and best practices

Use the local (stdio) transport for higher security by avoiding external API keys and keeping model inference on your own infrastructure when possible. If you use remote models, safeguard API keys and manage access to your MCP server.
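One common pattern, sketched below, is to keep the API key out of config.yml and read it from the environment at startup. The variable name is our own; the server itself expects the key in the YAML key field:

```python
import os

def load_api_key(var: str = "XIYAN_LLM_KEY") -> str:
    # Read the model API key from the environment rather than committing it
    # to config.yml; fail fast if it is not set.
    key = os.environ.get(var, "")
    if not key:
        raise RuntimeError(f"Set {var} before starting the server")
    return key

os.environ.setdefault("XIYAN_LLM_KEY", "demo-key")  # for this sketch only
print(load_api_key())  # demo-key
```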

Related information

If you want more context on XiYanSQL and related models, explore the XiYanSQL-QwenCoder family and Modelscope/Dashscope integrations for performance and deployment details.

Notes on usage patterns

You can list available tables as resources and read table contents to help your models reference the database structure during query construction. Use natural language prompts to retrieve data and inspect your schema as needed.

Available tools

get_data

Translates natural language questions into SQL queries using the internal model and retrieves results from the configured database.

dialect_resource

Accesses sample data or lists databases via the dialect-specific resource endpoints like {dialect}://{table_name}, helping the model reference the database structure during query construction.
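The resource endpoint pattern above is just a formatted URI. A minimal sketch of how a client might construct one (the helper name is ours):

```python
def resource_uri(dialect: str, table_name: str) -> str:
    # Build a dialect-specific resource URI such as "mysql://orders".
    return f"{dialect}://{table_name}"

print(resource_uri("mysql", "orders"))         # mysql://orders
print(resource_uri("postgresql", "invoices"))  # postgresql://invoices
```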