Snowflake MCP Server
Provides Snowflake data access and schema context via MCP tools and memo insights.
Configuration
```json
{
  "mcpServers": {
    "isaacwasserman-mcp-snowflake-server": {
      "command": "uvx",
      "args": [
        "--python=3.12",
        "mcp_snowflake_server",
        "--connections-file",
        "/path/to/snowflake_connections.toml",
        "--connection-name",
        "production"
      ],
      "env": {
        "SNOWFLAKE_ROLE": "YOUR_ROLE_PLACEHOLDER",
        "SNOWFLAKE_USER": "YOUR_USER_PLACEHOLDER",
        "SNOWFLAKE_SCHEMA": "YOUR_SCHEMA_PLACEHOLDER",
        "SNOWFLAKE_ACCOUNT": "YOUR_ACCOUNT_PLACEHOLDER",
        "SNOWFLAKE_DATABASE": "YOUR_DATABASE_PLACEHOLDER",
        "SNOWFLAKE_PASSWORD": "YOUR_PASSWORD_PLACEHOLDER",
        "SNOWFLAKE_WAREHOUSE": "YOUR_WAREHOUSE_PLACEHOLDER"
      }
    }
  }
}
```

You can query Snowflake data and fetch schema insights through a dedicated MCP server that exposes SQL query capabilities, schema information, and an evolving memo of data insights. The server lets you run read operations (and write operations, if enabled), discover databases, schemas, and tables, and inspect per-table structure, all via a lightweight MCP interface you connect to from your MCP client.
Connect to the Snowflake MCP server from your MCP client and choose the tools you need for your workflow. Use read_query to execute SELECT statements and retrieve results as structured data. If you need to modify data, enable write capabilities and run write_query to perform INSERT, UPDATE, or DELETE operations. Explore the database structure with list_databases, list_schemas, and list_tables to understand available data sources. Describe tables with describe_table to view column details including names, types, nullability, defaults, and comments. Keep an eye on the memo resource by using append_insight to add new data insights; this updates memo://insights automatically.
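Under the hood, each of these tool invocations travels as a JSON-RPC 2.0 `tools/call` request from your MCP client to the server. The sketch below builds a few such payloads by hand to show the shape of the calls; the table name, query text, and insight string are illustrative, and the argument keys (`query`, `table_name`, `insight`) are assumptions based on this server's published tool schemas:

```python
import json

def tool_call(request_id, name, arguments):
    """Build a JSON-RPC 2.0 "tools/call" request as an MCP client would send it."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# Read rows with read_query (SELECT statements only)
read = tool_call(1, "read_query", {
    "query": "SELECT region, COUNT(*) AS n FROM orders GROUP BY region",
})

# Inspect a table's columns with describe_table
desc = tool_call(2, "describe_table", {"table_name": "orders"})

# Record a finding; the server appends it to memo://insights
note = tool_call(3, "append_insight", {"insight": "Orders cluster in two regions."})

print(json.dumps(read, indent=2))
```

In practice your MCP client handles this framing for you; the payloads are shown only to make the tool/argument structure concrete.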
Prerequisites: Node.js and Python must be available on your system, along with the uv package manager (whose uvx command is used to run MCP servers locally) and a working Snowflake account with credentials. Follow the concrete steps below to install and prepare the Snowflake MCP server for use.
```bash
# Prerequisites
node -v      # ensure Node.js is installed (required by some MCP clients)
python3 -V   # ensure Python is installed

# Install uv, which provides uvx (used to run MCP servers locally)
curl -LsSf https://astral.sh/uv/install.sh | sh
```

To run the MCP server locally via uvx, use a configuration that specifies the uvx command, the MCP server package, and your Snowflake connection details. Two common configurations are shown below; use the production or staging connection name that matches your Snowflake setup and reference the appropriate credentials.
"mcpServers": {
"snowflake_prod": {
"command": "uvx",
"args": [
"--python=3.12",
"mcp_snowflake_server",
"--connections-file", "/path/to/snowflake_connections.toml",
"--connection-name", "production"
// Optional: "--allow_write" to enable write operations
// Optional: "--log_dir", "/absolute/path/to/logs"
// Optional: "--log_level", "DEBUG"/"INFO"/"WARNING"/"ERROR"/"CRITICAL"
// Optional: "--exclude_tools", "tool_name", ["other_tool"]
]
},
"snowflake_stg": {
"command": "uvx",
"args": [
"--python=3.12",
"mcp_snowflake_server",
"--connections-file", "/path/to/snowflake_connections.toml",
"--connection-name", "staging"
]
}
}By default, write operations are disabled. Enable them explicitly with the --allow_write option when you need to modify data. If you encounter connection issues, verify your Snowflake credentials, network access to Snowflake, and the correctness of the connections file path and connection-name. For performance or debugging, adjust log levels and log directories as needed.
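The --connections-file flag points at a Snowflake connections.toml. A minimal sketch follows, assuming section names that match the --connection-name values above and the key names used by the Snowflake Python connector's connections.toml format; every value is a placeholder:

```toml
# /path/to/snowflake_connections.toml
[production]
account = "your-account-identifier"
user = "your-user"
password = "your-password"
warehouse = "your-warehouse"
database = "your-database"
schema = "your-schema"
role = "your-role"

[staging]
account = "your-account-identifier"
user = "your-staging-user"
password = "your-staging-password"
warehouse = "your-staging-warehouse"
database = "your-staging-database"
schema = "your-schema"
role = "your-role"
```

With this file in place, you can also sanity-check the server outside an MCP client by running the same command the config passes to uvx: `uvx --python=3.12 mcp_snowflake_server --connections-file /path/to/snowflake_connections.toml --connection-name production`.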
Tools
- read_query: Execute SELECT queries to read data from Snowflake; results are returned as an array of objects.
- write_query: Execute INSERT, UPDATE, or DELETE queries. Requires explicit enablement with --allow_write.
- create_table: Create new tables in Snowflake. Requires explicit enablement with --allow_write.
- list_databases: List all databases in the Snowflake instance.
- list_schemas: List all schemas within a specified database.
- list_tables: List all tables within a specified database and schema.
- describe_table: Return column definitions for a specific table, including names, types, nullability, defaults, and comments.
- append_insight: Add a new data insight to the memo resource and trigger an update to memo://insights.
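The memo behind append_insight is exposed as an MCP resource, so after an insight is appended a client refreshes it with a standard `resources/read` request. A minimal sketch of that wire format (the request id is arbitrary; the URI comes from the description above):

```python
import json

# JSON-RPC 2.0 request an MCP client sends to (re)read the insights memo
read_memo = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "resources/read",
    "params": {"uri": "memo://insights"},
}

print(json.dumps(read_memo))
```

Clients that support resource subscriptions can instead listen for update notifications rather than polling the memo after each insight.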