Provides Tideways performance data and conversational insights for PHP apps to AI assistants via MCP.
Configuration
{
  "mcpServers": {
    "abuhamza-tideways-mcp-server": {
      "command": "npx",
      "args": [
        "tideways-mcp-server"
      ],
      "env": {
        "LOG_LEVEL": "info",
        "TIDEWAYS_ORG": "YOUR_ORG",
        "TIDEWAYS_TOKEN": "YOUR_TOKEN",
        "TIDEWAYS_PROJECT": "YOUR_PROJECT",
        "TIDEWAYS_BASE_URL": "https://app.tideways.io/apps/api",
        "TIDEWAYS_MAX_RETRIES": "3",
        "TIDEWAYS_REQUEST_TIMEOUT": "30000"
      }
    }
  }
}

You can query Tideways performance data with an MCP server that exposes Tideways metrics and insights to AI assistants. This server translates performance data into natural-language insights, supports real-time queries, and helps you analyze errors, traces, and optimization opportunities for your PHP applications.
Once you configure the MCP server in your AI assistant’s MCP settings, you can ask natural-language questions like “What’s the current performance of my application?” or “Show me the slowest transactions right now.” You can also request deeper analysis such as tracing a specific endpoint, identifying bottlenecks, or getting optimization recommendations. The server uses a stdio transport to communicate with compatible AI assistants, returning structured data that your assistant can render in context.
Prerequisites:
- Node.js and npm installed on your machine.
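You can confirm the prerequisite from a terminal. Any reasonably recent Node.js release with its bundled npm should be enough; the package does not document a minimum version here, so treat that as an assumption:

```shell
# Check that Node.js and npm are on PATH; each prints its version.
node --version
npm --version
```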
Step 1: Install the MCP server globally (optional but convenient)
npm install -g tideways-mcp-server

Step 2: Or use npx to run without a global install (recommended for testing or quick starts)

npx tideways-mcp-server

Step 3: Prepare environment variables for your Tideways account
TIDEWAYS_TOKEN=YOUR_TOKEN
TIDEWAYS_ORG=YOUR_ORG
TIDEWAYS_PROJECT=YOUR_PROJECT

Step 4: Start the server with your MCP client configuration

TIDEWAYS_TOKEN=YOUR_TOKEN TIDEWAYS_ORG=YOUR_ORG TIDEWAYS_PROJECT=YOUR_PROJECT npx tideways-mcp-server

This MCP server is designed to work with AI assistants that support MCP via stdio. You provide the connection details in the assistant's MCP configuration file and set the required environment variables when starting the server. The server exposes tools that return raw JSON data, which your AI assistant can present in conversational form.
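Because the transport is stdio, you can smoke-test the server outside an assistant by writing a JSON-RPC message to its standard input. This is a hypothetical check rather than a documented workflow: the `initialize` method and `protocolVersion` value come from the MCP specification, and the final (commented-out) pipe assumes the environment variables are already exported:

```shell
# An MCP "initialize" request, sent as one line of JSON-RPC over stdio.
REQ='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}'

# Validate the payload locally before sending it anywhere.
printf '%s' "$REQ" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "valid JSON-RPC payload"

# Hypothetical smoke test against the real server (requires credentials):
# printf '%s\n' "$REQ" | npx tideways-mcp-server
```

A well-behaved MCP server should answer with an `initialize` result describing its capabilities before you issue any tool calls.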
Keep sensitive credentials secure by loading them from environment variables and never hard-coding tokens in source files. The server redacts authorization headers in logs and validates inputs for all MCP interactions.
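One common way to follow this advice is a local `.env` file that is sourced at startup and excluded from version control. The file name and layout below are a convention, not something the server requires:

```shell
# Create a .env file with placeholder credentials (keep it out of git).
cat > .env <<'EOF'
TIDEWAYS_TOKEN=YOUR_TOKEN
TIDEWAYS_ORG=YOUR_ORG
TIDEWAYS_PROJECT=YOUR_PROJECT
EOF

# set -a exports every variable assigned while sourcing the file.
set -a
. ./.env
set +a

echo "configured org: $TIDEWAYS_ORG"
```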
If you encounter authentication errors, verify that TIDEWAYS_TOKEN, TIDEWAYS_ORG, and TIDEWAYS_PROJECT are correct and have the required scopes. If you hit rate limits, adjust your query frequency or rely on the server’s retry logic. For MCP connection issues, ensure your assistant’s MCP configuration is correct and that the server process is running with the proper environment variables.
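A quick way to rule out the most common cause, missing environment variables, is to check each required name before starting the server. The variable names are the ones from the configuration above:

```shell
# Report whether each required variable is present in the environment.
for var in TIDEWAYS_TOKEN TIDEWAYS_ORG TIDEWAYS_PROJECT; do
  if [ -n "$(printenv "$var")" ]; then
    echo "set:     $var"
  else
    echo "missing: $var"
  fi
done
```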
After configuration, ask your AI assistant questions about current performance, recent errors, or trace-level bottlenecks. The server provides structured JSON suitable for direct interpretation by the assistant, enabling natural-language reporting and actionable recommendations.
Available tools:
- Retrieve aggregate performance metrics and system-wide statistics.
- Retrieve time-series performance summary data in 15-minute intervals for trend analysis.
- Retrieve and analyze recent errors, exceptions, and performance issues.
- Analyze individual trace samples for bottlenecks and optimization recommendations.
- Retrieve historical performance data for specific dates with configurable granularity.