Provides synthetic data generation via DataMaker templates, template/connection management, large data handling with S3, and Python script execution through MCP.
Configuration
{
  "mcpServers": {
    "automators-com-datamaker-mcp": {
      "command": "npx",
      "args": [
        "-y",
        "@automators/datamaker-mcp"
      ],
      "env": {
        "DATAMAKER_API_KEY": "your-datamaker-api-key"
      }
    }
  }
}

You can run the DataMaker MCP Server to connect DataMaker’s data generation capabilities with the Model Context Protocol, enabling AI models to generate synthetic data, manage templates and connections, push data to DataMaker, handle large datasets with S3, and run Python scripts on demand from within the MCP workflow.
Use an MCP client to connect to the DataMaker MCP Server. Through dedicated tools you can generate synthetic data from DataMaker templates, fetch and manage templates and connections, push results to DataMaker, and run dynamic Python scripts. When a fetch returns many endpoints, the server can store the full dataset in your S3 bucket and return a concise summary with a secure link to the complete data. To invoke a Python script, you provide the code and a filename; the server uploads the script to S3, runs it with the DataMaker runner, and returns the output.
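Under the hood, MCP clients invoke server tools with the protocol's JSON-RPC `tools/call` method. A minimal sketch of the wire shape follows; the tool name "generate-from-template" and its arguments are illustrative assumptions, not the server's documented API (list the server's tools from your client to see the real names):

```typescript
// Sketch of the JSON-RPC message an MCP client sends to invoke a server tool.
// Tool name and arguments below are hypothetical placeholders.
const toolCall = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "generate-from-template",                   // hypothetical tool name
    arguments: { templateId: "tmpl_123", count: 50 }, // hypothetical arguments
  },
};

// In practice an MCP client library frames and transports this for you;
// printing it here just shows the request shape.
console.log(JSON.stringify(toolCall, null, 2));
```

Your MCP client (Claude Desktop, the Inspector, or an SDK client) handles this framing automatically once the server is configured.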
Prerequisites include Node.js (LTS recommended) and pnpm (v10.5.2 or later). You also need a DataMaker account with API access and an AWS S3 bucket with credentials if you plan to store large datasets.
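If you use the S3 features, the server needs AWS credentials alongside the DataMaker key. A minimal environment sketch, assuming the server reads the standard AWS SDK credential variables (this page does not document the exact variable names, so treat them as assumptions):

```shell
export DATAMAKER_API_KEY="your-datamaker-api-key"

# Standard AWS SDK credential variables; assumed, not confirmed by this page.
export AWS_ACCESS_KEY_ID="your-access-key-id"
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
export AWS_REGION="us-east-1"
```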
Development steps include configuring the MCP server, starting the development environment, and using the MCP Inspector to debug and verify behavior.
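The development loop above might look like the following, assuming conventional pnpm script names (`dev` is a guess; check the repository's package.json) and the official MCP Inspector:

```shell
pnpm install   # install dependencies
pnpm dev       # start the development environment (assumed script name)

# Debug and verify tool behavior with the MCP Inspector:
npx @modelcontextprotocol/inspector npx -y @automators/datamaker-mcp
```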
Fetch DataMaker endpoints. If more than 10 endpoints are returned, the server stores the full dataset in your S3 bucket, returns a summary with the first five endpoints, and provides a secure 24-hour link to view the complete dataset.
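The thresholding described above can be sketched as a small helper. The function name, the result shape, and the presigned URL handling are illustrative assumptions; only the 10-endpoint threshold and 5-endpoint summary come from the description:

```typescript
// Sketch of the large-dataset behavior: past 10 endpoints, return only the
// first 5 plus a link to the full dataset stored in S3.
interface Endpoint {
  id: string;
  name: string;
}

interface EndpointSummary {
  endpoints: Endpoint[];
  total: number;
  fullDatasetUrl?: string; // 24-hour presigned S3 link, produced elsewhere
}

function summarizeEndpoints(
  endpoints: Endpoint[],
  presignedUrl: string,
): EndpointSummary {
  if (endpoints.length <= 10) {
    // Small result: return everything inline, no S3 link needed.
    return { endpoints, total: endpoints.length };
  }
  // Large result: truncate to the first five and attach the full-dataset link.
  return {
    endpoints: endpoints.slice(0, 5),
    total: endpoints.length,
    fullDatasetUrl: presignedUrl,
  };
}

const many = Array.from({ length: 12 }, (_, i) => ({ id: String(i), name: `ep-${i}` }));
const summary = summarizeEndpoints(many, "https://s3.example.com/datasets/full.json");
console.log(summary.endpoints.length, summary.total); // prints: 5 12
```

The design keeps the MCP response small enough for a model's context window while leaving the complete dataset retrievable through the time-limited link.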
Upload a Python script to S3 and execute it via the DataMaker runner. You provide the script code and a filename; the server handles the upload and execution, then returns the output.
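The upload-then-run flow can be sketched with injected dependencies. `uploadToS3` and `runOnDataMaker` are stand-ins for the real S3 upload and DataMaker runner calls; their names, signatures, and the `scripts/` key prefix are assumptions, not the server's actual internals:

```typescript
// Sketch of the script-execution flow: upload the code, run it, return output.
async function executePythonScript(
  code: string,
  filename: string,
  uploadToS3: (key: string, body: string) => Promise<string>, // returns an S3 URI
  runOnDataMaker: (s3Uri: string) => Promise<string>,         // returns script output
): Promise<string> {
  const s3Uri = await uploadToS3(`scripts/${filename}`, code);
  return runOnDataMaker(s3Uri);
}

// Demo with fake dependencies, so the control flow is visible end to end.
const fakeUpload = async (key: string, _body: string) => `s3://example-bucket/${key}`;
const fakeRun = async (uri: string) => `ran ${uri}`;

executePythonScript("print('hi')", "demo.py", fakeUpload, fakeRun)
  .then((out) => console.log(out)); // prints: ran s3://example-bucket/scripts/demo.py
```

Separating the storage and execution steps this way also makes the flow easy to test without touching S3 or the runner, as the demo shows.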