Provides DNA sequence generation, scoring, embedding, and variant-effect analysis via multiple execution modes.
Configuration
{
  "mcpServers": {
    "bio-mcp-bio-mcp-evo2": {
      "command": "python",
      "args": [
        "-m",
        "src.server"
      ],
      "env": {
        "BIO_MCP_EVO2_MODEL_SIZE": "7b",
        "BIO_MCP_EVO2_CUDA_DEVICE": "0",
        "BIO_MCP_EVO2_NIM_API_KEY": "YOUR_API_KEY",
        "BIO_MCP_EVO2_EXECUTION_MODE": "api"
      }
    }
  }
}

You can run the evo2 MCP server to generate, score, embed, and predict variant effects for DNA sequences using a genomic foundation model. It supports local GPU execution, cluster submission, containerized runtimes, and remote API access, so you can match it to your hardware and workflow.
Choose an execution mode that matches your environment, start the evo2 MCP server, and then interact with it through your MCP client. The server exposes four primary capabilities you'll use in practice: generating DNA sequences from prompts, scoring sequences (or obtaining raw logits), extracting learned representations from sequences, and predicting the effects of mutations. Clients request these tools by name using standard MCP tool calls.
Prerequisites: Python and a suitable execution environment for your chosen mode. Optionally, Docker or Singularity for containerized execution, or a SLURM-based cluster if you are using SBATCH mode.
# Local development setup
git clone https://github.com/bio-mcp/bio-mcp-evo2.git
cd bio-mcp-evo2
pip install -e ".[dev]"

Use container-based execution for isolated environments, or an HPC cluster to leverage SLURM or container runtimes.
# Docker container build and run (GPU-enabled)
docker build -t bio-mcp-evo2 .
docker run --gpus all bio-mcp-evo2
# Singularity container build and run (HPC)
singularity build --fakeroot evo2.sif Singularity.def
singularity run --nv evo2.sif

If you enable API mode, store your API key securely. When using local or SBATCH modes, expose ports and endpoints only to trusted clients and follow your organization's security guidelines for GPU usage and data handling.
The server reads a set of environment variables to select execution mode, model size, CUDA device, and optional SBATCH or API settings. Use these when starting the server to tailor behavior to your environment.
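For example, when launching the server directly instead of through an MCP client config, the same settings can be supplied as environment variables. This is a sketch: only "api" appears as a documented value for the execution mode, so the "local" value used here is an assumption.

```shell
# Illustrative launch for a local-GPU run. The "local" value for
# BIO_MCP_EVO2_EXECUTION_MODE is assumed; only "api" is shown in the docs.
export BIO_MCP_EVO2_EXECUTION_MODE=local
export BIO_MCP_EVO2_MODEL_SIZE=7b
export BIO_MCP_EVO2_CUDA_DEVICE=0
python -m src.server
```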
Common workflows include starting the server in your chosen mode and then issuing calls to evo2_generate, evo2_score, evo2_embed, and evo2_variant_effect through your MCP client. For example, in API mode you would configure your client with the hosted API key and start the server to access the hosted services without local GPU requirements.
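The workflow above can be sketched from the client side with the MCP Python SDK. The tool argument names here ("prompt", "sequence") are assumptions rather than the server's confirmed schema; the SDK is imported inside the function so the sketch can be defined without the package installed.

```python
# Sketch of a client workflow against the evo2 MCP server over stdio.
# Tool argument names are illustrative, not the server's published schema.
import asyncio

async def run_workflow() -> None:
    # Lazy imports: the MCP SDK is only needed when the workflow actually runs.
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Start the server as a stdio subprocess, mirroring the JSON config above.
    params = StdioServerParameters(command="python", args=["-m", "src.server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Generate a sequence, then score it (argument names assumed).
            generated = await session.call_tool("evo2_generate", {"prompt": "ACGT"})
            scored = await session.call_tool("evo2_score", {"sequence": "ACGT"})
            print(generated, scored)

# With a configured environment, run it via: asyncio.run(run_workflow())
```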
- evo2_generate: Generate DNA sequences from a given prompt with tunable sampling controls.
- evo2_score: Compute sequence perplexity or obtain raw model logits for a sequence.
- evo2_embed: Extract learned representations from sequences at specified model layers.
- evo2_variant_effect: Predict the effect of mutations on biological function, given a reference and a variant sequence.
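To make the reference/variant inputs for variant-effect prediction concrete, a minimal request payload might look like the following. The field names "reference" and "variant" are hypothetical, chosen to mirror the description above rather than taken from the server's schema.

```python
# Hypothetical evo2_variant_effect payload; field names are illustrative.
reference = "ATGGCCAAGT"   # wild-type sequence
variant = "ATGGCCGAGT"     # same sequence with a single A->G substitution
variant_request = {"reference": reference, "variant": variant}

# Sanity check: the variant differs from the reference at exactly one position.
n_diff = sum(a != b for a, b in zip(reference, variant))
```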