
01_run_eels_tests skill

/01_run_eels_tests

This skill automates running the Ethereum Execution Layer Specification (EELS) tests against a local EVM implementation, handling venv setup, test execution, and results parsing.

npx playbooks add skill sounder25/google-antigravity-skills-library --skill 01_run_eels_tests

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: Run EELS Test Suite
description: Automates running the Ethereum Execution Layer Specification (EELS) tests against a local EVM implementation. Handles venv setup, execution, and result parsing.
version: 1.0.0
author: Antigravity Skills Library
created: 2026-01-15
leverage_score: 5/5
---

# SKILL-001: Run EELS Test Suite

## Overview

Automates the execution of EELS compliance tests. This skill handles the complexity of setting up the Python environment, installing dependencies, invoking `pytest` against a target EVM binary, and parsing the results into standardized reports.

## Trigger Phrases

- `run eels tests`
- `eels compliance check`
- `verify evm implementation`
- `run execution specs`

## Inputs

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `--evm-binary` | string | Yes | - | Path to the EVM executable (e.g., `ELR.CLI.exe`) |
| `--test-filter` | string | No | - | Optional pytest filter (e.g., `-k "add or sub"`) |
| `--specs-path` | string | No | Auto-detect | Path to `execution-specs` repo |
| `--output-dir` | string | No | `./.forensics` | Directory to save reports |
| `--json` | switch | No | False | Return raw JSON output only |
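
The parameter contract above can be sketched as an argument parser. The real skill is a PowerShell script; this Python `argparse` version is only an illustration of the inputs table, not the actual implementation.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Mirror the skill's input table (illustrative sketch only)."""
    p = argparse.ArgumentParser(prog="run_eels_tests")
    p.add_argument("--evm-binary", required=True,
                   help="Path to the EVM executable")
    p.add_argument("--test-filter", default=None,
                   help="Optional pytest -k filter expression")
    p.add_argument("--specs-path", default=None,
                   help="Path to execution-specs repo (auto-detect if omitted)")
    p.add_argument("--output-dir", default="./.forensics",
                   help="Directory to save reports")
    p.add_argument("--json", action="store_true",
                   help="Return raw JSON output only")
    return p

# Only --evm-binary is required; everything else falls back to its default.
args = build_parser().parse_args(["--evm-binary", "./bin/ELR.CLI.exe"])
print(args.output_dir)  # → ./.forensics
```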

## Outputs

1. **Console Output:** Real-time test execution progress.
2. **Report File:** `EELS_TEST_RESULTS_<timestamp>.md` with pass/fail summary and details.
3. **JSON Results:** `eels_results.json` (written when `--json` is passed or JSON output is otherwise requested).
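
The Markdown summary can be derived directly from the JSON results. The sketch below assumes a hypothetical `eels_results.json` schema with a `results` list of `{test, outcome}` records; the skill's actual schema may differ.

```python
import json

# Hypothetical schema for eels_results.json; the real skill's format may differ.
SAMPLE = {
    "results": [
        {"test": "tests/shanghai/eip3855_push0::test_push0", "outcome": "passed"},
        {"test": "tests/shanghai/eip3855_push0::test_push0_gas", "outcome": "failed"},
    ]
}

def summarize(results: dict) -> str:
    """Render a pass/fail Markdown summary from parsed JSON results."""
    outcomes = [r["outcome"] for r in results["results"]]
    lines = [
        "# EELS Test Results",
        f"- Passed: {outcomes.count('passed')}",
        f"- Failed: {outcomes.count('failed')}",
    ]
    # List each failing test so the report is actionable on its own.
    for r in results["results"]:
        if r["outcome"] == "failed":
            lines.append(f"- FAIL: {r['test']}")
    return "\n".join(lines)

print(summarize(SAMPLE))
```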

## Preconditions

1. **SKILL-000** must have been run (checked via `WORKSPACE_PROFILE.json`).
2. `execution-specs` repo must exist (usually `C:\projects\Scrutor\execution-specs` or similar).
3. Python 3.10+ installed and accessible.
4. Target EVM binary must be built and executable.

## Safety/QA Checks

1. **Binary Verification:** Confirms the `--evm-binary` path exists and the binary responds to a version check.
2. **Repo State:** Confirms the `execution-specs` checkout is present and in a usable state before running tests.
3. **Venv Isolation:** Uses a local `.venv` to avoid system pollution.
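
The binary-verification check might look like the following. This is a sketch, not the skill's actual PowerShell code, and the `--version` flag is an assumption about the target binary's CLI.

```python
import os
import subprocess

def verify_binary(path: str, timeout: float = 10.0) -> bool:
    """Return True if `path` is an existing file that exits cleanly
    when asked for its version.

    The `--version` flag is an assumption; substitute whatever version
    command the target EVM binary actually supports.
    """
    if not os.path.isfile(path):
        return False
    try:
        result = subprocess.run(
            [path, "--version"],
            capture_output=True,  # suppress output; we only need the exit code
            timeout=timeout,
        )
    except (OSError, subprocess.TimeoutExpired):
        # Not executable, wrong architecture, hung process, etc.
        return False
    return result.returncode == 0
```

Failing fast here avoids spinning up the venv and test suite against a binary that cannot run at all.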

## Implementation

See `run_eels_tests.ps1`.

## Integration

```powershell
# Example integration
.\skills\01_run_eels_tests\run_eels_tests.ps1 -EvmBinary ".\bin\Debug\net8.0\ELR.CLI.exe" -TestFilter "tests/shanghai/eip3855_push0"
```

Overview

This skill automates running the Ethereum Execution Layer Specification (EELS) test suite against a local EVM implementation. It sets up an isolated Python virtual environment, installs test dependencies, invokes pytest against a specified EVM binary, and produces human-readable and machine-readable results. Use it to verify compliance, produce repeatable test runs, and collect diagnostics for failures.

How this skill works

The script validates preconditions (workspace profile, execution-specs repo, Python availability, and the target EVM binary), creates or reuses a local .venv, installs the EELS test requirements, and launches pytest with the provided filters against the target executable. It streams console progress, captures pytest output, parses results into a timestamped Markdown summary and an optional JSON results file, and returns a nonzero exit code on test failures. Safety checks include binary verification, repo state checks, and venv isolation.

When to use it

  • Before a release to verify Execution Layer compliance
  • After building an EVM binary to run full or targeted spec tests
  • During CI job debugging when spec regressions appear
  • When collecting forensic test results for audits or bug reports
  • To generate standardized reports for compliance tracking

Best practices

  • Ensure the execution-specs repository is present and up-to-date before running
  • Build the target EVM binary and verify it runs a version check prior to tests
  • Use --test-filter to scope long test runs to relevant features
  • Keep .venv local to the skill directory to avoid contaminating system Python
  • Collect both Markdown and JSON outputs for human review and automated pipelines

Example use cases

  • Run full EELS compliance suite on a nightly build and archive reports
  • Execute targeted spec tests (e.g., Shanghai EIPs) after implementing a feature
  • Automate pre-release compliance checks in a CI pipeline using the script exit code
  • Reproduce a failing spec locally and collect parsed JSON for bug triage
  • Create audit-ready Markdown reports for governance or client review

FAQ

What minimum prerequisites are required?

Python 3.10+ installed, the execution-specs repo available locally, and a built, executable EVM binary path supplied with --evm-binary.

How do I run only specific tests?

Pass a pytest filter via --test-filter (for example -k "add or sub") to limit the executed tests and shorten run time.