
This skill helps you automate integration testing by preparing environments, running tests, and generating actionable reports across components.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill running-integration-tests

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: running-integration-tests
description: |
  Execute integration tests validating component interactions and system integration.
  Use when validating how components, services, and external dependencies interact.
  Trigger with phrases like "run integration tests", "test integration", or "validate component interactions".
  
allowed-tools: Read, Write, Edit, Grep, Glob, Bash(test:integration-*)
version: 1.0.0
author: Jeremy Longshore <[email protected]>
license: MIT
---
# Integration Test Runner

This skill automates integration testing: it prepares the test environment, executes the test suite, analyzes outcomes, and generates actionable reports.

## Prerequisites

Before using this skill, ensure you have:
- Test environment configured and accessible
- Required testing tools and frameworks installed
- Test data and fixtures prepared
- Appropriate permissions for test execution
- Network connectivity if testing external services

## Instructions

### Step 1: Prepare Test Environment
Set up the testing context:
1. Use Read tool to examine configuration from {baseDir}/config/
2. Validate test prerequisites are met
3. Initialize test framework and load dependencies
4. Configure test parameters and thresholds
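The environment checks above can be sketched as a small preflight function. This is a minimal sketch, not the skill's actual implementation; the config filename `integration.json` and the `required_env` key are assumptions about the layout under `{baseDir}/config/`.

```python
import json
import os
from pathlib import Path

def validate_prerequisites(base_dir: str) -> list[str]:
    """Return a list of problems; an empty list means the environment is ready."""
    problems = []
    # Assumed config location: {baseDir}/config/integration.json
    config_path = Path(base_dir) / "config" / "integration.json"
    if not config_path.is_file():
        problems.append(f"missing config file: {config_path}")
        return problems
    config = json.loads(config_path.read_text())
    # Assumed config key listing environment variables the tests depend on
    for var in config.get("required_env", []):
        if var not in os.environ:
            problems.append(f"missing environment variable: {var}")
    return problems
```

Running this before the suite gives a clear, actionable failure list instead of cryptic mid-run errors.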

### Step 2: Execute Tests
Run the test suite:
1. Use Bash(test:integration-*) to invoke the test framework
2. Monitor test execution progress
3. Capture test outputs and metrics
4. Handle test failures and error conditions
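One way to invoke a test command while capturing output and handling timeouts (step 4) is a thin subprocess wrapper. This is an illustrative sketch, not the skill's actual runner; the command and timeout default are placeholders.

```python
import subprocess

def run_suite(command: list[str], timeout: int = 600) -> dict:
    """Run one integration test command, capturing output and exit status."""
    try:
        proc = subprocess.run(command, capture_output=True, text=True, timeout=timeout)
        return {"command": command, "exit_code": proc.returncode,
                "stdout": proc.stdout, "stderr": proc.stderr, "timed_out": False}
    except subprocess.TimeoutExpired as exc:
        # Preserve whatever output was captured before the timeout
        return {"command": command, "exit_code": None,
                "stdout": exc.stdout or "", "stderr": exc.stderr or "",
                "timed_out": True}
```

Returning a structured dict (rather than letting exceptions propagate) makes it easy to aggregate results across many suites in the analysis step.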

### Step 3: Analyze Results
Process test outcomes:
- Identify passed and failed tests
- Calculate success rate and performance metrics
- Detect patterns in failures
- Generate insights for improvement
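The pass/fail aggregation above can be expressed as a small summary function. This assumes each test result carries a `status` field of `pass`, `fail`, or `skip`; the field names are illustrative, not a defined schema.

```python
def summarize(results: list[dict]) -> dict:
    """Aggregate per-test outcomes into pass/fail/skip counts and a success rate."""
    counts = {"pass": 0, "fail": 0, "skip": 0}
    for r in results:
        counts[r["status"]] += 1
    # Skipped tests are excluded from the success-rate denominator
    executed = counts["pass"] + counts["fail"]
    rate = counts["pass"] / executed if executed else 0.0
    return {**counts, "total": len(results), "success_rate": round(rate, 4)}
```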

### Step 4: Generate Report
Document findings in {baseDir}/test-reports/:
- Test execution summary
- Detailed failure analysis
- Performance benchmarks
- Recommendations for fixes
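A report writer for the items above might look like the following. This is a hedged sketch: the `summary.md` filename, and the `name`/`error` keys on failure records, are assumptions, not part of the skill's defined output format.

```python
from pathlib import Path

def write_report(base_dir: str, summary: dict, failures: list[dict]) -> Path:
    """Write a markdown test report under {base_dir}/test-reports/ and return its path."""
    report_dir = Path(base_dir) / "test-reports"
    report_dir.mkdir(parents=True, exist_ok=True)
    lines = ["# Integration Test Report", "",
             f"- Total: {summary['total']}",
             f"- Passed: {summary['pass']}",
             f"- Failed: {summary['fail']}",
             f"- Success rate: {summary['success_rate']:.1%}", ""]
    if failures:
        lines.append("## Failures")
        for f in failures:
            lines.append(f"- **{f['name']}**: {f['error']}")
    path = report_dir / "summary.md"
    path.write_text("\n".join(lines))
    return path
```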

## Output

The skill generates comprehensive test results:

### Test Summary
- Total tests executed
- Pass/fail counts and percentage
- Execution time metrics
- Resource utilization stats

### Detailed Results
Each test includes:
- Test name and identifier
- Execution status (pass/fail/skip)
- Actual vs. expected outcomes
- Error messages and stack traces

### Metrics and Analysis
- Code coverage percentages
- Performance benchmarks
- Trend analysis across runs
- Quality gate compliance status

## Error Handling

Common issues and solutions:

**Environment Setup Failures**
- Error: Test environment not properly configured
- Solution: Verify configuration files; check environment variables; ensure dependencies are installed

**Test Execution Timeouts**
- Error: Tests exceeded maximum execution time
- Solution: Increase timeout thresholds; optimize slow tests; parallelize test execution

**Resource Exhaustion**
- Error: Insufficient memory or disk space during testing
- Solution: Clean up temporary files; reduce concurrent test workers; increase resource allocation

**Dependency Issues**
- Error: Required services or databases unavailable
- Solution: Verify service health; check network connectivity; use mocks if services are down

## Resources

### Testing Tools
- Industry-standard testing frameworks for your language/platform
- CI/CD integration guides and plugins
- Test automation best practices documentation

### Best Practices
- Maintain test isolation and independence
- Use meaningful test names and descriptions
- Keep tests fast and focused
- Implement proper setup and teardown
- Version control test artifacts
- Run tests in CI/CD pipelines
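The isolation and setup/teardown practices above can be illustrated with a context-manager fixture. This is a hypothetical example using SQLite for a throwaway per-test database; it is not part of the skill itself.

```python
import sqlite3
import tempfile
from contextlib import contextmanager

@contextmanager
def isolated_db():
    """Give each test its own throwaway SQLite database (hypothetical fixture)."""
    with tempfile.NamedTemporaryFile(suffix=".db") as tmp:
        conn = sqlite3.connect(tmp.name)
        try:
            conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
            yield conn
        finally:
            conn.close()  # teardown runs even if the test body raises
```

Because each test gets a fresh database, tests stay independent and can run in any order or in parallel.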

Overview

This skill runs and manages integration tests that validate component interactions and system-level behavior. It automates environment checks, test execution, result aggregation, and report generation. Use it to verify that services, databases, and external integrations work together as expected before release.

How this skill works

The skill inspects test environment configuration, verifies prerequisites, and initializes the chosen test framework. It invokes integration test suites (e.g., via bash test runners), monitors execution, captures logs and metrics, and aggregates pass/fail outcomes. Finally it analyzes results, computes success rates and performance metrics, and writes detailed reports to the configured test-reports directory.

When to use it

  • Before merging a release that spans multiple services or components
  • When validating end-to-end workflows after infrastructure changes
  • During CI/CD pipelines to enforce integration quality gates
  • After upgrading shared libraries, databases, or external APIs
  • When debugging intermittent failures that involve multiple systems

Best practices

  • Ensure the test environment mirrors production for realistic interactions
  • Keep tests isolated, deterministic, and fast; mock flaky external dependencies
  • Prepare clear setup/teardown fixtures and required test data
  • Run tests in parallel where safe and increase timeouts for slow components
  • Capture detailed logs, metrics, and stack traces for failed tests

Example use cases

  • Run integration tests after deploying a new microservice to verify API contracts with downstream services
  • Validate database migration scripts by executing end-to-end tests against a staging database
  • Trigger integration test suites in CI to block merges when the integration success rate drops
  • Diagnose cascading failures by rerunning targeted integration tests with increased logging
  • Generate a comprehensive test report showing pass/fail counts, execution time, and resource usage

FAQ

What prerequisites are required before running integration tests?

Ensure the test environment is configured and reachable, dependencies and test frameworks are installed, test data is prepared, and network access to required services is available.

Where are test results and reports stored?

Reports and detailed results are written to the configured test-reports directory, including summaries, failure analysis, and performance metrics.