
This skill helps you detect flaky tests, diagnose their causes, and generate configurations and guidance for reliable test automation.

`npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill flaky-test-detector`

---
name: "flaky-test-detector"
description: |
  Detect flaky tests and help remediate them. Auto-activating skill for Test Automation.
  Part of the Test Automation skill category. Use when writing or running tests.
  Trigger with phrases like "flaky test detector", "flaky detector", "flaky".
allowed-tools: "Read, Write, Edit, Bash(cmd:*), Grep"
version: 1.0.0
license: MIT
author: "Jeremy Longshore <[email protected]>"
---

# Flaky Test Detector

## Overview

This skill provides automated assistance for flaky test detection tasks within the Test Automation domain.

## When to Use

This skill activates automatically when you:
- Mention "flaky test detector" in your request
- Ask about flaky test detector patterns or best practices
- Need help with test automation topics such as unit testing, integration testing, mocking, or test framework configuration

## Instructions

1. Provides step-by-step guidance for detecting and remediating flaky tests
2. Follows industry best practices and patterns
3. Generates production-ready code and configurations
4. Validates outputs against common standards

## Examples

**Example: Basic Usage**
Request: "Help me with flaky test detector"
Result: Provides step-by-step guidance and generates appropriate configurations


## Prerequisites

- Relevant development environment configured
- Access to necessary tools and services
- Basic understanding of test automation concepts


## Output

- Generated configurations and code
- Best practice recommendations
- Validation results


## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |


## Resources

- Official documentation for related tools
- Best practices guides
- Community examples and tutorials

## Related Skills

Part of the **Test Automation** skill category.
Tags: testing, jest, pytest, mocking, tdd

## Overview

This skill detects and helps remediate flaky tests in automated test suites. It provides actionable diagnostics, root-cause hints, and generates configuration or code fixes to reduce non-deterministic failures. Use it during test development, CI troubleshooting, or when triaging intermittent test failures.

## How this skill works

The skill analyzes test runs, failure patterns, and environment signals to identify flakiness symptoms such as order-dependence, timing/race conditions, external-service instability, or resource leaks. It suggests reproducible isolation steps, targeted assertions, retries, mocking, or fixture changes and can produce sample patch code and CI configuration adjustments. It validates suggested changes against common testing standards and highlights residual risk factors.
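One way to flag the inconsistent pass/fail histories described above is to score each test by how often its result flips across consecutive runs. A minimal sketch, assuming run results are available as chronological dicts of test name to pass/fail (the record format and the `flakiness_scores` name are our own, not part of the skill):

```python
from collections import defaultdict

def flakiness_scores(runs):
    """Score each test by the fraction of consecutive runs where its
    result flipped (0.0 = fully stable, 1.0 = alternates every run).

    `runs` is a list of dicts mapping test name -> bool (passed),
    ordered oldest to newest.
    """
    histories = defaultdict(list)
    for run in runs:
        for test, passed in run.items():
            histories[test].append(passed)

    scores = {}
    for test, results in histories.items():
        if len(results) < 2:
            scores[test] = 0.0  # not enough data to judge
            continue
        flips = sum(a != b for a, b in zip(results, results[1:]))
        scores[test] = flips / (len(results) - 1)
    return scores
```

A test that consistently passes or consistently fails scores 0.0; anything in between is a candidate for the triage steps above. Real tooling would also weight recent runs and correlate with environment metadata.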

## When to use it

- You notice intermittent test failures in CI or local runs
- You want to reduce noise from non-deterministic tests before a release
- You are writing or refactoring tests and want to enforce stability patterns
- You are configuring CI retries, parallelism, or test isolation settings
- You are investigating a sudden increase in test failure rates

## Best practices

- Reproduce flaky failures locally or in a controlled environment before broad changes
- Prefer isolation and deterministic fixtures over excessive retries
- Mock or stub external dependencies when timing or network issues cause flakiness
- Add focused instrumentation and logs to capture timing, resource, and order information
- Run suspected flaky tests with varied ordering and concurrency to surface hidden dependencies

## Example use cases

- Diagnose an intermittent integration test that fails under heavy CI load and get a suggested fix (timeouts, mocking, resource limits)
- Generate example pytest/jest code changes to improve isolation and remove shared state between tests
- Recommend CI configuration adjustments: parallelism limits, test sharding, conditional retries, and artifacts to capture on failure
- Create a minimal reproducible test case and a suggested patch to replace flaky timing assertions with event-based waits
- Produce a checklist for triaging flaky tests including environment capture, seed-based reordering, and dependency stubbing
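The "event-based waits" use case above replaces a fixed `time.sleep(N)` followed by an assertion, which fails whenever the machine is slower than expected, with polling for the actual condition. A minimal sketch (the `wait_for` helper name is our own):

```python
import threading
import time

def wait_for(condition, timeout=5.0, interval=0.05):
    """Poll `condition` until it returns True or `timeout` seconds pass.

    Unlike a fixed sleep, this succeeds as soon as the event occurs
    and fails only on a genuine timeout, so slow CI machines do not
    turn a passing test into a flaky one.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

def test_worker_sets_flag():
    # Instead of `time.sleep(0.5); assert done.is_set()`,
    # wait for the event itself with a generous timeout.
    done = threading.Event()
    threading.Timer(0.1, done.set).start()
    assert wait_for(done.is_set, timeout=2.0)
```

Most test frameworks and browser-automation tools ship an equivalent (e.g. explicit waits); prefer those when available rather than hand-rolling a poller.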

## FAQ

**Can this skill automatically fix flaky tests?**

It generates targeted code and configuration suggestions and can produce ready-to-review patches, but human review and testing are recommended before merging.

**How does it detect flakiness in large test suites?**

It looks for failure patterns across multiple runs, correlates timing and environment metadata, and flags tests with inconsistent pass/fail histories or environment-sensitive behavior.
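Correlating failures with environment metadata, as described above, can start as simply as grouping failure rates by an environment attribute such as the CI runner. A sketch under assumed record fields (`passed`, `runner`); real pipelines would pull these from CI job metadata:

```python
from collections import defaultdict

def failure_rate_by_env(records, key="runner"):
    """Group test results by an environment attribute and compute the
    failure rate per group, to surface environment-sensitive tests.

    `records` is an iterable of dicts such as
    {"test": "t1", "passed": False, "runner": "linux-large"}.
    """
    totals = defaultdict(lambda: [0, 0])  # env value -> [failures, runs]
    for record in records:
        bucket = totals[record[key]]
        bucket[1] += 1
        if not record["passed"]:
            bucket[0] += 1
    return {env: fails / runs for env, (fails, runs) in totals.items()}
```

A test that fails mostly on one runner class or OS is a strong hint that the flakiness is environmental (resources, timing, missing dependencies) rather than logical.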