
This skill helps you automate and optimize network latency testing by providing step-by-step guidance, configurations, and validation aligned with performance testing best practices.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill network-latency-tester

---
name: "network-latency-tester"
description: |
  Automate network latency testing operations. Auto-activating skill for Performance Testing.
  Part of the Performance Testing skill category. Use when writing or running latency tests.
  Triggers on phrases like "network latency tester", "network tester", "network".
allowed-tools: "Read, Write, Edit, Bash(cmd:*)"
version: 1.0.0
license: MIT
author: "Jeremy Longshore <[email protected]>"
---

# Network Latency Tester

## Overview

This skill provides automated assistance for network latency testing tasks within the Performance Testing domain.

## When to Use

This skill activates automatically when you:
- Mention "network latency tester" in your request
- Ask about network latency testing patterns or best practices
- Need help with performance testing tasks such as load testing, stress testing, benchmarking, or performance monitoring

## Instructions

1. Provides step-by-step guidance for network latency testing
2. Follows industry best practices and patterns
3. Generates production-ready code and configurations
4. Validates outputs against common standards

## Examples

**Example: Basic Usage**
Request: "Help me with network latency tester"
Result: Provides step-by-step guidance and generates appropriate configurations


## Prerequisites

- Relevant development environment configured
- Access to necessary tools and services
- Basic understanding of performance testing concepts


## Output

- Generated configurations and code
- Best practice recommendations
- Validation results


## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |


## Resources

- Official documentation for related tools
- Best practices guides
- Community examples and tutorials

## Related Skills

Part of the **Performance Testing** skill category.
Tags: performance, load-testing, k6, jmeter, benchmarking

## Overview

This skill automates guidance and artifacts for network latency tester tasks in performance testing. It helps write tests, generate configurations, and validate results so you can measure latency, spot regressions, and tune networks. It is designed to integrate with common tools and produce production-ready examples.

## How this skill works

The skill inspects your request for network latency testing intent and auto-activates when you mention keywords like "network latency tester" or "network tester." It then provides step-by-step guidance, generates test scripts and configuration files, and validates outputs against common standards. It also suggests diagnostics and remediation steps for common errors.

## When to use it

- Designing or running latency-focused performance tests
- Generating k6, JMeter, or custom Python test scripts for latency measurements
- Troubleshooting unexpected latency or jitter in CI/CD pipelines
- Creating repeatable benchmark scenarios for network changes
- Validating test configurations and interpreting latency metrics

## Best practices

- Define clear SLAs and latency percentiles (p50, p95, p99) before testing
- Isolate variables: test network changes independently of application load
- Use multiple geographic locations and realistic client distributions
- Run warm-up phases to avoid cold-start artifacts and stabilize measurements
- Automate test runs in CI with artifacted metrics and baseline comparisons
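The warm-up and percentile practices above can be sketched in a few lines of Python. This is a minimal illustration, not part of the skill itself: it discards an assumed number of cold-start samples, then computes nearest-rank p50/p95/p99 over the steady-state measurements.

```python
def latency_percentiles(samples_ms, warmup=5):
    """Compute p50/p95/p99 latency from raw samples (in ms).

    The first `warmup` samples are discarded to avoid cold-start
    artifacts, matching the warm-up best practice above.
    """
    steady = sorted(samples_ms[warmup:])

    def pct(p):
        # Nearest-rank percentile over the sorted steady-state samples.
        idx = max(0, min(len(steady) - 1, round(p / 100 * len(steady)) - 1))
        return steady[idx]

    return {"p50": pct(50), "p95": pct(95), "p99": pct(99)}

# Illustrative data: five inflated cold-start samples, then 100 steady ones.
samples = [250, 240, 230, 220, 210] + list(range(10, 110))
print(latency_percentiles(samples))  # {'p50': 59, 'p95': 104, 'p99': 108}
```

Dropping the warm-up window matters: with the cold-start samples included, the tail percentiles would be dominated by startup effects rather than steady-state network behavior.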

## Example use cases

- Generate a k6 script that measures p95 request latency across three regions
- Create a JMeter test plan to emulate 1,000 concurrent clients with latency reporting
- Validate a network policy change by comparing baseline and post-change latency histograms
- Produce a Python script that runs synthetic ping and HTTP latency checks and stores results
- Automate a CI job that fails on latency regressions beyond a defined threshold
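The last use case, a CI gate on latency regressions, reduces to a single comparison. The sketch below is an assumption about how such a gate might look; the threshold and values are illustrative, and a real CI wrapper would exit non-zero when the check trips.

```python
def latency_regressed(baseline_p95_ms, current_p95_ms, max_increase_pct=10.0):
    """Return True when the current p95 latency exceeds the baseline
    by more than the allowed percentage -- the condition a CI job
    would fail the build on."""
    allowed = baseline_p95_ms * (1 + max_increase_pct / 100)
    return current_p95_ms > allowed

print(latency_regressed(120.0, 135.0))  # 135 > 132 -> True (regression)
print(latency_regressed(120.0, 130.0))  # 130 <= 132 -> False (within budget)
```

Comparing against a percentage budget rather than an absolute number keeps the gate meaningful as the baseline itself drifts between releases.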

## FAQ

**What inputs do you need to generate a test?**

Provide target endpoints, desired concurrency, test duration, regions, and SLA thresholds; optional auth and headers.
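Those inputs could be collected in a simple specification. The structure below is purely hypothetical; every field name is illustrative, not a fixed schema required by the skill.

```python
# Hypothetical test specification; field names are illustrative only.
test_spec = {
    "targets": ["https://api.example.com/health"],  # endpoints under test
    "concurrency": 50,                              # virtual clients
    "duration_s": 300,                              # test length in seconds
    "regions": ["us-east-1", "eu-west-1"],          # client locations
    "sla": {"p95_ms": 200, "p99_ms": 400},          # latency thresholds
    "headers": {"Accept": "application/json"},      # optional auth/headers
}
print(sorted(test_spec))
```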

**Which tools are supported for generated artifacts?**

Common formats include k6, JMeter, and Python scripts; outputs can be adapted to your observability stack.