
go-testing skill


This skill helps you write robust Go tests by applying current best practices for organisation, table-driven design, concurrency, and fixtures.

npx playbooks add skill sammcj/agentic-coding --skill go-testing


SKILL.md
---
name: writing-go-tests
description: Applies current Go testing best practices. Use when writing or modifying Go test files or advising on Go testing strategies.
---

# Go Testing Best Practices

This skill provides actionable testing guidelines. For detailed implementation patterns, code examples, rationale, and production system references, consult `go-testing-best-practices.md`.

## When Working with Go Tests

**Always apply these current best practices:**

### 1. Test Organisation
- Place test files alongside source code using `*_test.go` naming
- Use internal tests (same package) for unit testing unexported functions
- Use external tests (`package foo_test`) for integration testing and examples
- Split test files by functionality when they exceed 500-800 lines (e.g., `handler_auth_test.go`, `handler_validation_test.go`)

### 2. Table-Driven Testing
- **Prefer map-based tables over slice-based** for automatic unique test names
- Use descriptive test case names that appear in failure output
- See detailed guide for complete pattern and examples

### 3. Concurrent Testing
- **Use `testing/synctest` for deterministic concurrent testing** (experimental behind `GOEXPERIMENT=synctest` in Go 1.24; stable since Go 1.25)
- This eliminates flaky time-based tests and runs in microseconds instead of seconds
- For traditional parallel tests, always call `t.Parallel()` first in test functions

### 4. Assertions and Comparisons
- Use `cmp.Diff()` from `google/go-cmp` for complex comparisons
- Standard library is sufficient for simple tests
- Testify is the dominant third-party framework when richer assertions are needed

### 5. Mocking and Test Doubles
- **Favour integration testing with real dependencies** over heavy mocking
- Use Testcontainers for database/service integration tests
- When mocking is necessary, prefer simple function-based test doubles over code generation
- Use interface-based design ("accept interfaces, return structs")

### 6. Coverage Targets
- Aim for **70-80% coverage as a practical target**
- Focus on meaningful tests over percentage metrics
- Use `go test -cover` and `go tool cover -html` for analysis

### 7. Test Fixtures
- Use `testdata` directory for test fixtures (automatically ignored by Go toolchain)
- Implement golden file testing for validating complex output
- Use functional builder patterns for complex test data

### 8. Helpers and Cleanup
- **Always mark helper functions with `t.Helper()`** for accurate error reporting
- Use `t.Cleanup()` for resource cleanup (superior to defer in tests)

### 9. Benchmarking (Go 1.24+)
- **Use `B.Loop()` method** as the preferred pattern (prevents compiler optimisations)
- Combine with `benchstat` for statistical analysis
- Use `-benchmem` for memory profiling

### 10. Naming Conventions
- Test functions: `Test*`, `Benchmark*`, `Fuzz*`, `Example*` (capital letter after prefix)
- Use `got` and `want` for actual vs expected values
- Use descriptive test case names in table-driven tests

## Integration vs Unit Testing

- **Separate tests by environment variable** (preferred over build tags)
- See detailed guide for implementation pattern

## Additional Reference Material

**Load `go-testing-best-practices.md` when you need:**
- Complete code examples for table-driven tests, mocking patterns, golden files, helpers, or benchmarks
- Detailed explanation of testing/synctest concurrent testing patterns
- Rationale behind why specific patterns are preferred over alternatives
- Production system examples and statistics (Kubernetes, Docker, Uber, Netflix, ByteDance)
- Context on testing framework choices (Testify, GoMock, Testcontainers)
- Comprehensive coverage strategies and tooling details
- Integration testing patterns with containerisation

**The detailed guide contains full context, examples with explanations, and production-proven patterns. This SKILL.md provides the actionable rules to apply.**

## Key Principle

**Focus on meaningful tests that validate behaviour rather than implementation.** Pragmatic excellence over theoretical perfection.

Overview

This skill applies current Go testing best practices to help you write, review, or refactor Go test files. It focuses on organisation, deterministic concurrency, meaningful assertions, and pragmatic trade-offs to keep tests fast, reliable, and maintainable. Use it when creating new tests or modernising existing test suites.

How this skill works

The skill inspects test structure, naming, and patterns and recommends concrete changes: table-driven tests with descriptive names, map-based tables for unique cases, use of testing/synctest for deterministic concurrency, and preferred comparison and mocking strategies. It highlights file organisation, helper usage, fixture placement, benchmarking patterns, and coverage targets so tests behave predictably in CI and locally.

When to use it

  • Writing new unit or integration tests for Go packages
  • Refactoring or tidying large *_test.go files
  • Converting flaky concurrent tests to deterministic patterns
  • Auditing test suites for maintainability or coverage goals
  • Designing benchmarks or golden-file tests for critical logic

Best practices

  • Place *_test.go files next to source; split tests by area when files exceed ~500–800 lines
  • Prefer map-based table-driven tests with descriptive case names for clearer failures
  • Use testing/synctest for deterministic concurrent tests (experimental in Go 1.24, stable since Go 1.25); if using parallel tests call t.Parallel() first
  • Use cmp.Diff for complex comparisons; standard library assertions for simple checks; use Testify when richer assertions are needed
  • Favour integration tests with real dependencies where practical; use Testcontainers for DB/service tests and simple function-based doubles when mocking
  • Mark helpers with t.Helper() and use t.Cleanup() for teardown; keep fixtures in testdata and use golden files for large outputs

Example use cases

  • Converting flaky time-based goroutine tests to testing/synctest for deterministic runs
  • Refactoring a monolithic handlers_test.go into focused files like handler_auth_test.go and handler_validation_test.go
  • Adding table-driven tests using map tables to ensure unique, descriptive test names
  • Replacing brittle equality checks with cmp.Diff and targeted golden-file tests
  • Writing benchmarks using B.Loop() and analysing results with benchstat

FAQ

What coverage should I aim for?

Aim for 70–80% as a practical target; prioritise meaningful tests over raw percentage metrics.

When should I mock vs use real dependencies?

Prefer integration tests with real dependencies for behaviour validation; mock only when integration is impractical, using simple function doubles or interfaces rather than heavy code generation.