
debug-stacktrace skill

/dotclaude/skills/debug-stacktrace

This skill analyzes stack traces and error messages to pinpoint root causes, trace error propagation, and correlate logs for effective debugging.

npx playbooks add skill shotaiuchi/dotclaude --skill debug-stacktrace

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (2.0 KB)
---
name: debug-stacktrace
description: >-
  Stack trace and error message analysis. Apply when investigating exceptions,
  error chains, failure propagation paths, and crash logs to pinpoint failure
  locations.
user-invocable: false
---

# Stack Trace Investigation

Analyze stack traces and error messages to pinpoint the root cause of failures.

## Investigation Checklist

### Exception Chain Analysis
- Identify the root cause exception in nested/chained exceptions
- Check for swallowed exceptions that hide the real failure
- Verify exception types match the actual error condition
- Trace cause-effect relationships through exception wrappers
- Look for generic catch blocks that obscure specific errors
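The chain-walking step above can be sketched in Python, where chained exceptions expose their links via `__cause__` (set by explicit `raise ... from`) and `__context__` (set implicitly when raising inside an `except` block); a minimal, language-specific illustration:

```python
def root_cause(exc: BaseException) -> BaseException:
    """Follow an exception chain to its deepest (root) cause.

    __cause__ is set by `raise ... from e`; __context__ is set
    implicitly when an exception is raised inside an except block.
    """
    seen = set()  # guard against cyclic chains
    while id(exc) not in seen:
        seen.add(id(exc))
        nxt = exc.__cause__ or exc.__context__
        if nxt is None:
            return exc
        exc = nxt
    return exc

# Build a two-level chain: the KeyError is the real root cause,
# hidden behind a RuntimeError wrapper.
try:
    try:
        {}["missing"]
    except KeyError as inner:
        raise RuntimeError("config lookup failed") from inner
except RuntimeError as wrapper:
    caught = wrapper  # keep a reference; `wrapper` is cleared after the block

assert type(root_cause(caught)) is KeyError
```

Checking `__context__` as well as `__cause__` is what catches the implicitly chained exceptions that a wrapper-only search would miss.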

### Error Propagation
- Map the full propagation path from origin to surface
- Check if error context is preserved through rethrows
- Identify where error information is lost or transformed
- Verify error codes and messages remain consistent across layers
- Detect silent failures that produce misleading downstream errors
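Whether context survives a rethrow is directly testable. In Python, for example, explicit chaining with `raise ... from` keeps the originating error attached to the surfaced one, and the rendered trace shows both layers (a minimal sketch; the path used here is just a nonexistent file):

```python
import traceback

def load(path):
    try:
        open(path)
    except OSError as e:
        # Explicit chaining: the original cause stays attached to the wrapper.
        raise RuntimeError(f"cannot load {path}") from e

try:
    load("/no/such/file")
except RuntimeError as err:
    rendered = "".join(
        traceback.format_exception(type(err), err, err.__traceback__)
    )

# Both layers appear, linked by the "direct cause" marker line.
assert "FileNotFoundError" in rendered
assert "The above exception was the direct cause" in rendered
```

A bare `raise RuntimeError(...)` without `from e` would still record `__context__` here, but dropping the caught exception entirely (a swallowed exception) is what produces the misleading downstream errors this checklist targets.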

### Stack Frame Inspection
- Locate the exact frame where the failure originates
- Distinguish application code frames from library/framework frames
- Check for missing frames due to inlining or tail-call optimization
- Identify async boundaries that split the logical call path
- Correlate frame variables with expected state at each level
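Frame inspection can be mechanized too. This Python sketch extracts the frames from a traceback, locates the originating (innermost) frame, and applies a simple path heuristic to separate application frames from library frames (the `site-packages` check is an assumption about the deployment layout, not a universal rule):

```python
import traceback

def failing():
    return 1 / 0  # the frame where the failure originates

def caller():
    return failing()

try:
    caller()
except ZeroDivisionError as e:
    frames = traceback.extract_tb(e.__traceback__)

# The innermost frame is last in the list; that is the origin.
origin = frames[-1]
assert origin.name == "failing"

# Heuristic split: third-party frames usually live under site-packages.
app = [f for f in frames if "site-packages" not in f.filename]
assert [f.name for f in app][-2:] == ["caller", "failing"]
```

Note that inlining, tail-call optimization, and async scheduling can all remove or split frames, so an "origin" found this way should still be sanity-checked against the source.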

### Log Correlation
- Match stack traces with surrounding log entries by timestamp
- Identify preceding warnings or errors that indicate preconditions
- Cross-reference thread/request IDs across distributed components
- Check for log level filtering that may hide relevant context
- Reconstruct the timeline of events leading to the failure
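The correlation step can be sketched as a filter over log lines that share a request ID within a time window around the failure. The log format and field names below are hypothetical; real systems vary:

```python
import re
from datetime import datetime, timedelta

# Hypothetical log excerpt: timestamp, level, request ID, message.
LOG = """\
2024-05-01T12:00:01Z WARN  req=42 pool nearing capacity
2024-05-01T12:00:02Z INFO  req=17 request served
2024-05-01T12:00:03Z ERROR req=42 connection refused
"""

def correlate(log: str, request_id: str, around: datetime, window: timedelta):
    """Return log lines sharing the request ID within a time window."""
    hits = []
    for line in log.splitlines():
        m = re.match(r"(\S+) (\w+)\s+req=(\d+) (.*)", line)
        if not m:
            continue
        ts = datetime.fromisoformat(m.group(1).replace("Z", "+00:00"))
        if m.group(3) == request_id and abs(ts - around) <= window:
            hits.append(line)
    return hits

crash_time = datetime.fromisoformat("2024-05-01T12:00:03+00:00")
matches = correlate(LOG, "42", crash_time, timedelta(seconds=5))
# The preceding WARN for the same request is a likely precondition.
assert len(matches) == 2 and "WARN" in matches[0]
```

The same shape works for distributed traces: swap the request ID for a trace or span ID and widen the window to cover clock skew between components.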

## Output Format

Report findings with confidence ratings:

| Confidence | Description |
|------------|-------------|
| High | Root cause clearly identified with supporting evidence |
| Medium | Probable cause identified but needs verification |
| Low | Hypothesis formed but insufficient evidence |
| Inconclusive | Unable to determine from available information |

Overview

This skill analyzes stack traces and error messages to pinpoint root causes of exceptions and crashes. It guides investigators through exception chains, propagation paths, stack frame inspection, and log correlation to produce actionable findings. Reports include confidence ratings and concrete remediation suggestions.

How this skill works

The skill inspects nested and chained exceptions to find the originating error, checks for swallowed or transformed exceptions, and maps the propagation path across layers. It examines stack frames to distinguish application code from framework/library frames, notes async or optimized-frame gaps, and correlates traces with logs and timestamps to reconstruct the failure timeline.

When to use it

  • Investigating runtime crashes, uncaught exceptions, or fatal error events
  • Diagnosing production incidents where stack traces appear in logs or crash reports
  • Triaging chained exceptions to reveal the original failure source
  • Validating whether error context is preserved across service boundaries
  • Analyzing distributed traces to correlate failures across components

Best practices

  • Start by locating the earliest non-library frame that shows the failing operation
  • Follow the cause chain to the deepest exception and verify that each wrapper preserves the original error
  • Cross-reference the trace's timestamp with nearby log entries and request IDs
  • Look for generic catch blocks, swallowed exceptions, and missing context fields
  • Annotate async/worker boundaries and consider inlining/optimization that may hide frames

Example use cases

  • A web service returns 500 errors; use the skill to trace the exception from the controller to the underlying database call
  • A background job crashes intermittently; correlate stack frames with job logs and payload state to reproduce the failure
  • A mobile crash report contains obfuscated frames; identify app frames and infer the failing module
  • A distributed call returns misleading downstream errors; map propagation to find where context or error codes are lost

FAQ

What does a 'High' confidence rating mean?

High means the root cause is clearly identified and supported by stack frames, exception messages, and correlated logs.

What if the stack trace is incomplete or obfuscated?

Mark findings as Medium or Low; attempt log correlation, symbolication, or reproduction with debug builds to strengthen the evidence.