
This skill tunes Sentry performance monitoring by adjusting sampling, reducing SDK overhead, and standardizing transaction naming for better data quality.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill sentry-performance-tuning

Review the files below or copy the command above to add this skill to your agents.

SKILL.md
---
name: sentry-performance-tuning
description: |
  Optimize Sentry performance monitoring configuration.
  Use when tuning sample rates, reducing overhead,
  or improving performance data quality.
  Trigger with phrases like "sentry performance optimize", "tune sentry tracing",
  "sentry overhead", "improve sentry performance".
allowed-tools: Read, Write, Edit, Grep
version: 1.0.0
license: MIT
author: Jeremy Longshore <[email protected]>
---

# Sentry Performance Tuning

## Prerequisites

- Performance monitoring enabled
- Transaction volume metrics available
- Critical paths identified
- Performance baseline established

## Instructions

1. Implement dynamic sampling with tracesSampler for endpoint-specific rates
2. Configure environment-based sample rates (higher in dev, lower in prod)
3. Remove unused integrations to reduce SDK overhead
4. Limit breadcrumbs to reduce memory usage
5. Use parameterized transaction names to avoid cardinality explosion
6. Create spans only for meaningful slow operations
7. Configure profile sampling sparingly for performance-critical endpoints
8. Measure SDK initialization time and ongoing overhead
9. Implement high-volume optimization with aggressive filtering
10. Monitor SDK performance metrics and adjust configuration
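Steps 1–2 can be sketched as a `tracesSampler` factory for the Sentry JavaScript SDK. The route names and rates below are illustrative assumptions, not values prescribed by this skill, and the exact shape of the sampling context varies between SDK versions:

```javascript
// Sketch of a dynamic tracesSampler for the Sentry JavaScript SDK.
// Route names and rates are illustrative; tune them to your traffic.
// Note: the sampling context's field names differ across SDK versions.
function makeTracesSampler(environment) {
  return function tracesSampler(samplingContext) {
    const name = samplingContext.name || "";

    // Respect an upstream sampling decision when one exists.
    if (samplingContext.parentSampled !== undefined) {
      return samplingContext.parentSampled;
    }

    // Never sample health checks.
    if (name.includes("/health")) return 0;

    // Keep critical flows well sampled (hypothetical checkout route).
    if (name.startsWith("GET /checkout")) return 0.5;

    // Environment-based defaults: generous in dev, conservative in prod.
    return environment === "production" ? 0.01 : 1.0;
  };
}

// Usage sketch:
// Sentry.init({ dsn: "...", tracesSampler: makeTracesSampler(process.env.NODE_ENV) });
```

Returning the parent's decision first keeps distributed traces consistent; the endpoint rules then override the environment default only where it matters.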

## Output
- Optimized sample rates configured
- SDK overhead minimized
- Transaction naming standardized
- Resource usage reduced

## Error Handling

See `{baseDir}/references/errors.md` for comprehensive error handling.

## Examples

See `{baseDir}/references/examples.md` for detailed examples.

## Resources
- [Sentry Performance](https://docs.sentry.io/product/performance/)
- [Sampling Strategies](https://docs.sentry.io/platforms/javascript/configuration/sampling/)

Overview

This skill optimizes Sentry performance monitoring configuration to reduce overhead and improve the quality of tracing data. It focuses on sampling, transaction naming, span selection, and SDK footprint so you collect the right data without degrading application performance. Use it to create a measurable, low-overhead performance monitoring setup.

How this skill works

The skill inspects current Sentry SDK settings, transaction naming, and sampling logic to identify sources of high volume or high cardinality. It recommends and applies targeted changes: dynamic tracesSampler rules, environment-specific rates, integration pruning, breadcrumb limits, and selective span creation. It also monitors SDK initialization and runtime overhead so you can iterate on configuration.

When to use it

  • Tuning sample rates after an increase in traffic or cost
  • Reducing SDK overhead when CPU/memory budgets are tight
  • Improving trace data quality by reducing cardinality
  • Preparing production rollout with conservative sampling
  • Diagnosing noisy or expensive transactions causing alerts

Best practices

  • Implement tracesSampler with endpoint- or route-specific logic to keep important flows sampled
  • Use lower sample rates in production and higher rates in staging/dev for debugging
  • Parameterize transaction names (avoid user IDs or raw query strings) to limit cardinality
  • Create spans only for meaningful slow operations; avoid instrumenting very high-frequency micro-operations
  • Prune unused integrations and limit breadcrumbs to essential events to reduce memory and CPU use
  • Measure SDK init time and continuous overhead; automate adjustments based on SDK performance metrics
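One way to apply the overhead-related practices above is to centralize the SDK options in a small builder. The option names (`tracesSampleRate`, `profilesSampleRate`, `maxBreadcrumbs`, `beforeBreadcrumb`) match the Sentry JavaScript SDK; the specific limits are assumed starting points, not recommendations from this skill:

```javascript
// Sketch: build Sentry init options tuned for low overhead.
// The numeric limits are assumed starting points to tune per application.
function buildSentryOptions(environment) {
  const isProd = environment === "production";
  return {
    environment,
    // Lower trace sampling in production, full sampling elsewhere.
    tracesSampleRate: isProd ? 0.01 : 1.0,
    // Profile only a small slice of sampled transactions in production.
    profilesSampleRate: isProd ? 0.05 : 0,
    // Cap breadcrumbs to bound memory use (the SDK default is 100).
    maxBreadcrumbs: 30,
    // Drop noisy breadcrumb categories instead of storing them.
    beforeBreadcrumb(breadcrumb) {
      return breadcrumb.category === "console" ? null : breadcrumb;
    },
  };
}

// Usage sketch:
// Sentry.init({ dsn: "...", ...buildSentryOptions(process.env.NODE_ENV) });
```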

Example use cases

  • Set up dynamic sampling rules so checkout routes sample at 50% while other routes remain at 1%
  • Switch to parameterized transaction names to reduce unique transaction keys and improve aggregation
  • Remove heavy integrations and cap breadcrumbs to prevent client-side memory spikes
  • Enable sparse profiling only for a handful of high-latency endpoints and monitor the impact
  • Create aggressive filtering for high-volume background jobs to avoid trace floods
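The last use case, aggressive filtering for high-volume background jobs, can be sketched as a rate function plugged into a `tracesSampler`. The job-name prefixes below are hypothetical:

```javascript
// Sketch: drop or heavily downsample traces from high-volume background jobs.
// The prefixes are hypothetical examples of noisy job names.
const NOISY_JOB_PREFIXES = ["queue:heartbeat", "cron:cache-refresh"];

function jobSampleRate(transactionName) {
  if (NOISY_JOB_PREFIXES.some((p) => transactionName.startsWith(p))) {
    return 0; // never trace these
  }
  if (transactionName.startsWith("queue:")) {
    return 0.001; // keep a thin slice of other queue work
  }
  return null; // not a background job; let other sampling rules decide
}
```

Returning `null` for non-job transactions lets a surrounding sampler fall through to its endpoint- or environment-based rules.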

FAQ

Will lowering sample rates make debugging impossible?

Lowering global sample rates reduces noise and cost, but you can keep higher sampling for critical paths and use higher rates in non-production environments to retain debuggability.

How do I avoid cardinality explosion in transactions?

Use parameterized names (route templates or operation types) instead of including user identifiers, full URLs, or long parameters in transaction names.
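A minimal sketch of that parameterization, collapsing numeric IDs and UUIDs in a URL path into placeholders. The regexes are assumptions; prefer the framework's route template (e.g. `/users/:id`) when the router exposes it:

```javascript
// Sketch: parameterize a raw URL path into a low-cardinality transaction name.
// Prefer the router's own route template when available.
const UUID_RE = /[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}/gi;
const NUMERIC_SEGMENT_RE = /\/\d+(?=\/|$)/g;

function parameterizePath(path) {
  const withoutQuery = path.split("?")[0]; // drop raw query strings entirely
  return withoutQuery.replace(UUID_RE, ":uuid").replace(NUMERIC_SEGMENT_RE, "/:id");
}
```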