
logging-strategies skill


This skill helps you design and implement structured, correlated, and redaction-safe logs to improve observability and debugging in production.

npx playbooks add skill omer-metin/skills-for-antigravity --skill logging-strategies

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
2.2 KB
---
name: logging-strategies
description: World-class application logging - structured logs, correlation IDs, log aggregation, and the battle scars from debugging production without proper logs. Use when "log, logging, logger, debug, trace, audit, structured log, correlation id, request id, log level, winston, pino, bunyan, log4j, observability, debugging, monitoring, tracing, structured-logs, correlation, aggregation" is mentioned.
---

# Logging Strategies

## Identity

You are a logging architect who has debugged production incidents by reading logs at 3 AM.
You've seen teams drown in unstructured console.log noise, watched developers leak secrets
to log files, and spent hours correlating requests across microservices without trace IDs.
You know that logs are the archaeological record of your application - useless when unstructured,
invaluable when done right. You've learned that the best logs are written for the person
who will read them at 3 AM during an outage, not for the developer who wrote them.

Your core principles:
1. Structured logs always - JSON, not strings
2. Every request gets a correlation ID - trace it everywhere
3. Redact sensitive data - no passwords, tokens, PII in logs
4. Log levels matter - debug is not the same as error
5. Context is everything - who, what, when, where, why
6. Performance matters - logging shouldn't slow your app
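The first two principles can be sketched together. This is a minimal, dependency-free illustration, not a recommendation over a real logger like Pino or Winston; the field names (`ts`, `level`, `service`, `trace_id`, `msg`, `ctx`) are illustrative, not a required schema:

```javascript
// Minimal structured logger: every entry is one JSON object per line,
// stamped with the service name and a correlation ID.
function createLogger(service, traceId) {
  const emit = (level) => (msg, ctx = {}) => {
    const entry = {
      ts: new Date().toISOString(),
      level,
      service,
      trace_id: traceId,
      msg,
      ctx,
    };
    console.log(JSON.stringify(entry)); // machine-parseable, grep-able by trace_id
    return entry;
  };
  return {
    debug: emit("debug"),
    info: emit("info"),
    warn: emit("warn"),
    error: emit("error"),
  };
}

const log = createLogger("checkout", "req-123");
const entry = log.info("order placed", { order_id: 42 });
```

Because each line is valid JSON, an aggregator can index on `trace_id` without any parsing heuristics.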


## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

Overview

This skill captures world-class application logging practices that make production debugging reliable and fast. It focuses on structured logs, correlation IDs, redaction, appropriate log levels, and performance-aware capture so teams can troubleshoot incidents at 3 AM. The guidance always references the established patterns, failure modes, and validation rules to keep implementations safe and consistent.

How this skill works

The skill inspects logging configuration and code patterns to ensure logs are emitted as structured JSON, include correlation/request IDs, and avoid sensitive data. It flags anti-patterns such as stringified logs, missing trace IDs, excessive debug noise, or synchronous blocking log writes, and recommends concrete fixes grounded in our pattern, sharp-edges, and validation guidance. Outputs are actionable: configuration edits, code snippets to add context, and rules for aggregation and retention.

When to use it

  • Designing or reviewing application logging strategy for services
  • Onboarding observability standards for microservices and distributed tracing
  • Auditing logs for secret leakage, PII, or regulatory exposure
  • Fixing production incidents where request correlation is missing
  • Choosing or configuring logger libraries (Winston, Pino, Bunyan, log4j, etc.)

Best practices

  • Always emit structured JSON logs with consistent field names (timestamp, level, service, trace_id, msg, ctx)
  • Inject and propagate a correlation/request ID for every incoming request and background job
  • Redact or exclude secrets and PII before writing logs; validate against rules in validations.md
  • Use log levels consistently; keep debug for verbose dev info, info for normal ops, warn for recoverable issues, error for failures
  • Batch and write asynchronously to avoid blocking application threads and hurting latency
  • Ship logs to an aggregation system and index by trace_id for cross-service troubleshooting

Example use cases

  • Add middleware to attach a correlation ID to HTTP requests and include it in downstream logs
  • Replace console.log calls with a structured logger and a contextual helper to add user and request fields
  • Scan log storage for leaked tokens or PII and apply automated redaction rules
  • Configure log rotation and retention to balance forensic needs and cost
  • Set up aggregation rules to surface high-severity events and link them to traces and metrics
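The second use case — replacing bare console.log calls with a contextual helper — might look like the following sketch, where the base fields and the `withContext` name are hypothetical:

```javascript
// Contextual logging helper: binds request/user fields once so every
// subsequent entry carries them without repetition at each call site.
function withContext(baseCtx) {
  return (msg, extra = {}) => {
    const entry = {
      ts: new Date().toISOString(),
      level: "info",
      msg,
      ...baseCtx,   // fields fixed for the request's lifetime
      ...extra,     // per-call details
    };
    console.log(JSON.stringify(entry));
    return entry;
  };
}

const logInfo = withContext({ trace_id: "req-7", user_id: "u-3" });
const e = logInfo("cart updated", { items: 2 });
```

Most production loggers offer the same idea as "child loggers" (e.g. `logger.child(...)` in Pino and Bunyan), which is preferable to hand-rolling it.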

FAQ

What fields should every log include?

At minimum include timestamp, level, service, hostname, message, and trace_id/request_id; add user_id, request_path, and error_code where relevant.
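As a concrete shape, one entry carrying those minimum fields might look like this (all values are made up for illustration):

```javascript
// An illustrative log entry with the minimum recommended fields,
// plus the optional contextual fields where relevant.
const example = {
  timestamp: "2024-05-01T03:12:45.001Z",
  level: "error",
  service: "payments",
  hostname: "pay-7f9c",
  message: "charge declined",
  trace_id: "req-9d2e",
  // optional, where relevant:
  user_id: "u-18",
  request_path: "/v1/charges",
  error_code: "card_declined",
};
```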

How do I avoid logging secrets?

Apply validation rules to redact known sensitive keys and patterns before serialization; treat env vars, headers, and request bodies as sensitive by default.
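A minimal recursive redactor sketch follows; the set of sensitive key names is an assumption and should come from your own validation rules rather than this hard-coded list:

```javascript
// Key names treated as sensitive -- ASSUMED list, extend per validations.md.
const SENSITIVE = new Set(["password", "token", "authorization", "api_key", "secret"]);

// Walk objects and arrays, replacing values of sensitive keys before
// the structure is ever serialized to a log sink.
function redact(value) {
  if (Array.isArray(value)) return value.map(redact);
  if (value && typeof value === "object") {
    const out = {};
    for (const [key, v] of Object.entries(value)) {
      out[key] = SENSITIVE.has(key.toLowerCase()) ? "[REDACTED]" : redact(v);
    }
    return out;
  }
  return value;
}
```

Production loggers often build this in (e.g. Pino's `redact` option takes key paths), which avoids maintaining a bespoke walker.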

Can logging impact performance?

Yes—use asynchronous batching, non-blocking sinks, and sample high-volume debug logs to prevent latency and I/O pressure.
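The sampling idea can be sketched deterministically; a counter-based 1-in-N sampler (rather than random sampling) keeps the kept/dropped ratio exact, though the choice between the two is a design trade-off:

```javascript
// Deterministic 1-in-N sampler for high-volume debug logs: keeps the
// 1st, (N+1)th, (2N+1)th, ... call and drops the rest.
function createSampler(every) {
  let count = 0;
  return () => (++count % every) === 1;
}

// Wrap the hot-path debug call so only sampled entries are emitted.
const sampleDebug = createSampler(100);
function debugSampled(log, msg, ctx) {
  if (sampleDebug()) log.debug(msg, ctx);
}
```

Pairing this with an async, batching transport (most loggers provide one) keeps log I/O off the request's critical path.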