
This skill helps implement secure structured logging with proper levels, context, and PII sanitization to improve observability.

npx playbooks add skill secondsky/claude-skills --skill logging-best-practices

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
2.6 KB
---
name: logging-best-practices
description: Structured logging with proper levels, context, PII handling, and centralized aggregation. Use for application logging, log management integration, distributed tracing, or when encountering log bloat, PII exposure, or missing-context errors.
---

# Logging Best Practices

Implement secure, structured logging with proper levels and context.

## Log Levels

| Level | Use For | Production |
|-------|---------|------------|
| DEBUG | Detailed debugging | Off |
| INFO | Normal operations | On |
| WARN | Potential issues | On |
| ERROR | Errors with recovery | On |
| FATAL | Critical failures | On |
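
To illustrate the threshold behaviour, a minimal Winston sketch: with the level set to `info`, DEBUG entries are dropped while INFO and above are emitted (note that Winston's default npm levels do not include FATAL; `error` is its highest severity):

```javascript
const winston = require('winston');

// Threshold at 'info': debug entries are suppressed, info/warn/error pass through.
const logger = winston.createLogger({
  level: 'info',
  transports: [new winston.transports.Console()]
});

logger.debug('cache miss', { key: 'user:123' });   // dropped: below the threshold
logger.info('request completed', { status: 200 });
logger.warn('slow query', { durationMs: 1800 });
logger.error('upstream timeout', { service: 'billing' });
```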

## Structured Logging (Winston)

```javascript
const winston = require('winston');

const logger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    winston.format.timestamp(),
    winston.format.json()
  ),
  defaultMeta: { service: 'api-service' },
  transports: [
    new winston.transports.Console(),
    new winston.transports.File({ filename: 'error.log', level: 'error' })
  ]
});

// Usage
logger.info('User logged in', { userId: '123', ip: '192.168.1.1' });
// Inside a catch (err) block:
logger.error('Payment failed', { error: err.message, orderId: '456' });
```
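
Winston 3 also supports child loggers, which attach fixed metadata to every entry from a subsystem; a minimal sketch, assuming the `logger` configured above (the `payments` module name is illustrative):

```javascript
// Child logger: every entry carries { module: 'payments' } on top of defaultMeta.
const paymentLogger = logger.child({ module: 'payments' });

paymentLogger.info('Charge created', { orderId: '456', amountCents: 1999 });
paymentLogger.warn('Charge retried', { orderId: '456', attempt: 2 });
```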

## Request Context

```javascript
const { AsyncLocalStorage } = require('async_hooks');
const { v4: uuid } = require('uuid');

const storage = new AsyncLocalStorage();

// Express middleware: run the rest of the request pipeline inside a context
// carrying the correlation ID and user ID.
app.use((req, res, next) => {
  const context = {
    requestId: req.headers['x-request-id'] || uuid(),
    userId: req.user?.id
  };
  storage.run(context, next);
});

// Merge the stored request context into every log entry.
function log(level, message, meta = {}) {
  const context = storage.getStore() || {};
  logger.log(level, message, { ...context, ...meta });
}
```
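
With the middleware installed, handlers can call the `log` helper and the stored `requestId`/`userId` are merged into every entry automatically; a minimal sketch (the `/orders` route and `createOrder` service call are illustrative only):

```javascript
// Route handler: no need to thread requestId through manually.
app.post('/orders', async (req, res) => {
  log('info', 'Order received', { itemCount: req.body.items?.length });
  try {
    const order = await createOrder(req.body); // hypothetical service call
    log('info', 'Order created', { orderId: order.id });
    res.status(201).json(order);
  } catch (err) {
    log('error', 'Order creation failed', { error: err.message });
    res.status(500).json({ error: 'internal_error' });
  }
});
```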

## PII Sanitization

```javascript
const sensitiveFields = ['password', 'ssn', 'creditCard', 'token'];

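// Shallow sanitizer: redacts known top-level fields and masks the email local part.
// Note: nested objects are not traversed.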
function sanitize(obj) {
  const sanitized = { ...obj };
  for (const field of sensitiveFields) {
    if (sanitized[field]) sanitized[field] = '[REDACTED]';
  }
  if (sanitized.email) {
    sanitized.email = sanitized.email.replace(/(.{2}).*@/, '$1***@');
  }
  return sanitized;
}
```
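
One way to centralize this is a custom Winston format that applies `sanitize` to every entry inside the format chain, so individual call sites cannot forget to redact; a minimal sketch, assuming the `sanitize` helper above (the `sanitizeFormat` name is illustrative):

```javascript
// Custom format: sanitize metadata on every log call before serialization.
const sanitizeFormat = winston.format((info) => Object.assign(info, sanitize(info)));

const safeLogger = winston.createLogger({
  level: process.env.LOG_LEVEL || 'info',
  format: winston.format.combine(
    sanitizeFormat(),
    winston.format.timestamp(),
    winston.format.json()
  ),
  transports: [new winston.transports.Console()]
});

safeLogger.info('Login attempt', { email: 'alice@example.com', password: 'hunter2' });
// => { "level": "info", "message": "Login attempt", "email": "al***@example.com", "password": "[REDACTED]", ... }
```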

## Best Practices

- Use structured JSON format
- Include correlation IDs across services (see the forwarding sketch below)
- Sanitize all PII before logging
- Use async logging for performance
- Implement log rotation (see the rotation sketch below)
- Never log at DEBUG in production
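
For cross-service correlation IDs, one common approach is to forward the current request's ID as a header on outgoing calls so downstream logs can be joined on the same `requestId`; this sketch assumes the `storage` instance from the Request Context section, Node 18+ global `fetch`, and an illustrative internal URL:

```javascript
// Forward the current correlation ID to a downstream service.
async function fetchStock(productId) {
  const { requestId } = storage.getStore() || {};
  return fetch(`https://inventory.internal/stock/${productId}`, {
    headers: { 'x-request-id': requestId || '' }
  });
}
```

For rotation, one option is the winston-daily-rotate-file transport (a separate npm package); a minimal sketch, assuming the `logger` from the Structured Logging section, with retention values chosen only as an example:

```javascript
require('winston-daily-rotate-file'); // registers winston.transports.DailyRotateFile

// Roll files daily, cap each at 20 MB, keep 14 days of history, gzip old files.
logger.add(new winston.transports.DailyRotateFile({
  filename: 'app-%DATE%.log',
  datePattern: 'YYYY-MM-DD',
  maxSize: '20m',
  maxFiles: '14d',
  zippedArchive: true
}));
```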

## Additional Implementations

See [references/advanced-logging.md](references/advanced-logging.md) for:
- Python structlog setup
- Go zap high-performance logging
- ELK Stack integration
- AWS CloudWatch configuration
- OpenTelemetry tracing

## Never Do

- Log passwords or tokens
- Use console.log in production
- Log inside tight loops (aggregate and log a summary instead; see the sketch below)
- Include stack traces for client errors
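
On the tight-loop point: count outcomes inside the loop and emit a single summary entry afterwards; a minimal sketch (`records` and `processRecord` are hypothetical):

```javascript
// Avoid per-iteration logging: count failures and log one summary line.
let failed = 0;
for (const record of records) {
  try {
    processRecord(record); // hypothetical per-item work
  } catch {
    failed += 1;
  }
}
logger.info('Batch processed', { total: records.length, failed });
```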

Overview

This skill teaches secure, structured logging practices for Node.js applications (the examples use JavaScript with Winston; the same patterns apply in TypeScript). It focuses on proper log levels, contextual metadata, PII sanitization, and integration with centralized log systems. The goal is reliable, searchable logs that protect user data and support debugging at scale.

How this skill works

The skill shows how to configure a structured JSON logger (example with Winston) and wire request context into every log entry using AsyncLocalStorage. It demonstrates sanitization routines to redact sensitive fields and patterns before logs are emitted. It also explains transports, rotation, and how to route logs into aggregation and tracing systems.

When to use it

  • Building new services that need production-grade observability
  • Integrating logs with ELK, CloudWatch, or other aggregators
  • Diagnosing log bloat, missing correlation IDs, or leaked PII
  • Onboarding distributed tracing and cross-service correlation
  • Optimizing performance by reducing synchronous logging overhead

Best practices

  • Adopt structured JSON format and consistent fields (timestamp, level, service, requestId)
  • Use appropriate levels: DEBUG off in production, INFO/WARN/ERROR/FATAL enabled
  • Propagate correlation IDs and include user/request context via AsyncLocalStorage
  • Sanitize or redact PII and mask emails before logging
  • Write logs asynchronously and enable rotation to avoid disk exhaustion
  • Avoid console.log in production and don’t log secrets or heavy stack traces for client errors

Example use cases

  • API service: include requestId and userId in every log to trace errors across services
  • Payment processing: redact creditCard and token fields and log errors at ERROR level
  • Background jobs: use WARN/ERROR for failures and INFO for lifecycle events
  • Distributed system: inject correlation IDs to connect logs with traces and metrics
  • Log management migration: switch transports to push JSON logs to ELK or CloudWatch

FAQ

How do I keep sensitive data out of logs?

Sanitize inputs before logging by redacting known sensitive fields and masking patterns like emails and tokens; centralize sanitization to avoid omissions.

When should I enable DEBUG level?

Enable DEBUG only in development or controlled troubleshooting windows. In production, keep DEBUG off to reduce volume and noise.