
pino-logging-setup skill

/.agents/skills/pino-logging-setup

This skill configures Pino logging with development-friendly colorized output and production JSON for aggregation, improving observability across apps.

```bash
npx playbooks add skill andrelandgraf/fullstackrecipes --skill pino-logging-setup
```


---
name: pino-logging-setup
description: Configure structured logging with Pino. Outputs human-readable colorized logs in development and structured JSON in production for log aggregation services.
---

# Pino Logging Setup

To set up Pino logging, refer to the fullstackrecipes MCP server resource:

**Resource URI:** `recipe://fullstackrecipes.com/pino-logging-setup`

If the MCP server is not configured, fetch the recipe directly:

```bash
curl -H "Accept: text/plain" https://fullstackrecipes.com/api/recipes/pino-logging-setup
```

## Overview

This skill configures structured logging with Pino for TypeScript full‑stack apps. It emits human‑readable, colorized logs in development and structured JSON in production so logs can be ingested by aggregation services. The setup includes sensible defaults, serializers, and environment detection to minimize manual tuning.

## How this skill works

The skill provides a Pino configuration that switches transports and formatting based on NODE_ENV. In development it uses a pretty‑print transport with colors and readable timestamps. In production it disables pretty printing and emits compact JSON with consistent fields (level, time, pid, service, msg) and optional serializers for errors and request/response objects to keep logs parsable by downstream systems.
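A minimal sketch of such a configuration, assuming Pino v7+ with `pino-pretty` installed as a dev dependency; the service name and defaults are illustrative, not part of the skill itself:

```typescript
import pino from "pino";

const isProduction = process.env.NODE_ENV === "production";

// Pretty, colorized output locally; compact JSON in production.
export const logger = pino({
  level: process.env.LOG_LEVEL ?? (isProduction ? "info" : "debug"),
  base: { service: "my-service" }, // "my-service" is a placeholder
  ...(isProduction
    ? {}
    : {
        transport: {
          target: "pino-pretty",
          options: { colorize: true, translateTime: "SYS:standard" },
        },
      }),
});
```

Because the `transport` key is only added outside production, production output stays plain newline-delimited JSON that aggregation services can parse.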

## When to use it

- During initial project setup to add consistent logging across server code.
- When you need readable logs locally but structured JSON for log aggregation in production.
- When deploying to container platforms or log collectors like ELK, Loki, or Datadog.
- When you want TypeScript-friendly typings and safe serializers for error objects.

## Best practices

- Detect environment from NODE_ENV and avoid pretty printing in production to reduce parsing issues.
- Normalize common fields (service, requestId, userId) to aid querying in log systems.
- Use serializers for Error and HTTP objects to avoid circular references and reduce noise.
- Set log level via env var (e.g., LOG_LEVEL) and default to info in production, debug in development.
- Avoid logging sensitive data; redact or omit PII before emitting logs.
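The level, normalization, and redaction practices above can be sketched in one configuration; the field paths and service name here are assumptions for illustration:

```typescript
import pino from "pino";

export const logger = pino({
  // LOG_LEVEL overrides the default; "info" keeps production logs lean.
  level: process.env.LOG_LEVEL ?? "info",
  // Normalized fields attached to every record for easier querying.
  base: { service: "my-service" },
  // Redact PII and credentials before a record is emitted.
  redact: {
    paths: ["req.headers.authorization", "user.email", "*.password"],
    censor: "[REDACTED]",
  },
});
```

Pino's `redact` option replaces matching paths with the censor string at serialization time, so sensitive values never reach the log stream.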

## Example use cases

- Express or Fastify API: attach a request logger that records method, path, status, latency, and requestId.
- Next.js API routes or serverless functions: produce compact JSON logs for cloud logging services.
- Kubernetes deployments: ship structured logs to Fluentd/Fluent Bit and query by service and requestId.
- Local development: readable, colorized output for faster debugging without changing production behavior.
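For the Express/Fastify case, one common approach is the `pino-http` middleware; a sketch assuming `pino-http` is installed (the header name and service name are illustrative):

```typescript
import { randomUUID } from "node:crypto";
import pino from "pino";
import pinoHttp from "pino-http";

// pino-http logs method, url, status code, and response time per request.
export const httpLogger = pinoHttp({
  logger: pino({ base: { service: "api" } }),
  // Reuse an incoming x-request-id header, or generate a fresh requestId.
  genReqId: (req) => (req.headers["x-request-id"] as string) ?? randomUUID(),
});

// Usage with Express: app.use(httpLogger);
```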

## FAQ

### How do I switch between pretty and JSON output?

The config checks NODE_ENV and toggles the pretty transport; set NODE_ENV=production to enable JSON output.
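The toggle can be isolated in a small pure helper, which also makes it easy to unit-test; the names here are illustrative, not part of the skill:

```typescript
// Builds Pino options from environment values; the pretty transport is
// only attached outside production so production output stays plain JSON.
export interface LoggerOptions {
  level: string;
  transport?: { target: string; options: { colorize: boolean } };
}

export function buildLoggerOptions(nodeEnv?: string, logLevel?: string): LoggerOptions {
  const isProduction = nodeEnv === "production";
  const options: LoggerOptions = {
    level: logLevel ?? (isProduction ? "info" : "debug"),
  };
  if (!isProduction) {
    options.transport = { target: "pino-pretty", options: { colorize: true } };
  }
  return options;
}
```

The resulting object can be passed straight to `pino(buildLoggerOptions(process.env.NODE_ENV, process.env.LOG_LEVEL))`.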

### Can I add custom fields to every log entry?

Yes. Attach a base object (for example service and version) to the logger so those fields appear on every record.
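A sketch of attaching such a base object (the service name and version source are assumptions):

```typescript
import pino from "pino";

// Every record emitted by this logger includes service and version fields.
export const logger = pino({
  base: { service: "my-service", version: process.env.APP_VERSION ?? "dev" },
});

logger.info("server started");
```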

### How should I handle error objects?

Use Pino serializers to extract message, stack, and code from Error instances to produce consistent, non‑circular JSON.
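Pino ships a standard error serializer for this; a minimal sketch:

```typescript
import pino from "pino";

// pino.stdSerializers.err converts an Error into a plain object with
// type, message, and stack, avoiding circular references in the JSON.
export const logger = pino({
  serializers: { err: pino.stdSerializers.err },
});

logger.error({ err: new Error("database unavailable") }, "query failed");
```

By convention the serializer is keyed as `err`, so errors should be logged under that property name for it to apply.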