
logging skill


This skill centralizes logging across distributed systems using ELK, Loki, and Fluentd to enable parsing, retention, and actionable insights.

npx playbooks add skill pluginagentmarketplace/custom-plugin-devops --skill logging

Review the files below or copy the command above to add this skill to your agents.

Files (7)
SKILL.md
---
name: logging
description: Centralized logging with ELK Stack, Loki, Fluentd, and log analysis for distributed systems
sasmp_version: "1.3.0"
bonded_agent: 06-monitoring-observability
bond_type: PRIMARY_BOND
---

# Logging Skill

## MANDATORY
- ELK Stack (Elasticsearch, Logstash, Kibana)
- Fluentd/Fluent Bit log collection
- Loki and Promtail
- Log formats and parsing
- Index management and retention

## OPTIONAL
- Splunk fundamentals
- Graylog setup
- Log-based alerting
- Structured logging patterns
- Log correlation

## ADVANCED
- Log analytics and ML
- Multi-cluster log aggregation
- Compliance and audit logging
- High-volume log processing
- Custom log pipelines

## Assets
- See `assets/logging-stack.yaml` for configuration templates

Overview

This skill implements centralized logging for distributed systems using ELK Stack, Loki, Fluentd/Fluent Bit, and log analysis techniques. It focuses on reliable collection, parsing, indexing, and retention so teams can search, visualize, and alert on operational data. The skill includes patterns for structured logging, index management, and scalable pipelines.

How this skill works

The skill wires collectors (Fluentd/Fluent Bit, Promtail) to ingestion back ends (Logstash, Loki, Elasticsearch) and configures parsers and pipelines to normalize log formats. It sets index lifecycle policies and retention rules so storage and query performance remain predictable. It also provides guidance for log correlation, alerting hooks, and optional integrations with Splunk or Graylog.
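As a concrete sketch of the collector-to-backend wiring, the fragment below shows a minimal Fluent Bit pipeline (YAML config mode) that tails container logs, enriches them with Kubernetes metadata, and forwards to Elasticsearch. The host, port, and tag values are illustrative placeholders, not part of the skill's shipped configuration.

```yaml
# Minimal Fluent Bit pipeline sketch (YAML config mode).
# Hostnames, paths, and tags are placeholders for your environment.
pipeline:
  inputs:
    - name: tail
      path: /var/log/containers/*.log
      parser: docker
      tag: kube.*
  filters:
    - name: kubernetes
      match: kube.*
      merge_log: on          # lift JSON payloads into structured fields
  outputs:
    - name: es
      match: kube.*
      host: elasticsearch.logging.svc
      port: 9200
      logstash_format: on    # time-based indices, so ILM can roll them over
```

Enabling `logstash_format` is what makes the index lifecycle policies described above applicable, since rollover and deletion operate on time-based indices.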

When to use it

  • Aggregating logs from multiple microservices, clusters, or data centers
  • When you need centralized search, dashboards, and ad-hoc analysis of logs
  • To implement retention and compliance policies across logs
  • When deploying observability for CI/CD pipelines and production systems
  • When you need scalable pipelines for high-volume log processing

Best practices

  • Emit structured logs (JSON) to simplify parsing and enable fielded queries
  • Run Fluentd/Fluent Bit close to the source (and Promtail when shipping to Loki), with buffering enabled, to minimize log loss
  • Define index lifecycle and retention policies to control storage costs
  • Parse and enrich logs at ingestion to tag services, environments, and request IDs
  • Correlate logs with traces and metrics using consistent identifiers

Example use cases

  • Centralize application, access, and audit logs from Kubernetes clusters into Elasticsearch and Kibana dashboards
  • Use Loki + Grafana for cost-effective, high-cardinality log queries alongside metrics
  • Set up Fluentd pipelines to parse legacy log formats and forward structured events
  • Implement log-based alerts for error rate spikes and security-relevant events
  • Aggregate multi-cluster logs with routing rules and per-tenant index management
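For the Loki + Grafana use case, a Promtail scrape configuration along these lines discovers Kubernetes pods and attaches low-cardinality labels. The Loki URL and label mappings are placeholders to adapt.

```yaml
# Promtail sketch: ship Kubernetes pod logs to Loki with service labels.
# The push URL and relabel targets are illustrative.
clients:
  - url: http://loki.logging.svc:3100/loki/api/v1/push
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        target_label: service
      - source_labels: [__meta_kubernetes_namespace]
        target_label: namespace
```

Keeping labels to a small, bounded set (service, namespace, environment) is what keeps Loki cost-effective; high-cardinality values such as request IDs belong in the log line, where they are still queryable with filters.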

FAQ

Which collector should I choose for Kubernetes?

Fluent Bit is lightweight and suited for edge collection in Kubernetes; Fluentd provides richer plugins for heavy processing before forwarding.
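A common pattern combining both: Fluent Bit runs as a lightweight node-level collector and forwards raw events to a central Fluentd aggregator for heavier parsing and routing. The sketch below shows the Fluent Bit side; the aggregator hostname is a placeholder.

```yaml
# Edge/aggregator split sketch: Fluent Bit on each node forwards to a
# central Fluentd over the forward protocol. Hostname is illustrative.
pipeline:
  inputs:
    - name: tail
      path: /var/log/containers/*.log
      tag: kube.*
  outputs:
    - name: forward
      match: '*'
      host: fluentd-aggregator.logging.svc
      port: 24224
```

This keeps per-node resource usage minimal while concentrating plugin-heavy processing where it can be scaled independently.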

How do I control storage costs?

Apply index lifecycle management, compress older indices, and tier storage; consider Loki for long-term, label-based retention when full-text search is not required.
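As one example of the index lifecycle approach, an Elasticsearch ILM policy can roll indices over daily, compact them after a week, and delete them after thirty days. The thresholds below are illustrative starting points, not recommended values for every workload.

```json
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb", "max_age": "1d" }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "forcemerge": { "max_num_segments": 1 },
          "shrink": { "number_of_shards": 1 }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

Attach the policy via an index template so every rolled-over index inherits it automatically.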