
log-aggregation-configurator skill

/.claude/skills/log-aggregation-configurator

This skill helps you configure centralized logging with ELK, Loki, or Splunk, enabling scalable log management and observability across services.

npx playbooks add skill dexploarer/hyper-forge --skill log-aggregation-configurator

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
1.1 KB
---
name: log-aggregation-configurator
description: Set up centralized logging with ELK, Loki, or Splunk for log management
allowed-tools: [Read, Write, Edit, Bash, Grep, Glob]
---

# Log Aggregation Configurator

Set up centralized logging with ELK, Loki, or Splunk for log management

## When to Use

This skill activates when you need to set up centralized logging with ELK, Loki, or Splunk for log management.

## Quick Example

```yaml
# Illustrative minimal Promtail client block pointing at a Loki endpoint
# (placeholder URL); fuller agent and ELK sketches appear below.
clients:
  - url: http://loki:3100/loki/api/v1/push
```

## Best Practices

- ✅ Follow industry standards
- ✅ Document all configurations
- ✅ Test thoroughly before production
- ✅ Monitor and alert appropriately
- ✅ Perform regular maintenance and updates

## Related Skills

- `microservices-orchestrator`
- `compliance-auditor`
- Use `enterprise-architect` agent for design consultation

## Implementation Guide

[Detailed implementation steps would go here in production]

This skill provides comprehensive guidance for setting up centralized logging with ELK, Loki, or Splunk for log management.

Overview

This skill configures centralized logging for applications using ELK (Elasticsearch, Logstash, Kibana), Grafana Loki, or Splunk. It provides step-by-step guidance to collect, parse, store, and visualize logs across distributed services. The focus is on practical setup, integration tips, and production readiness for TypeScript-based platforms.

How this skill works

The skill inspects your logging needs and recommends an appropriate stack (ELK, Loki, or Splunk) based on retention, search, and resource constraints. It outlines agent deployment (Filebeat/Fluentd/Promtail), parsing rules, index/tenant design, and dashboard configuration. It also advises on secure transport, access controls, and alerting integrations.
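
As a sketch of the agent-deployment step, a minimal Filebeat input for an ELK stack might look like the following; the host, log paths, and the assumption that services emit newline-delimited JSON are all illustrative:

```yaml
filebeat.inputs:
  - type: filestream            # current replacement for the older `log` input
    id: app-logs
    paths:
      - /var/log/app/*.json     # placeholder path
    parsers:
      - ndjson:                 # parse each line as JSON into top-level fields
          target: ""
          add_error_key: true

output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]   # placeholder host
```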

When to use it

  • You need centralized log collection across microservices or game asset pipelines.
  • You want full-text search, structured log queries, or tenant-aware storage.
  • You must meet audit or compliance requirements for log retention and access.
  • You need scalable, searchable logs for debugging production incidents.
  • You plan to integrate logging with metrics and traces for observability.

Best practices

  • Define a consistent structured log format (JSON) and standard fields across services; a sample event follows this list.
  • Use lightweight agents (Promtail/Filebeat) and central parsing to reduce service overhead.
  • Apply index lifecycle policies or retention rules to control storage costs.
  • Secure data in transit with TLS and enforce RBAC for dashboards and APIs.
  • Validate parsing rules and dashboards in staging before promoting to production.
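
For the first practice above, a shared event shape keeps queries portable across services. The field names below are a suggested convention, not a fixed schema:

```yaml
# Suggested (not mandated) field set; all values are illustrative.
{"timestamp": "2024-05-01T12:00:00Z", "level": "error", "service": "asset-exporter",
 "trace_id": "9f2a7c31", "message": "glTF export failed", "error_code": "EXPORT_TIMEOUT"}
```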

Example use cases

  • Aggregate logs from a TypeScript-based 3D asset generation pipeline to debug export failures.
  • Provide developers searchable application logs and Kibana dashboards for performance issues.
  • Route debug and audit logs to different indexes or tenants to support multi-team access controls (see the tenant-routing sketch after this list).
  • Send critical error alerts from ELK/Loki/Splunk into PagerDuty or Slack for on-call response.
  • Comply with retention policies by moving older logs to cold storage or archived indexes.
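
For the multi-tenant routing case above, Loki deployments can use Promtail's `tenant` pipeline stage. This sketch assumes each JSON log line carries a `team` field naming its owner:

```yaml
scrape_configs:
  - job_name: app
    static_configs:
      - targets: [localhost]
        labels:
          job: app
          __path__: /var/log/app/*.json   # placeholder path
    pipeline_stages:
      - json:
          expressions:
            team: team      # extract the team field from the JSON payload
      - tenant:
          source: team      # sent as the X-Scope-OrgID tenant header
```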

FAQ

Which stack is best for low-cost, high-volume logs?

Loki is optimized for high-volume, low-cost storage of logs when you use label-based queries and compressed storage. ELK offers richer full-text search at higher storage cost.
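
In practice this means Loki queries should narrow by indexed labels before filtering line contents; the label names here are assumptions:

```logql
# Label matchers hit the index; the |= filter then scans only matching streams.
{job="app", env="prod"} |= "export failed"
```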

How do I handle sensitive data in logs?

Mask or redact sensitive fields at the agent or ingestion layer, apply strict ACLs, and audit access to logs regularly.
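
As one illustration, Promtail's `replace` pipeline stage can mask values before logs leave the host. The email regex below is deliberately simple and an assumption to adapt:

```yaml
pipeline_stages:
  - replace:
      # Capture anything that looks like an email address and mask it.
      expression: '([\w.+-]+@[\w-]+\.[\w.]+)'
      replace: 'REDACTED'
```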