log-analyzer skill

This skill analyzes log files, enabling pattern search, filtering, statistics, and error detection to improve debugging and operational insights.

npx playbooks add skill aidotnet/moyucode --skill log-analyzer

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
965 B
---
name: log-analyzer
description: Parse and analyze log files, with support for pattern matching, filtering, statistics, and error detection.
metadata:
  short-description: Analyze log files
source:
  repository: https://github.com/logpai/logparser
  license: MIT
---

# Log Analyzer Tool

## Description
Parse and analyze log files to extract patterns, filter entries, generate statistics, and detect errors.

## Trigger
- `/logs` command
- User needs to analyze logs
- User wants to find errors in logs

## Usage

```bash
# Analyze log file
python scripts/log_analyzer.py app.log

# Filter by level
python scripts/log_analyzer.py app.log --level ERROR

# Search pattern
python scripts/log_analyzer.py app.log --grep "connection failed"

# Get statistics
python scripts/log_analyzer.py app.log --stats

# Tail mode
python scripts/log_analyzer.py app.log --tail 100
```

## Tags
`logs`, `analysis`, `debugging`, `monitoring`, `errors`

## Compatibility
- Codex: ✅
- Claude Code: ✅

Overview

This skill parses and analyzes log files to extract patterns, filter entries, produce statistics, and detect errors. It provides command-line options for level filtering, pattern searches, tailing, and summary statistics to speed up debugging and monitoring. The analyzer ships as a Python script but is useful for any line-based plain-text logs.

How this skill works

The tool reads log files and applies user-specified operations: pattern matching (grep-style), level filtering (e.g., ERROR), and real-time tailing. It computes simple statistics (counts by level, frequency of messages, time-window summaries) and flags likely errors or anomalous spikes. Results are printed to the console or streamed for pipelines.
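
As a rough sketch of those operations, the snippet below shows how level filtering, regex search, tailing, and per-level counts might fit together. It is illustrative only: the flag semantics mirror the documented CLI, but the parsing regex and function names are assumptions, not the actual scripts/log_analyzer.py implementation.

```python
import re
from collections import Counter

# Assumed line shape: "2024-05-01 12:00:00 ERROR connection failed"
LINE_RE = re.compile(r"^\S+ \S+ (?P<level>[A-Z]+) (?P<message>.*)$")

def analyze(path, level=None, grep=None, tail=None):
    with open(path, encoding="utf-8", errors="replace") as f:
        lines = f.readlines()
    if tail:                                   # --tail N: last N lines only
        lines = lines[-tail:]
    pattern = re.compile(grep) if grep else None
    by_level, messages = Counter(), Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue                           # skip unparseable lines
        if level and m["level"] != level:
            continue                           # --level severity filter
        if pattern and not pattern.search(line):
            continue                           # --grep regex filter
        by_level[m["level"]] += 1
        messages[m["message"]] += 1
        print(line, end="")                    # matching entries to stdout
    print("counts by level:", dict(by_level))  # --stats style summary
    print("top messages:", messages.most_common(5))
```

A production version would stream the file line by line instead of loading it whole, and a true tail mode would follow the file as it grows rather than slicing a snapshot.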

When to use it

  • Investigating application crashes or repeated errors
  • Filtering large logs to focus on specific severity levels
  • Searching for occurrences of a text or regex pattern
  • Generating quick statistics about traffic or error rates
  • Tailing logs in real time during deployments or troubleshooting

Best practices

  • Run pattern searches with precise regex to reduce noise
  • Combine level filters with time ranges when available to target relevant windows
  • Use the --stats summary before deep inspection to prioritize investigation
  • Pipe filtered output to other tools (grep, awk, jq) for complex analysis
  • Keep a copy of raw logs before aggressive filtering to preserve auditability

Example use cases

  • Run a full analysis to enumerate ERROR occurrences and top error messages from a crash window
  • Tail the last 100 lines of a service log during a deploy to watch for regressions
  • Filter by level ERROR to create a short report for an on-call rotation
  • Search logs for 'connection failed' across multiple files to identify network issues
  • Generate statistics to compare error rates before and after a code change (see the sketch below)
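
As one way to do that before/after comparison, here is a small sketch that computes the fraction of ERROR lines per file. The file names are hypothetical, and the regex assumes level keywords appear verbatim in each line.

```python
import re

LEVEL_RE = re.compile(r"\b(ERROR|WARN|INFO|DEBUG)\b")

def error_rate(path):
    """Fraction of level-tagged lines that are ERROR."""
    total = errors = 0
    with open(path, encoding="utf-8", errors="replace") as f:
        for line in f:
            m = LEVEL_RE.search(line)
            if not m:
                continue                 # ignore lines with no level tag
            total += 1
            if m.group(1) == "ERROR":
                errors += 1
    return errors / total if total else 0.0

# Hypothetical before/after captures of the same service
print(f"before: {error_rate('app.before.log'):.2%}")
print(f"after:  {error_rate('app.after.log'):.2%}")
```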

FAQ

What input formats are supported?

Plain-text log files with line-based entries. Structured JSON logs are supported when each line is a parseable JSON object (JSON Lines style).
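
A sketch of how that JSON fallback might work; the level and message/msg field names are common logging conventions assumed here, not fields confirmed by the script:

```python
import json

def parse_line(line):
    """Try JSON first; otherwise treat the line as plain text."""
    try:
        obj = json.loads(line)
    except json.JSONDecodeError:
        return {"level": None, "message": line.rstrip("\n")}
    # Structured entry: pull out commonly used fields if present
    return {"level": obj.get("level"),
            "message": obj.get("message") or obj.get("msg", "")}

print(parse_line('{"level": "ERROR", "msg": "connection failed"}'))
print(parse_line("plain text entry"))
```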

Can I use regex patterns?

Yes. The --grep option accepts regular expressions for flexible pattern matching.