
tech-debt-analyzer skill

/.claude/skills/tech-debt-analyzer

This skill analyzes and prioritizes technical debt across code quality factors and provides actionable refactoring recommendations to reduce risk and cost.

npx playbooks add skill ntaksh42/agents --skill tech-debt-analyzer

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.0 KB
---
name: tech-debt-analyzer
description: Analyze and prioritize technical debt with refactoring recommendations. Use when evaluating code quality or planning debt reduction.
---

# Tech Debt Analyzer Skill

A skill that analyzes technical debt and assigns priorities.

## Key Features

- **Code complexity**: cyclomatic complexity
- **Duplicated code**: copy-paste detection
- **Outdated dependencies**: dependencies that need updating
- **TODO/FIXME**: unresolved tasks
- **Test coverage**: under-covered areas
- **Prioritization**: scored by impact and cost
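
The complexity check can be sketched with Python's `ast` module. This is a rough McCabe-style approximation written for illustration, not the skill's actual implementation:

```python
import ast

# Node types treated as decision points (one extra path each).
_BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, _BRANCH_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # Each extra `and`/`or` operand adds a path.
            complexity += len(node.values) - 1
    return complexity

code = """
def grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(code))  # two branches -> complexity 3
```

Production analyzers (e.g. radon for Python, ESLint's `complexity` rule for TypeScript) use the same idea with more node types and per-function reporting.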

## Example Analysis Report

```markdown
# Technical Debt Analysis Report

## Summary
- **Total debt score**: 157 points
- **Estimated remediation time**: 8 weeks
- **Critical**: 3 items
- **High**: 12 items
- **Medium**: 25 items

## Critical Debt (immediate action required)

### 1. Outdated Node.js version (v14)
- **Impact**: security risk, degraded performance
- **Cost**: 2 days
- **Priority**: 🔴 Critical
- **Action**: upgrade to Node.js 18

### 2. Insufficient test coverage (42%)
- **Impact**: increased bug risk
- **Cost**: 2 weeks
- **Priority**: 🔴 Critical
- **Action**: add tests targeting 80% coverage

### 3. Vulnerable dependency (lodash 4.17.15)
- **Impact**: CVE-2020-8203
- **Cost**: 1 hour
- **Priority**: 🔴 Critical
- **Action**: npm update lodash

## High Debt

### Code complexity
- **File**: `src/order/processor.ts`
- **Complexity**: 45 (recommended: <10)
- **Cost**: 3 days
- **Action**: refactor

### Duplicated code
- **Locations**: 15
- **Duplication rate**: 23%
- **Cost**: 1 week
- **Action**: extract shared code

## Recommended Execution Order

1. Fix vulnerabilities (1 hour)
2. Upgrade Node.js (2 days)
3. Refactor Critical complex code (1 week)
4. Increase test coverage (2 weeks)
5. Consolidate duplicated code (1 week)
```

## Version Info
- Version: 1.0.0

Overview

This skill analyzes and prioritizes technical debt across a codebase, then generates concrete refactoring and remediation recommendations. It combines static metrics like cyclomatic complexity and duplication with dependency health, TODOs, and test coverage to produce a ranked action plan. The output is a prioritized report with estimated effort and impact to guide remediation sprints.

How this skill works

The analyzer scans source files for complexity hotspots, duplicated blocks, and TODO/FIXME markers while inspecting dependency manifests for outdated or vulnerable packages. It aggregates test coverage data and scores each finding by impact and estimated remediation cost. Findings are ranked into Critical/High/Medium groups with recommended remediation steps and an ordered execution plan.
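
The TODO/FIXME scan described above can be sketched in a few lines of Python. The file extensions and marker pattern here are illustrative assumptions, not the skill's actual configuration:

```python
import re
from pathlib import Path

# Markers the scan treats as unresolved-task debt.
MARKER = re.compile(r"\b(TODO|FIXME)\b[:\s]?(.*)")

def scan_markers(root: str, exts=(".py", ".ts", ".js")):
    """Collect TODO/FIXME markers as (file, line number, text) findings."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in exts:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            m = MARKER.search(line)
            if m:
                findings.append((str(path), lineno, m.group(0).strip()))
    return findings
```

Each finding carries enough context (file, line, text) to be scored and placed in the ranked report alongside complexity and dependency findings.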

When to use it

  • Before a release to identify high-risk issues
  • When planning a refactoring or technical debt reduction sprint
  • During onboarding to orient new engineers to risky areas
  • After dependency audits or security scans
  • When test coverage is below target and regressions increase

Best practices

  • Run the analyzer on the full repository, with test coverage reports and dependency lockfiles present, to get accurate coverage and dependency data
  • Treat Critical items (vulnerabilities, failing tests, very high complexity) as immediate priorities
  • Estimate remediation cost conservatively and include integration/testing overhead
  • Use the recommended execution order to balance quick fixes and longer refactors
  • Track progress by re-running the analyzer after each remediation cycle to measure debt reduction

Example use cases

  • Detect and prioritize vulnerable or outdated dependencies for urgent patching
  • Identify files with dangerously high cyclomatic complexity for targeted refactoring
  • Find and consolidate duplicated code paths to reduce maintenance cost
  • Create a time-boxed plan to raise test coverage to a safe threshold
  • Produce a sprint backlog of technical debt tasks with effort estimates and priorities

FAQ

What inputs does the analyzer require?

Point it at the repository root; it reads source files, dependency manifests, and test coverage reports if available.

How are priorities determined?

Findings are scored by impact (security, correctness, maintainability) and estimated remediation cost; this score determines Critical/High/Medium grouping and execution order.
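
As a sketch of how impact/cost scoring could work, the following divides an impact weight by estimated effort, so high-impact, low-cost findings rank first. The weights, thresholds, and sample findings are invented for illustration; the skill's actual scoring model is not published:

```python
from dataclasses import dataclass

# Hypothetical impact weights (higher = more severe).
IMPACT_WEIGHTS = {"security": 10, "correctness": 6, "maintainability": 3}

@dataclass
class Finding:
    title: str
    impact: str        # "security" | "correctness" | "maintainability"
    cost_days: float   # estimated remediation effort

def priority(f: Finding) -> float:
    """Higher impact and lower cost yield a higher priority score."""
    return IMPACT_WEIGHTS[f.impact] / max(f.cost_days, 0.1)

def bucket(score: float) -> str:
    """Map a score to the report's Critical/High/Medium groups."""
    if score >= 20:
        return "Critical"
    if score >= 5:
        return "High"
    return "Medium"

findings = [
    Finding("Vulnerable lodash 4.17.15", "security", 0.125),   # ~1 hour
    Finding("processor.ts complexity 45", "maintainability", 3),
    Finding("Coverage at 42%", "correctness", 10),
]
for f in sorted(findings, key=priority, reverse=True):
    print(bucket(priority(f)), f.title)
```

Sorting by this score naturally reproduces the report's execution order: cheap security fixes first, long-running coverage work later.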