This skill validates code quality across languages by enforcing linters, SOLID principles, and architecture compliance through a mandatory seven-phase workflow.
```
npx playbooks add skill fusengine/agents --skill code-quality
```
---
name: code-quality
description: Code quality validation with linters, SOLID principles, DRY detection, error detection, and architecture compliance across all languages.
argument-hint: "[file-or-directory]"
user-invocable: true
---
# Code Quality Skill
## 🚨 MANDATORY 7-PHASE WORKFLOW
```
PHASE 1: Exploration (explore-codebase) → BLOCKER
PHASE 2: Documentation (research-expert) → BLOCKER
PHASE 3: Impact Analysis (Grep usages) → BLOCKER
PHASE 3.5: DRY Detection (jscpd duplication) → NON-BLOCKING
PHASE 4: Error Detection (linters)
PHASE 5: Precision Correction (with docs + impact + DRY)
PHASE 6: Verification (re-run linters, tests, duplication)
```
**CRITICAL**: Phases 1-3 are BLOCKERS. Never skip them.
**DRY**: Phase 3.5 is non-blocking, but its findings inform Phase 5 corrections.
---
## PHASE 1: Architecture Exploration
**Launch explore-codebase agent FIRST**:
```
> Use Task tool with subagent_type="explore-codebase"
```
**Gather**:
1. Programming language(s) detected
2. Existing linter configs (.eslintrc, .prettierrc, pyproject.toml)
3. Package managers and installed linters
4. Project structure and conventions
5. Framework versions (package.json, go.mod, Cargo.toml)
6. Architecture patterns (Clean, Hexagonal, MVC)
7. State management (Zustand, Redux, Context)
8. Interface/types directories location
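The eight items above can be captured in a single structured summary before moving on to Phase 2. Below is a minimal sketch of such a summary; the interface and its field names are illustrative assumptions, not a schema defined by explore-codebase.
```
// Hypothetical shape for the Phase 1 findings (field names are illustrative only).
interface ExplorationSummary {
  languages: string[];                  // e.g. ["typescript", "python"]
  linterConfigs: string[];              // e.g. [".eslintrc.json", "pyproject.toml"]
  packageManagers: string[];            // e.g. ["pnpm"]
  frameworks: Record<string, string>;   // name -> version from package.json / go.mod / Cargo.toml
  architecture: "clean" | "hexagonal" | "mvc" | "unknown";
  stateManagement?: "zustand" | "redux" | "context";
  typesDirectory?: string;              // e.g. "src/types"
}
```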
---
## PHASE 2: Documentation Research
**Launch research-expert agent**:
```
> Use Task tool with subagent_type="research-expert"
> Request: Verify [library/framework] documentation for [error type]
> Request: Find [language] best practices for [specific issue]
```
**Request for each error**:
- Official API documentation
- Current syntax and deprecations
- Best practices for error patterns
- Version-specific breaking changes
- Security advisories
- Language-specific SOLID patterns
---
## PHASE 3: Impact Analysis
**For EACH element to modify**: Grep usages → assess risk → document impact.
| Risk | Criteria | Action |
|------|----------|--------|
| 🟢 LOW | Internal, 0-1 usages | Proceed |
| 🟡 MEDIUM | 2-5 usages, compatible | Proceed with care |
| 🔴 HIGH | 5+ usages OR breaking | Flag to user FIRST |
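As an illustration of how a grep usage count maps onto the table above, here is a hedged sketch; the function and its signature are assumptions for illustration, not part of the skill.
```
// Illustrative only: maps a usage count plus a breaking-change flag to the risk levels above.
type Risk = "LOW" | "MEDIUM" | "HIGH";

function classifyRisk(usages: number, isBreaking: boolean): Risk {
  if (isBreaking || usages > 5) return "HIGH"; // flag to user FIRST
  if (usages >= 2) return "MEDIUM";            // compatible, 2-5 usages: proceed with care
  return "LOW";                                // internal, 0-1 usages: proceed
}
```
Exactly five compatible usages sits on the table's boundary; the sketch treats it as MEDIUM.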
---
## PHASE 3.5: Code Duplication Detection (DRY)
**Tool**: `jscpd` (150+ languages). Run: `npx jscpd ./src --threshold 5 --reporters console,json`
| Level | Threshold | Action |
|-------|-----------|--------|
| 🟢 Excellent | < 3% | No action needed |
| 🟡 Good | 3-5% | Document, fix if time |
| 🟠 Acceptable | 5-10% | Extract shared logic |
| 🔴 Critical | > 10% | Mandatory refactoring |
See [references/duplication-thresholds.md](references/duplication-thresholds.md) for per-language thresholds, config, and extraction patterns.
See [references/linter-commands.md](references/linter-commands.md) for language-specific jscpd commands.
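For context on how Phase 3.5 findings feed Phase 5, the usual correction is extracting the duplicated block into a shared helper. A minimal sketch with invented names:
```
// Before (what jscpd reports): the same check duplicated in two modules.
// function validateUserEmail(email: string) { return /\S+@\S+\.\S+/.test(email.trim()); }
// function validateContactEmail(email: string) { return /\S+@\S+\.\S+/.test(email.trim()); }

// After: one shared helper, imported by both call sites.
export function isValidEmail(email: string): boolean {
  return /\S+@\S+\.\S+/.test(email.trim());
}
```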
---
## Linter Commands
See [references/linter-commands.md](references/linter-commands.md) for language-specific commands.
---
## Error Priority Matrix
| Priority | Type | Examples | Action |
|----------|------|----------|--------|
| **Critical** | Security | SQL injection, XSS, CSRF, auth bypass | Fix IMMEDIATELY |
| **High** | Logic | SOLID violations, memory leaks, race conditions | Fix same session |
| **High** | DRY | Code duplication > 10%, copy-paste logic blocks | Mandatory refactoring |
| **Medium** | DRY | Code duplication 5-10%, repeated patterns | Extract shared logic |
| **Medium** | Performance | N+1 queries, deprecated APIs, inefficient algorithms | Fix if time |
| **Low** | Style | Formatting, naming, missing docs | Fix if time |
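To ground the Critical row, a typical fix replaces string-built SQL with a parameterized query. A hedged sketch assuming a node-postgres (`pg`) client; the table and column names are invented:
```
import { Pool } from "pg";

const pool = new Pool();

// Vulnerable (Critical): user input concatenated into the SQL text.
// const result = await pool.query(`SELECT * FROM users WHERE id = '${userId}'`);

// Fixed: a parameterized query keeps the input out of the SQL text.
async function findUser(userId: string) {
  const result = await pool.query("SELECT * FROM users WHERE id = $1", [userId]);
  return result.rows[0];
}
```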
---
## SOLID Validation
See [references/solid-validation.md](references/solid-validation.md) for S-O-L-I-D detection patterns and fix examples.
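As one concrete instance of what the reference covers, a Dependency Inversion fix replaces a hard-wired concrete dependency with an injected abstraction. A minimal sketch with invented names:
```
// Before (DIP violation): the service constructs a concrete mailer itself.
// class SignupService { private mailer = new SmtpMailer(); }

// After: depend on an abstraction and inject the implementation.
interface Mailer {
  send(to: string, subject: string, body: string): Promise<void>;
}

class SignupService {
  constructor(private readonly mailer: Mailer) {}

  async signUp(email: string): Promise<void> {
    // ...create the account, then notify...
    await this.mailer.send(email, "Welcome", "Your account is ready.");
  }
}
```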
---
## File Size Rules
See [references/file-size-rules.md](references/file-size-rules.md) for LoC limits, calculation, and split strategies.
---
## Architecture Rules
See [references/architecture-patterns.md](references/architecture-patterns.md) for project structures and patterns.
---
## Validation Report Format
See [references/validation-report.md](references/validation-report.md) for the complete sniper report template.
---
## Complete Workflow Example
See [references/examples.md](references/examples.md) for detailed walkthrough.
---
## Forbidden Behaviors
### Workflow Violations
- ❌ Skip PHASE 1 (explore-codebase)
- ❌ Skip PHASE 2 (research-expert)
- ❌ Skip PHASE 3 (impact analysis)
- ❌ Skip PHASE 3.5 (DRY detection)
- ❌ Jump to corrections without completing Phases 1-3
- ❌ Proceed when BLOCKER is active
### Code Quality Violations
- ❌ Leave ANY linter errors unfixed
- ❌ Apply fixes that introduce new errors
- ❌ Ignore SOLID violations
- ❌ Ignore DRY violations > 5% duplication
- ❌ Copy-paste code instead of extracting shared logic
- ❌ Create tests if project has none
### Architecture Violations
- ❌ Interfaces in component files (ZERO TOLERANCE; see the sketch after this list)
- ❌ Business logic in components (must live in hooks)
- ❌ Monolithic components (must be split into sections)
- ❌ Files >100 LoC without a split
- ❌ Local state for global data (use stores)
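A minimal sketch of the compliant shape these rules imply, assuming a React project with Zustand; file paths, names, and the `src/types/cart.ts` module are illustrative, not mandated by the skill.
```
// src/hooks/useCart.ts: business logic lives in a hook backed by a global store;
// the CartItem interface lives in src/types/cart.ts, never in a component file.
import { create } from "zustand";
import type { CartItem } from "../types/cart";

interface CartState {
  items: CartItem[];
  add: (item: CartItem) => void;
}

const useCartStore = create<CartState>((set) => ({
  items: [],
  add: (item) => set((state) => ({ items: [...state.items, item] })),
}));

export function useCart() {
  const items = useCartStore((state) => state.items);
  const add = useCartStore((state) => state.add);
  const total = items.reduce((sum, item) => sum + item.quantity, 0);
  return { items, total, add };
}
```
A component would then call `useCart()` and render, keeping no business logic or interface declarations of its own.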
### Safety Violations
- ❌ High-risk changes without user approval
- ❌ Breaking backwards compatibility silently
- ❌ Modifying public APIs without deprecation
---
This skill validates code quality across languages by combining automated linters, SOLID principle checks, error detection, and architecture compliance. It enforces a mandatory seven-phase workflow that blocks fixes until exploration, research, and impact analysis are complete. The goal is safe, low-risk corrections and verifiable outcomes (linters, types, tests).
First the skill explores the codebase to detect languages, linters, package managers, frameworks, architecture patterns, and interface locations. Next it runs targeted research to fetch official docs, deprecations, and best practices for each issue. It performs usage grep searches and a risk assessment per element, then runs linters, applies precision fixes with documentation and impact notes, and finally re-runs linters/tests for verification.
**What happens if a change is high risk?**
High-risk changes are flagged and require explicit user approval before modification; the report lists the affected files and recommended mitigations.

**Are linters run automatically for every language?**
Yes: runs are driven by detected linters or provided configurations; missing configs are reported along with suggested commands.