
quality-audit skill


This skill runs real quality analysis tools against a codebase, producing actionable reports across security, dependencies, performance, and accessibility.

```bash
npx playbooks add skill lukeslp/dreamer-skills --skill quality-audit
```


SKILL.md
---
name: quality-audit
description: "Code quality auditing skill that runs real analysis tools in the sandbox. Use when: auditing a codebase for security vulnerabilities, checking dependency health, scanning for accessibility issues, measuring performance, running linters, or generating a comprehensive quality report for a project."
---

# Quality Audit

Run real quality analysis tools against a codebase and produce actionable reports. Unlike guideline-only approaches, this skill installs and executes actual tools (npm audit, bandit, eslint, ruff, etc.) in the Manus sandbox and reports real findings.

## Quick Start

### Full audit (all domains)
```bash
bash /home/ubuntu/skills/quality-audit/scripts/audit.sh /path/to/project
```

### Single domain
```bash
bash /home/ubuntu/skills/quality-audit/scripts/audit.sh /path/to/project --domain security
bash /home/ubuntu/skills/quality-audit/scripts/audit.sh /path/to/project --domain deps
bash /home/ubuntu/skills/quality-audit/scripts/audit.sh /path/to/project --domain perf
bash /home/ubuntu/skills/quality-audit/scripts/audit.sh /path/to/project --domain a11y
```

Reports are saved to `<project>/.audit-reports/` as JSON files for programmatic use.

## Audit Domains

| Domain | What It Checks | Tools Used |
|--------|---------------|------------|
| deps | Vulnerable dependencies, outdated packages | npm audit, pip-audit |
| security | Exposed secrets, dangerous patterns, code vulnerabilities | bandit, grep patterns, .gitignore checks |
| perf | Bundle size, large files, heavy dependencies | du, find, bundle analysis |
| a11y | Alt text, lang attributes, viewport, heading hierarchy | HTML pattern scanning |

## Workflow

### 1. Detect Project Type

The audit script auto-detects the project type from manifest files:

| File Found | Detected Type | Tools Available |
|------------|--------------|-----------------|
| package.json | Node.js | npm audit, eslint, tsc |
| requirements.txt / pyproject.toml | Python | pip-audit, bandit, ruff, mypy |
| Cargo.toml | Rust | cargo audit, clippy |
| go.mod | Go | govulncheck |
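
The detection step described above can be sketched as a small shell function. This is an assumed implementation of the "check manifest files in order" logic, not the actual code in `audit.sh`:

```bash
# Sketch of manifest-based project detection (assumed logic; the real
# audit.sh may differ). Checks for well-known manifest files in order.
detect_project_type() {
  local dir="$1"
  if [ -f "$dir/package.json" ]; then echo "node"
  elif [ -f "$dir/requirements.txt" ] || [ -f "$dir/pyproject.toml" ]; then echo "python"
  elif [ -f "$dir/Cargo.toml" ]; then echo "rust"
  elif [ -f "$dir/go.mod" ]; then echo "go"
  else echo "unknown"
  fi
}

# Example: a throwaway directory with a package.json manifest
tmp="$(mktemp -d)"
echo '{}' > "$tmp/package.json"
detect_project_type "$tmp"   # prints: node
rm -rf "$tmp"
```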

### 2. Run Audit

The script installs missing tools automatically (using `sudo pip3 install` or `npx`), runs each tool, and saves structured output to `.audit-reports/`.
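
The install-on-demand behavior likely follows a "check, then install" pattern. A minimal sketch (the real script's logic may differ):

```bash
# Sketch of on-demand tool installation, assuming audit.sh checks for a
# command before installing it. ensure_tool and its arguments are
# illustrative names, not the script's actual API.
ensure_tool() {
  local cmd="$1" install_cmd="$2"
  if ! command -v "$cmd" >/dev/null 2>&1; then
    echo "installing $cmd ..."
    eval "$install_cmd"
  fi
}

# As audit.sh might call it (not executed here, to avoid side effects):
#   ensure_tool bandit "sudo pip3 install bandit"
#   ensure_tool eslint "npm install -g eslint"
```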

### 3. Interpret Results

After running the audit script, read the JSON reports for detailed findings. The console output provides a summary with color-coded PASS/WARN/FAIL indicators.
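
Since the reports are JSON, they can be summarized programmatically. The filename and schema below (a `findings` array with `severity` fields) are assumptions; inspect the files `audit.sh` actually writes. A stand-in report is created here so the snippet runs on its own:

```bash
# Create a stand-in report for illustration (the real one comes from audit.sh)
mkdir -p .audit-reports
cat > .audit-reports/security.json <<'EOF'
{"findings": [{"id": "S1", "severity": "critical"},
              {"id": "S2", "severity": "medium"},
              {"id": "S3", "severity": "medium"}]}
EOF

# Count findings per severity (python3 used for portability; jq works too)
python3 - <<'PY'
import json
from collections import Counter

data = json.load(open(".audit-reports/security.json"))
counts = Counter(f["severity"] for f in data["findings"])
for severity, n in sorted(counts.items()):
    print(f"{severity}: {n}")
PY
```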

### 4. Prioritize Fixes

Triage findings by severity:

| Priority | Category | Action |
|----------|----------|--------|
| Critical | Exposed secrets, known CVEs | Fix immediately |
| High | Security vulnerabilities, XSS risks | Fix before deploy |
| Medium | Outdated dependencies, missing a11y | Fix in next sprint |
| Low | Code style, minor warnings | Fix opportunistically |
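
One way to enforce the "fix immediately" tier in automation is a severity gate that fails the run when critical findings are present. The schema here is the same assumption as above (verify field names against the actual JSON); a stand-in report is written so the snippet is self-contained:

```bash
# Stand-in report; in practice this file comes from audit.sh
mkdir -p .audit-reports
echo '{"findings": [{"id": "D1", "severity": "low"}]}' > .audit-reports/deps.json

criticals=$(python3 -c '
import json
data = json.load(open(".audit-reports/deps.json"))
print(sum(1 for f in data["findings"] if f["severity"] == "critical"))
')

if [ "$criticals" -gt 0 ]; then
  echo "FAIL: $criticals critical finding(s) -- fix before deploy"
  exit 1
fi
echo "PASS: no critical findings"
```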

## Advanced Tools

For deeper analysis beyond the basic audit script, see `references/tools.md` for tool-specific commands covering linting, type checking, test coverage, bundle analysis, license compliance, and more.

## Parallel Audits

For large projects or monorepos, combine with the `swarm` skill to audit multiple packages in parallel. Each subtask can run the audit script on a different package and return structured results for comparison.
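
The fan-out pattern can also be done with plain shell job control. Here `run_audit` is a stand-in for the real `audit.sh` invocation, and the package layout is hypothetical:

```bash
# Stand-in for the real call; replace the body with something like
#   bash /home/ubuntu/skills/quality-audit/scripts/audit.sh "$1" --domain deps
run_audit() { echo "audited $1"; }

mkdir -p demo-repo/packages/api demo-repo/packages/web
for pkg in demo-repo/packages/*/; do
  run_audit "$pkg" &        # one background job per package
done
wait                        # block until every audit finishes
```

Each package then ends up with its own `.audit-reports/` directory, ready for comparison.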

## Overview

This skill runs real quality analysis tools inside a sandboxed environment and generates actionable JSON reports for a project. It executes dependency audits, security scanners, linters, accessibility checks, and performance probes rather than just offering guidelines. Reports are stored per-project for programmatic consumption and human triage.

## How this skill works

The audit script auto-detects project type from manifest files (`package.json`, `requirements.txt`, `Cargo.toml`, `go.mod`) and installs missing tools as needed. It runs domain-specific tools (npm audit, pip-audit, bandit, eslint, ruff, etc.), captures structured output, and writes JSON reports to `<project>/.audit-reports/`. The console summary highlights PASS/WARN/FAIL, and the reports include severity metadata to prioritize fixes.

## When to use it

- Before a release to catch security and dependency issues
- During sprint planning to prioritize technical debt and accessibility work
- When onboarding a codebase to quickly assess health across security, deps, perf, and a11y
- For CI or periodic checks to generate reproducible, machine-readable audit reports
- When evaluating a monorepo or multiple packages (combine with parallel-run workflows)

## Best practices

- Run a full audit first, then use domain flags (`security`, `deps`, `perf`, `a11y`) for targeted rechecks
- Keep `.audit-reports/` under version control or upload it to artifact storage for traceability
- Triage findings by severity: fix critical issues and exposed secrets immediately; schedule medium and low items into sprints
- Run audits in CI agents, caching installed tools to speed up repeated runs
- For large projects, split packages and run audits in parallel to reduce overall time
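
For the traceability point above, one lightweight approach is to snapshot each run's reports into a timestamped directory before uploading. The directory names are illustrative, and a stand-in report is created so the snippet runs on its own:

```bash
# Stand-in report; in practice .audit-reports/ is produced by audit.sh
mkdir -p .audit-reports
echo '{}' > .audit-reports/summary.json

# Archive this run's reports under a timestamped directory
stamp="$(date +%Y%m%d-%H%M%S)"
mkdir -p audit-history
cp -r .audit-reports "audit-history/$stamp"
echo "archived reports to audit-history/$stamp"
```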

## Example use cases

- Perform a one-off security sweep before public release and produce JSON for ticket generation
- Automate dependency health checks in CI to block builds with critical CVEs
- Audit accessibility on static HTML bundles to locate missing alt attributes, lang tags, and heading issues
- Run bundle size and file size scans to find heavy assets and large dependencies
- Integrate with a parallel-run workflow to compare health across monorepo packages

## FAQ

**Where are the audit results saved?**

Results are written as JSON files to `<project>/.audit-reports/`, one per domain, and a summary is printed to the console.

**Can I run only one domain of checks?**

Yes. Use the `--domain` flag with `security`, `deps`, `perf`, or `a11y` to run a targeted audit.

**Does the script install tools automatically?**

Yes. The script attempts to install missing tools in the sandbox (via `sudo pip3 install`, `npx`, etc.) before running each analyzer.