This skill runs real quality analysis tools against a codebase, producing actionable reports across security, dependencies, performance, and accessibility.
```bash
npx playbooks add skill lukeslp/dreamer-skills --skill quality-audit
```
---
name: quality-audit
description: "Code quality auditing skill that runs real analysis tools in the sandbox. Use when: auditing a codebase for security vulnerabilities, checking dependency health, scanning for accessibility issues, measuring performance, running linters, or generating a comprehensive quality report for a project."
---
# Quality Audit
Run real quality analysis tools against a codebase and produce actionable reports. Unlike guideline-only approaches, this skill installs and executes actual tools (npm audit, bandit, eslint, ruff, etc.) in the Manus sandbox and reports real findings.
## Quick Start
### Full audit (all domains)
```bash
bash /home/ubuntu/skills/quality-audit/scripts/audit.sh /path/to/project
```
### Single domain
```bash
bash /home/ubuntu/skills/quality-audit/scripts/audit.sh /path/to/project --domain security
bash /home/ubuntu/skills/quality-audit/scripts/audit.sh /path/to/project --domain deps
bash /home/ubuntu/skills/quality-audit/scripts/audit.sh /path/to/project --domain perf
bash /home/ubuntu/skills/quality-audit/scripts/audit.sh /path/to/project --domain a11y
```
Reports are saved to `<project>/.audit-reports/` as JSON files for programmatic use.
## Audit Domains
| Domain | What It Checks | Tools Used |
|--------|---------------|------------|
| deps | Vulnerable dependencies, outdated packages | npm audit, pip-audit |
| security | Exposed secrets, dangerous patterns, code vulnerabilities | bandit, grep patterns, .gitignore checks |
| perf | Bundle size, large files, heavy dependencies | du, find, bundle analysis |
| a11y | Alt text, lang attributes, viewport, heading hierarchy | HTML pattern scanning |
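The a11y domain works by pattern scanning rather than browser-based testing. As a rough sketch of that approach (a simplified single-line heuristic, not the script's exact implementation), here is a grep-based check for `<img>` tags that lack an `alt` attribute:

```shell
# Stand-in HTML file to scan; real scans run over the project's own files
tmp=$(mktemp -d)
cat > "$tmp/index.html" <<'EOF'
<html lang="en"><body>
<img src="logo.png">
<img src="icon.png" alt="icon">
</body></html>
EOF

# Simplified heuristic: count <img> lines that carry no alt attribute
missing_alt=$(grep '<img' "$tmp/index.html" | grep -vc 'alt=')
echo "images missing alt text: $missing_alt"
rm -rf "$tmp"
```

Pattern scans like this are fast and dependency-free, but they can miss multi-line tags; treat the results as a starting point for manual review.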
## Workflow
### 1. Detect Project Type
The audit script auto-detects the project type from manifest files:
| File Found | Detected Type | Tools Available |
|------------|--------------|-----------------|
| package.json | Node.js | npm audit, eslint, tsc |
| requirements.txt / pyproject.toml | Python | pip-audit, bandit, ruff, mypy |
| Cargo.toml | Rust | cargo audit, clippy |
| go.mod | Go | govulncheck |
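The detection step in the table above boils down to checking for manifest files in order. A minimal sketch of that logic (assumed; the real `audit.sh` may check more files or use a different order):

```shell
# Detect project type from manifest files (sketch of the assumed logic)
detect_project_type() {
  local dir="$1"
  if   [ -f "$dir/package.json" ]; then echo "node"
  elif [ -f "$dir/requirements.txt" ] || [ -f "$dir/pyproject.toml" ]; then echo "python"
  elif [ -f "$dir/Cargo.toml" ];   then echo "rust"
  elif [ -f "$dir/go.mod" ];       then echo "go"
  else echo "unknown"
  fi
}

demo=$(mktemp -d)
touch "$demo/go.mod"
detect_project_type "$demo"   # prints: go
rm -rf "$demo"
```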
### 2. Run Audit
The script installs missing tools automatically (using `sudo pip3 install` or `npx`), runs each tool, and saves structured output to `.audit-reports/`.
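The install-on-demand behavior can be sketched as a small helper that only runs an installer when the tool is absent (assumed pattern; the real script's installer commands and error handling may differ):

```shell
# Run the installer only if the tool is not already on PATH (sketch)
ensure_tool() {
  local tool="$1"; shift
  if ! command -v "$tool" >/dev/null 2>&1; then
    echo "installing missing tool: $tool"
    "$@" || echo "WARN: could not install $tool" >&2
  fi
}

ensure_tool ls true                    # tool present: nothing happens
ensure_tool no-such-tool-xyz true      # tool missing: the installer ('true' here) runs
```

In the real script the installer argument would be something like `sudo pip3 install bandit`; `true` is used here only so the sketch runs anywhere.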
### 3. Interpret Results
After running the audit script, read the JSON reports for detailed findings. The console output provides a summary with color-coded PASS/WARN/FAIL indicators.
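As a sketch of programmatic consumption, assuming a report is a JSON array of findings with a `severity` field (the actual schema depends on the tool that produced the report):

```shell
# Stand-in report; real ones are written to <project>/.audit-reports/
reports=$(mktemp -d)
cat > "$reports/deps.json" <<'EOF'
[{"id": "example-1", "severity": "high"},
 {"id": "example-2", "severity": "low"}]
EOF

# Count high-severity findings with python3 (avoids a jq dependency)
high=$(python3 -c "import json,sys; print(sum(1 for f in json.load(open(sys.argv[1])) if f.get('severity')=='high'))" "$reports/deps.json")
echo "high-severity findings: $high"
rm -rf "$reports"
```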
### 4. Prioritize Fixes
Triage findings by severity:
| Priority | Category | Action |
|----------|----------|--------|
| Critical | Exposed secrets, known CVEs | Fix immediately |
| High | Security vulnerabilities, XSS risks | Fix before deploy |
| Medium | Outdated dependencies, missing a11y | Fix in next sprint |
| Low | Code style, minor warnings | Fix opportunistically |
## Advanced Tools
For deeper analysis beyond the basic audit script, see `references/tools.md` for tool-specific commands covering linting, type checking, test coverage, bundle analysis, license compliance, and more.
## Parallel Audits
For large projects or monorepos, combine with the `swarm` skill to audit multiple packages in parallel. Each subtask can run the audit script on a different package and return structured results for comparison.
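Outside the `swarm` skill, the fan-out itself is plain shell: one background job per package, then `wait`. A sketch of the pattern, with `run_audit` standing in for `bash audit.sh <package>`:

```shell
# Stand-in for invoking the audit script on one package
run_audit() {
  echo "audited $1"
}

# Launch one background job per package, then block until all finish
for pkg in pkg-a pkg-b pkg-c; do
  run_audit "$pkg" &
done
wait
echo "all audits complete"
```

Each package ends up with its own `.audit-reports/` directory, so the per-package JSON reports can be compared afterwards.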