This skill helps you review Python code by applying style guides, detecting issues, and providing structured feedback to speed up safe, maintainable merges.
To add this skill to your agents:

```shell
npx playbooks add skill agno-agi/agno --skill code-review
```
---
name: code-review
description: Code review assistance with linting, style checking, and best practices
license: Apache-2.0
metadata:
  version: "1.0.0"
  author: agno-team
  tags: ["quality", "review", "linting"]
---
# Code Review Skill
You are a code review assistant. When reviewing code, follow these steps:
## Review Process
1. **Check Style**: Reference the style guide using `get_skill_reference("code-review", "style-guide.md")`
2. **Run Style Check**: Use `get_skill_script("code-review", "check_style.py")` for automated style checking
3. **Look for Issues**: Identify potential bugs, security issues, and performance problems
4. **Provide Feedback**: Give structured feedback with severity levels
## Feedback Format
- **Critical**: Must fix before merge (security vulnerabilities, bugs that cause crashes)
- **Important**: Should fix, but not blocking (performance issues, code smells)
- **Suggestion**: Nice to have improvements (naming, documentation, minor refactoring)
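The three severity levels lend themselves to a structured report. A minimal sketch of one way to collect and render findings (the `Finding` class and `format_report` helper are illustrative, not part of the skill):

```python
from dataclasses import dataclass

# Severity levels, ordered from most to least urgent.
SEVERITIES = ["Critical", "Important", "Suggestion"]

@dataclass
class Finding:
    severity: str   # one of SEVERITIES
    location: str   # e.g. "app.py:42"
    message: str

def format_report(findings):
    """Group findings by severity and render a markdown-style report."""
    lines = []
    for severity in SEVERITIES:
        matched = [f for f in findings if f.severity == severity]
        if not matched:
            continue
        lines.append(f"**{severity}**")
        for f in matched:
            lines.append(f"- {f.location}: {f.message}")
    return "\n".join(lines)

report = format_report([
    Finding("Suggestion", "utils.py:10", "Rename `tmp` to something descriptive"),
    Finding("Critical", "auth.py:3", "Hardcoded API key"),
])
print(report)
```

Grouping by severity keeps blocking issues at the top of the report, so reviewers see merge blockers before style nits.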
## Review Checklist
- [ ] Code follows naming conventions
- [ ] No hardcoded secrets or credentials
- [ ] Error handling is appropriate
- [ ] Functions are not too long (< 50 lines)
- [ ] No obvious security vulnerabilities
- [ ] Tests are included for new functionality
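Several checklist items can be checked mechanically. A sketch of the function-length rule using Python's `ast` module (the `long_functions` helper is illustrative; only the 50-line threshold comes from the checklist above):

```python
import ast

MAX_FUNCTION_LINES = 50  # threshold from the checklist above

def long_functions(source):
    """Return (name, length) pairs for functions exceeding the limit."""
    tree = ast.parse(source)
    offenders = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno is available on parsed nodes in Python 3.8+.
            length = node.end_lineno - node.lineno + 1
            if length > MAX_FUNCTION_LINES:
                offenders.append((node.name, length))
    return offenders

# A 2-line function passes; a 61-line function is flagged.
sample = "def tiny():\n    return 1\n\ndef big():\n" + "    x = 0\n" * 60
print(long_functions(sample))
```

Parsing with `ast` rather than counting raw lines keeps the check robust to blank lines between functions and to nested definitions.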
This skill provides code review assistance focused on linting, style checking, and best practices for Python projects. It combines automated style checks with manual inspection to identify bugs, security issues, and maintainability problems. Feedback is delivered in a structured format with clear severity levels to guide remediation.
The skill first validates code against a project style guide and runs automated style and lint checks to surface formatting and convention violations. It then inspects code for potential bugs, security vulnerabilities, error-handling gaps, performance traps, and test coverage omissions. Results are summarized with severity labels (Critical, Important, Suggestion) and actionable recommendations for each finding.
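As an illustration of that first stage, here is a minimal sketch of the kind of convention check `check_style.py` might run; the actual script's rules live in the skill, and the 79-character limit here is an assumption:

```python
MAX_LINE_LENGTH = 79  # assumed limit; the project's style guide may differ

def check_line_lengths(source):
    """Return (line_number, length) for every line over the limit."""
    return [
        (i, len(line))
        for i, line in enumerate(source.splitlines(), start=1)
        if len(line) > MAX_LINE_LENGTH
    ]

code = "short = 1\n" + "x = '" + "a" * 100 + "'\n"
for lineno, length in check_line_lengths(code):
    print(f"line {lineno}: {length} chars (limit {MAX_LINE_LENGTH})")
```

Reporting line numbers alongside each violation lets the feedback step cite exact locations in its severity-labeled findings.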
**What do the severity labels mean?**
Critical findings must be fixed before merge (security vulnerabilities or crash-causing bugs). Important findings should be addressed soon but are not blocking (performance issues, code smells). Suggestions are optional improvements (naming, documentation).
**Does the skill replace human reviewers?**
No. It automates routine checks and catches common issues, but human reviewers are still needed for design, architecture, and contextual decisions.