
reviewing-code-skill

This skill provides code reviews to improve quality, detect bugs, and enforce best practices across Python projects.

npx playbooks add skill julianobarbosa/claude-code-skills --skill reviewing-code-skill

Run the command above to add this skill to your agents. The skill consists of a single file, SKILL.md:
---
name: reviewing-code
description: Get code review from Codex AI for implementation quality, bug detection, and best practices. Use when asked to review code, check for bugs, find security issues, or get feedback on implementation patterns.
allowed-tools: Read, Grep, Glob, mcp__codex__spawn_agent
---

# Code Review with Codex

Use `mcp__codex__spawn_agent` for code review.

## When to Use

- Review code quality and patterns
- Find potential bugs or edge cases
- Validate against best practices
- Check for security issues

## Usage

```json
{
  "prompt": "Review this code for [quality/bugs/security]: [code or file]"
}
```

## Prompt Examples

- "Review for code quality and Go best practices: [code]"
- "Analyze for security vulnerabilities: [code]"
- "Review for performance issues: [code]"
- "Does this follow idiomatic patterns? [code]"

Overview

This skill provides code review from Codex AI focused on implementation quality, bug detection, and best practices. It evaluates Python code for correctness, security, performance, and style, and returns actionable suggestions. Use it to get concise, prioritized feedback you can apply directly to your codebase.

How this skill works

Send a prompt containing the code or file and specify the review focus (quality, bugs, security, performance, or idiomatic style). The agent inspects control flow, edge cases, API usage, error handling, and common vulnerability patterns. It highlights issues, explains why they matter, and suggests concrete fixes or alternatives.
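For example, a review request focused on bugs might submit a snippet like the one below. The code and names here are hypothetical, chosen to show the kind of issue such a review flags: an unhandled edge case and a bare except clause that hides it.

```python
# Hypothetical snippet submitted with the focus "bugs/edge cases".
# A review would flag two related problems:
#   1. average([]) raises ZeroDivisionError -- the empty-input edge case.
#   2. The bare except swallows that error (and every other one).
def average(values):
    try:
        return sum(values) / len(values)  # fails when values is empty
    except:
        return 0  # masks the real failure


# The suggested fix makes the edge case explicit instead of hiding it:
def average_fixed(values):
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)
```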

When to use it

  • Before merging a change that affects core logic or security
  • When you want an automated second opinion on bug risks and edge cases
  • To validate adherence to language idioms and style conventions
  • When assessing performance hotspots or inefficient algorithms
  • To find and prioritize security vulnerabilities in new or legacy code

Best practices

  • Include the specific focus in the prompt (e.g., 'security', 'performance')
  • Provide the smallest reproducible code snippet or relevant files for targeted feedback (see the sketch after this list)
  • Mention runtime constraints, dependencies, and expected behavior to reduce false positives
  • Request prioritized fixes and code examples for the highest-impact issues
  • Run suggested fixes in tests or a sandbox before applying to production
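As a sketch of the second and third points, a well-scoped security submission pairs a minimal snippet with its stated context. Everything here (fetch_user, the users table) is hypothetical:

```python
# Hypothetical minimal submission with the focus "security".
# Stated context: Python 3.11, stdlib sqlite3, user_id arrives from an
# HTTP query string (i.e., it is attacker-controlled).
import sqlite3


def fetch_user(conn: sqlite3.Connection, user_id: str):
    # Focus question for the reviewer: is this query construction safe?
    query = f"SELECT * FROM users WHERE id = {user_id}"  # string interpolation
    return conn.execute(query).fetchone()
```

A focused review of this snippet would flag the f-string interpolation as a SQL injection risk and recommend a parameterized query instead: conn.execute("SELECT * FROM users WHERE id = ?", (user_id,)).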

Example use cases

  • Review a pull request for potential race conditions and resource leaks
  • Analyze a module for SQL injection, XSS, and insecure deserialization
  • Improve performance in a data-processing loop and recommend algorithmic changes (sketched after this list)
  • Check a new API client implementation for proper error handling and retries
  • Assess whether code follows idiomatic Python patterns and PEP8 style
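To make the performance case concrete, the sketch below shows a hypothetical before/after of the kind of algorithmic change such a review recommends: replacing a repeated linear membership test with a set lookup.

```python
# Hypothetical data-processing loop submitted with the focus "performance".
def filter_known(records, known_ids):
    # O(n * m): "in" on a list rescans it for every record.
    return [r for r in records if r["id"] in known_ids]


def filter_known_fast(records, known_ids):
    # O(n + m): build the set once; each membership test is then O(1).
    known = set(known_ids)
    return [r for r in records if r["id"] in known]
```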

FAQ

What input format works best?

Provide the relevant code snippet or file and a short instruction specifying the review focus. Small, focused examples get faster, more precise feedback.

Can it find security vulnerabilities reliably?

It catches many common issues and risky patterns, but should complement static analysis and human review for comprehensive security assurance.