
code-reviewer skill


This skill performs systematic code reviews to identify quality, security, and performance issues and provide actionable improvement recommendations.

npx playbooks add skill einverne/dotfiles --skill code-reviewer


Files (1): SKILL.md (2.4 KB)
---
name: code-reviewer
description: Performs systematic code reviews covering code quality, security, and performance. Use when the user asks to review, audit, or check code.
---

# Code Review Assistant

## Review Goals

Perform a comprehensive, systematic quality check of the code, identify potential issues, and provide improvement suggestions.

## Review Dimensions

Check in the following order:

### Functional Correctness
1. Logical completeness: does the feature work as intended?
2. Boundary conditions: are edge cases handled?
3. Error handling: are exceptions handled properly?
4. Data validation: is input validated?
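The validation and boundary checks above can be sketched as a before/after pair. The function names and the 10% discount are invented for illustration; the point is that the hardened version rejects malformed input and out-of-range indexes explicitly instead of failing mid-computation.

```javascript
// Before: no input validation; crashes on null and silently
// returns NaN or undefined-based results for bad indexes.
function getDiscountUnsafe(prices, index) {
  return prices[index] * 0.9;
}

// After: validates input and handles boundary conditions explicitly.
function getDiscount(prices, index) {
  if (!Array.isArray(prices)) {
    throw new TypeError('prices must be an array');
  }
  if (!Number.isInteger(index) || index < 0 || index >= prices.length) {
    throw new RangeError(`index ${index} is out of bounds`);
  }
  return prices[index] * 0.9;
}
```

A review would also ask for tests that exercise exactly these rejection paths, not only the happy path.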

### Code Quality
1. Naming conventions: are variable, function, and class names clear and descriptive?
2. Complexity: is any single function overly complex (cyclomatic complexity > 10 calls for refactoring)?
3. Duplication: is there repeated logic that could be extracted (DRY principle)?
4. Comments: is complex logic adequately commented?
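The duplication check above is the kind of finding a DRY refactor addresses. A minimal sketch, with invented field names, of repeated per-field validation collapsed into one helper:

```javascript
// Before: the same trim-and-check logic is repeated for every field.
function validateUserBefore(user) {
  const name = (user.name || '').trim();
  if (name.length === 0) return false;
  const email = (user.email || '').trim();
  if (email.length === 0) return false;
  return true;
}

// After: the shared logic lives in one place, so adding a field
// no longer means copying the check.
function isPresent(value) {
  return typeof value === 'string' && value.trim().length > 0;
}

function validateUser(user) {
  return ['name', 'email'].every((field) => isPresent(user[field]));
}
```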

### Performance
1. Algorithmic efficiency: are time and space complexity reasonable?
2. Resource management: are memory, file, and network resources released correctly?
3. Database queries: are there N+1 query problems?
4. Caching: is caching used appropriately?
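The N+1 check above can be made concrete with an in-memory stand-in for a database, so the difference in query counts is observable. The `fakeDb` object, its methods, and the data are all invented for illustration:

```javascript
// Counter lets us observe how many "queries" each approach issues.
let queryCount = 0;
const posts = { 1: [{ title: 'a' }], 2: [{ title: 'b' }] };

const fakeDb = {
  postsByUser(userId) {        // one query per call
    queryCount += 1;
    return posts[userId] || [];
  },
  postsByUsers(userIds) {      // one query for the whole batch
    queryCount += 1;
    return userIds.flatMap((id) => posts[id] || []);
  },
};

// N+1 pattern: one query per user, so N queries for N users.
function loadNaive(userIds) {
  return userIds.map((id) => fakeDb.postsByUser(id));
}

// Batched: a single query covering all users.
function loadBatched(userIds) {
  return fakeDb.postsByUsers(userIds);
}
```

With two users, `loadNaive` issues two queries where `loadBatched` issues one; against a real database the gap grows linearly with N.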

### Security
1. Input validation: is user input sanitized?
2. SQL injection: do database operations use parameterized queries?
3. XSS protection: is front-end output escaped?
4. Authentication and authorization: are permission checks complete?
5. Sensitive data: are keys, passwords, or other secrets exposed?
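The XSS item in the checklist above comes down to escaping untrusted text before it reaches markup. A minimal sketch (the templates are invented; real projects would usually rely on their framework's auto-escaping):

```javascript
// Escape the five characters that are significant in HTML.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}

// Before: untrusted input flows into HTML unescaped.
const unsafeGreeting = (name) => `<p>Hello, ${name}</p>`;

// After: the same template with escaping applied.
const safeGreeting = (name) => `<p>Hello, ${escapeHtml(name)}</p>`;
```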

### Maintainability
1. Modularity: is the code organized clearly?
2. Single responsibility: does each module have a well-defined purpose?
3. Dependency management: are dependencies reasonable?
4. Test coverage: are there enough unit tests?

## Output Format

Report findings grouped by severity:

### Critical (must fix)
Blocking issues that must be resolved before merging

### Major (strongly recommended)
Issues that significantly affect quality or security and should be fixed soon

### Minor (suggested)
Improvements that do not block merging

### Optimizations
Performance or readability suggestions

Every finding must include:
- File path and line number
- Problem description
- Impact analysis
- A suggested fix or example code

## Example Output

````
## Review Results

### Critical (1 issue)

**SQL injection risk** - src/api/user.js:45
Problem: user input is concatenated directly into the SQL query
Risk: an attacker can inject malicious SQL
Fix: use a parameterized query
```javascript
// before
const query = `SELECT * FROM users WHERE id = ${userId}`;

// after
const query = 'SELECT * FROM users WHERE id = ?';
db.execute(query, [userId]);
```

### Major (2 issues)

...
````

Overview

This skill performs systematic code reviews focused on correctness, quality, performance, security, and maintainability. It is tailored for repositories like dotfiles and Python projects but applies general best practices across languages and configs. Reviews return actionable findings prioritized by severity.

How this skill works

I inspect code in a fixed order: functional correctness, code quality, performance, security, and maintainability. For each issue I report file path and line context, describe the impact, and provide concrete remediation or example fixes. Output is organized by severity (Critical, Major, Minor, Optimization) to guide triage and fixes.

When to use it

  • When requesting a full repository or pull-request review
  • Before merging changes to ensure no regressions or secret leaks
  • When hard-to-debug runtime or performance issues appear
  • During security audits for injection or credential exposure
  • When improving test coverage or refactoring critical modules

Best practices

  • Run reviews against a single branch or PR to keep context clear
  • Include minimal reproducer or test steps for functional issues
  • Attach config files (tmux, zsh, vimrc) when reviewing dotfiles to validate expected behavior
  • Prioritize Critical issues that block merges, then address Major items
  • Provide small, testable patches or code snippets with each suggestion

Example use cases

  • Audit dotfiles for leaked API keys, shell-history leaks, or unsafe zsh/vim plugin configs
  • Review Python scripts for exception handling, input validation, and cyclomatic complexity
  • Detect N+1 style issues in config-driven scripts that query services or APIs
  • Suggest performance improvements for startup scripts and plugin managers (zinit, vundle)
  • Validate tmux/vim/zsh config semantics and recommend portability changes

FAQ

What format do review reports use?

Reports are grouped by severity and include file path with line range, a clear description, impact analysis, and a concrete fix or code example.

Can you run automated linters or tests?

I can recommend linters and test commands to run and interpret their output, but I don’t execute toolchains. Share lint/test output for targeted advice.