
code-review-checklist skill

/.agent/skills/code-review-checklist

This skill helps you perform comprehensive code-review checks covering correctness, security, performance, and quality, to keep TypeScript code robust.

npx playbooks add skill vudovn/antigravity-kit --skill code-review-checklist

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
---
name: code-review-checklist
description: Code review guidelines covering code quality, security, and best practices.
allowed-tools: Read, Glob, Grep
---

# Code Review Checklist

## Quick Review Checklist

### Correctness
- [ ] Code does what it's supposed to do
- [ ] Edge cases handled
- [ ] Error handling in place (see the sketch after this list)
- [ ] No obvious bugs
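
A minimal sketch of what these checks look for in practice; `fetchUser` and `UserRecord` are hypothetical names used only for illustration:

```typescript
interface UserRecord {
  id: string;
  email: string;
}

// Hypothetical data-access call; stands in for whatever your codebase uses.
declare function fetchUser(id: string): Promise<UserRecord | undefined>;

async function getUserEmail(id: string): Promise<string | null> {
  // Edge case: reject blank ids before doing any work
  if (!id.trim()) return null;

  try {
    const user = await fetchUser(id);
    // Edge case: the user may simply not exist
    return user?.email ?? null;
  } catch (err) {
    // Error handling: surface the failure with context instead of swallowing it
    console.error(`getUserEmail failed for id=${id}`, err);
    return null;
  }
}
```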

### Security
- [ ] Input validated and sanitized
- [ ] No SQL/NoSQL injection vulnerabilities (see the sketch after this list)
- [ ] No XSS or CSRF vulnerabilities
- [ ] No hardcoded secrets or sensitive credentials
- [ ] **AI-Specific:** Protection against Prompt Injection (if applicable)
- [ ] **AI-Specific:** Outputs are sanitized before being used in critical sinks
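
For the injection items, the main pattern to check is parameterization. A minimal sketch: `db.query` is a placeholder for your driver's API, and the `$1` placeholder style assumes a Postgres-like driver:

```typescript
declare const db: { query(sql: string, params?: unknown[]): Promise<unknown[]> };
declare const name: string;

// āŒ User input interpolated into SQL is injectable:
// await db.query(`SELECT * FROM users WHERE name = '${name}'`);

// āœ… Parameterized query: input never becomes SQL text
await db.query("SELECT * FROM users WHERE name = $1", [name]);
```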

### Performance
- [ ] No N+1 queries (see the sketch after this list)
- [ ] No unnecessary loops
- [ ] Appropriate caching
- [ ] Bundle size impact considered
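
The N+1 check is easiest to show by example. A sketch under the same assumptions as above (`db.query` is a placeholder; `ANY($1)` assumes Postgres-style parameters):

```typescript
declare const db: { query(sql: string, params?: unknown[]): Promise<unknown[]> };
declare const posts: Array<{ id: string; authorId: string }>;

// āŒ N+1: one author query per post
for (const post of posts) {
  await db.query("SELECT * FROM users WHERE id = $1", [post.authorId]);
}

// āœ… One batched query for all distinct authors
const authorIds = [...new Set(posts.map((p) => p.authorId))];
await db.query("SELECT * FROM users WHERE id = ANY($1)", [authorIds]);
```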

### Code Quality
- [ ] Clear naming
- [ ] DRY - no duplicate code
- [ ] SOLID principles followed
- [ ] Appropriate abstraction level

### Testing
- [ ] Unit tests for new code
- [ ] Edge cases tested (see the sketch after this list)
- [ ] Tests readable and maintainable
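
A sketch of the edge-case coverage to ask for, in Vitest/Jest style, against the hypothetical `getUserEmail` from the correctness example:

```typescript
import { describe, expect, it } from "vitest";
import { getUserEmail } from "./users"; // hypothetical module path

describe("getUserEmail", () => {
  it("returns null for a blank id", async () => {
    await expect(getUserEmail("   ")).resolves.toBeNull();
  });

  it("returns null when the user does not exist", async () => {
    await expect(getUserEmail("missing-id")).resolves.toBeNull();
  });
});
```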

### Documentation
- [ ] Complex logic commented
- [ ] Public APIs documented (see the sketch after this list)
- [ ] README updated if needed
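
For public APIs, a TSDoc comment like this sketch is usually the baseline to ask for:

```typescript
/**
 * Looks up a user's email address.
 *
 * @param id - The user's unique identifier; blank ids are rejected.
 * @returns The email address, or null if the id is blank or no user exists.
 */
declare function getUserEmail(id: string): Promise<string | null>;
```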

## AI & LLM Review Patterns (2025)

### Logic & Hallucinations
- [ ] **Chain of Thought:** Does the logic follow a verifiable path?
- [ ] **Edge Cases:** Did the AI account for empty states, timeouts, and partial failures?
- [ ] **External State:** Is the code making safe assumptions about file systems or networks? (see the sketch below)
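
The external-state item is where AI-generated code most often over-assumes. A minimal sketch (Node 18+; `path` and `url` are hypothetical inputs):

```typescript
import { access } from "node:fs/promises";

// āŒ Assuming the file exists and the network always answers:
// const text = await fetch(url).then((r) => r.text());

// āœ… Verify external state and bound the call with a timeout
async function safeFetch(path: string, url: string): Promise<string> {
  await access(path); // fails fast if the assumed file is missing
  const res = await fetch(url, { signal: AbortSignal.timeout(5_000) });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`); // partial failure
  return res.text();
}
```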

### Prompt Engineering Review
```typescript
// āŒ Vague prompt in code
const response = await ai.generate(userInput);

// āœ… Structured & Safe prompt
const response = await ai.generate({
  system: "You are a specialized parser...",
  input: sanitize(userInput),
  schema: ResponseSchema
});
```

## Anti-Patterns to Flag

```typescript
// āŒ Magic numbers
if (status === 3) { ... }

// āœ… Named constants
if (status === Status.ACTIVE) { ... }

// āŒ Deep nesting
if (a) { if (b) { if (c) { ... } } }

// āœ… Early returns
if (!a) return;
if (!b) return;
if (!c) return;
// do work

// āŒ Long functions (100+ lines)
// āœ… Small, focused functions

// āŒ any type
const data: any = ...

// āœ… Proper types
const data: UserData = ...
```

## Review Comments Guide

```
// Blocking issues use šŸ”“
šŸ”“ BLOCKING: SQL injection vulnerability here

// Important suggestions use 🟔
🟔 SUGGESTION: Consider using useMemo for performance

// Minor nits use 🟢
🟢 NIT: Prefer const over let for variables that are never reassigned

// Questions use ā“
ā“ QUESTION: What happens if user is null here?
```

Overview

This skill provides a practical code review checklist focused on correctness, security, performance, code quality, testing, and documentation. It includes AI/LLM-specific review patterns and common anti-patterns to flag. The guidance is concise and actionable, so reviews stay fast and consistent across TypeScript and similar projects.

How this skill works

The skill inspects pull requests and code diffs against a short Quick Review Checklist (correctness, security, performance, tests, docs). It highlights AI/LLM concerns like prompt injection, hallucination risks, and unsafe output usage. It also calls out anti-patterns, suggests reviewer comment conventions, and recommends concrete fixes for blocking, important, and minor issues.

When to use it

  • During PR reviews to ensure baseline quality before merge
  • When auditing code for security and injection risks (including AI-specific threats)
  • When validating performance hotspots and unnecessary complexity
  • When adding or modifying business logic to verify edge cases and error handling
  • When introducing AI/LLM components to check prompts, schemas, and sanitization

Best practices

  • Verify correctness with clear unit tests and edge case coverage
  • Sanitize inputs and avoid hardcoded secrets; treat AI outputs as untrusted until validated
  • Prefer named constants, small focused functions, and early returns to reduce nesting
  • Ensure proper typing in TypeScript; avoid any and prefer explicit interfaces
  • Document public APIs, comment complex logic, and update READMEs when behavior changes

Example use cases

  • Review new feature PRs for edge cases, error paths, and tests
  • Audit code that integrates an LLM: check prompt structure, input sanitization, and output handling
  • Spot and replace anti-patterns like magic numbers, deep nesting, and long functions
  • Assess performance by looking for N+1 queries, redundant loops, and missing caching
  • Evaluate security posture: check for SQL/NoSQL injection, XSS/CSRF, and exposed credentials

FAQ

How do I mark the severity of a finding?

Use a simple convention: šŸ”“ BLOCKING for security/correctness issues, 🟔 SUGGESTION for important improvements, 🟢 NIT for minor style fixes, and ā“ QUESTION for clarifying questions.

What extra checks apply to AI/LLM code?

Validate prompts for structure, sanitize user inputs, use schemas for outputs, handle hallucinations by grounding with authoritative data, and avoid piping raw model output into critical sinks.