This skill helps you perform comprehensive code-review checks for correctness, security, performance, and quality, ensuring robust TypeScript code.
```
npx playbooks add skill vudovn/antigravity-kit --skill code-review-checklist
```
---
name: code-review-checklist
description: Code review guidelines covering code quality, security, and best practices.
allowed-tools: Read, Glob, Grep
---
# Code Review Checklist
## Quick Review Checklist
### Correctness
- [ ] Code does what it's supposed to do
- [ ] Edge cases handled
- [ ] Error handling in place
- [ ] No obvious bugs
### Security
- [ ] Input validated and sanitized
- [ ] No SQL/NoSQL injection vulnerabilities
- [ ] No XSS or CSRF vulnerabilities
- [ ] No hardcoded secrets or sensitive credentials
- [ ] **AI-Specific:** Protection against Prompt Injection (if applicable)
- [ ] **AI-Specific:** Outputs are sanitized before being used in critical sinks
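The validation items above can be sketched with a hand-rolled allow-list validator (all names here are illustrative assumptions, not part of this kit):

```typescript
// Hypothetical allow-list validator: reject anything outside the expected
// shape up front, rather than trying to strip dangerous characters later.
function parseUserId(raw: unknown): number {
  const n = Number(raw);
  if (!Number.isInteger(n) || n <= 0) {
    throw new Error(`Invalid user id: ${String(raw)}`);
  }
  return n;
}

// With a parameterized query API, pass values as parameters, never via
// string concatenation:
// db.query("SELECT * FROM users WHERE id = $1", [parseUserId(raw)]);
```

Allow-listing (accept only known-good shapes) is generally safer than deny-listing (trying to enumerate bad inputs).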
### Performance
- [ ] No N+1 queries
- [ ] No unnecessary loops
- [ ] Appropriate caching
- [ ] Bundle size impact considered
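The N+1 item above usually shows up as a per-item query inside a loop; the fix is to collect ids and fetch once. A minimal sketch (the `Post` shape and `db` call are assumptions for illustration):

```typescript
interface Post { id: number; authorId: number; }

// ❌ N+1: one query per post
// for (const post of posts) { await db.getAuthor(post.authorId); }

// ✅ Batch: collect unique ids, then issue a single query
function uniqueAuthorIds(posts: Post[]): number[] {
  return [...new Set(posts.map(p => p.authorId))];
}
// const authors = await db.getAuthors(uniqueAuthorIds(posts));
```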
### Code Quality
- [ ] Clear naming
- [ ] DRY - no duplicate code
- [ ] SOLID principles followed
- [ ] Appropriate abstraction level
### Testing
- [ ] Unit tests for new code
- [ ] Edge cases tested
- [ ] Tests readable and maintainable
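A readable edge-case test can be as small as a few assertions against boundary inputs (function name is an illustrative assumption):

```typescript
// A small pure function and the boundary cases a reviewer should expect
// to see covered: in range, below the floor, above the ceiling.
function clamp(n: number, lo: number, hi: number): number {
  return Math.min(hi, Math.max(lo, n));
}

console.assert(clamp(5, 0, 10) === 5);   // in range: unchanged
console.assert(clamp(-1, 0, 10) === 0);  // below floor: clamped up
console.assert(clamp(11, 0, 10) === 10); // above ceiling: clamped down
```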
### Documentation
- [ ] Complex logic commented
- [ ] Public APIs documented
- [ ] README updated if needed
## AI & LLM Review Patterns (2025)
### Logic & Hallucinations
- [ ] **Chain of Thought:** Does the logic follow a verifiable path?
- [ ] **Edge Cases:** Did the AI account for empty states, timeouts, and partial failures?
- [ ] **External State:** Is the code making safe assumptions about file systems or networks?
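When reviewing for timeouts and partial failures, one defensive pattern worth looking for is a timeout wrapper with an explicit fallback. A sketch, assuming hypothetical names:

```typescript
// Race a promise against a timeout; resolve with a caller-supplied
// fallback instead of hanging forever on a slow external dependency.
async function withTimeout<T>(p: Promise<T>, ms: number, fallback: T): Promise<T> {
  let timer: ReturnType<typeof setTimeout>;
  const timeout = new Promise<T>((resolve) => {
    timer = setTimeout(() => resolve(fallback), ms);
  });
  const result = await Promise.race([p, timeout]);
  clearTimeout(timer!); // harmless if the timer already fired
  return result;
}
```

Whether a silent fallback or a thrown error is appropriate depends on the call site; the reviewer's job is to check that the choice was made deliberately.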
### Prompt Engineering Review
```typescript
// ❌ Vague prompt in code
const response = await ai.generate(userInput);

// ✅ Structured & safe prompt
const response = await ai.generate({
  system: "You are a specialized parser...",
  input: sanitize(userInput),
  schema: ResponseSchema
});
```
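The `schema` check in the snippet above can be as simple as a type guard over the model's JSON output. A hand-rolled sketch (the `ParsedResponse` shape is an assumed example; a schema library would serve the same purpose):

```typescript
interface ParsedResponse { title: string; tags: string[]; }

// Hypothetical guard: never assume raw model output matches the
// expected shape — verify every field before using it downstream.
function isParsedResponse(x: unknown): x is ParsedResponse {
  if (typeof x !== "object" || x === null) return false;
  const o = x as Record<string, unknown>;
  return typeof o.title === "string"
    && Array.isArray(o.tags)
    && o.tags.every(t => typeof t === "string");
}
```

Rejecting malformed output at the boundary keeps hallucinated structure from reaching critical sinks.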
## Anti-Patterns to Flag
```typescript
// ❌ Magic numbers
if (status === 3) { ... }

// ✅ Named constants
if (status === Status.ACTIVE) { ... }

// ❌ Deep nesting
if (a) { if (b) { if (c) { ... } } }

// ✅ Early returns
if (!a) return;
if (!b) return;
if (!c) return;
// do work

// ❌ Long functions (100+ lines)
// ✅ Small, focused functions

// ❌ any type
const data: any = ...

// ✅ Proper types
const data: UserData = ...
```
## Review Comments Guide
```
// Blocking issues use 🔴
🔴 BLOCKING: SQL injection vulnerability here

// Important suggestions use 🟡
🟡 SUGGESTION: Consider using useMemo for performance

// Minor nits use 🟢
🟢 NIT: Prefer const over let for immutable variable

// Questions use ❓
❓ QUESTION: What happens if user is null here?
```
This skill provides a practical code review checklist focused on correctness, security, performance, code quality, testing, and documentation. It includes AI/LLM-specific review patterns and common anti-patterns to flag. The guidance is concise and actionable to speed up consistent reviews across TypeScript and similar projects.
The skill inspects pull requests and code diffs against a short Quick Review Checklist (correctness, security, performance, tests, docs). It highlights AI/LLM concerns like prompt injection, hallucination risks, and unsafe output usage. It also calls out anti-patterns, suggests reviewer comment conventions, and recommends concrete fixes for blocking, important, and minor issues.
How do I mark the severity of a finding?
Use a simple convention: 🔴 BLOCKING for security/correctness issues, 🟡 SUGGESTION for important improvements, 🟢 NIT for minor style fixes, and ❓ QUESTION for clarifying inquiries.
What extra checks apply to AI/LLM code?
Validate prompts for structure, sanitize user inputs, use schemas for outputs, handle hallucinations by grounding with authoritative data, and avoid piping raw model output into critical sinks.