
This skill helps enforce code quality in TypeScript projects by guiding reviews with a structured checklist for critical issues, quality, style, and linting.

npx playbooks add skill mastra-ai/mastra --skill code-standards

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
1.8 KB
---
name: code-standards
description: Code quality standards and style guide for reviewing pull requests
version: 1.0.0
metadata:
  tags:
    - code-review
    - quality
    - style
---

# Code Standards Review

When reviewing code, follow this structured process:

## Step 1: Critical Issues

Check for issues that MUST be fixed before merging; a short sketch follows the list:

- Logic bugs and incorrect behavior
- Missing error handling for failure cases
- Race conditions or concurrency issues
- Unhandled edge cases (null, undefined, empty arrays, boundary values)
- Breaking API changes without migration path
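
For example, a reviewer applying this step might flag something like the sketch below. The endpoint and helper names are hypothetical, not part of the skill:

```typescript
// Flagged: no error handling and an unhandled "user not found" case.
async function getUserName(id: string): Promise<string> {
  const res = await fetch(`/api/users/${id}`); // network failures are unhandled
  const user = await res.json();               // non-2xx bodies may not parse
  return user.name;                            // undefined if the user is missing
}

// Reviewed version: the failure path and the missing-user edge case are handled.
async function getUserNameSafe(id: string): Promise<string | undefined> {
  try {
    const res = await fetch(`/api/users/${id}`);
    if (!res.ok) return undefined; // e.g. 404: no such user
    const user: { name?: string } = await res.json();
    return user.name;
  } catch {
    return undefined; // network or parse failure
  }
}
```

In a review, the first function would appear under Critical Issues with its file path and line numbers.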

## Step 2: Code Quality

Evaluate overall code quality (see the sketch after this list):

- Functions should do one thing and be reasonably sized (< 50 lines preferred)
- Avoid code duplication — look for repeated patterns that should be abstracted
- Use descriptive, meaningful names for variables, functions, and types
- Prefer explicit types over `any` in TypeScript
- Ensure proper use of async/await (no floating promises, proper error propagation)
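
As a hedged illustration of the `any` and floating-promise checks, with `audit` and `saveUser` as made-up names for this sketch:

```typescript
interface User {
  id: string;
  name: string;
}

// Hypothetical helper, included only to keep the sketch self-contained.
async function audit(event: string): Promise<void> {
  // write `event` to an audit log
  void event;
}

// Flagged: `data` is `any`, and audit() is a floating promise, so a rejection
// would be silently dropped instead of reaching the caller.
function saveUserLoose(data: any) {
  audit(`save:${data.id}`);
  return data;
}

// Preferred: explicit types, and the promise is awaited so errors propagate.
async function saveUser(data: User): Promise<User> {
  await audit(`save:${data.id}`);
  return data;
}
```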

## Step 3: Style Guide Conformance

Check against the style guide in `references/style-guide.md`; a comment-quality example follows the list:

- Naming conventions
- Code organization and import ordering
- Comment quality (explain "why", not "what")
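
The authoritative rules live in `references/style-guide.md` and are not reproduced here. As a generic illustration of the comment-quality point only, with a made-up file name and helper:

```typescript
import { readFileSync } from "node:fs";

// Minimal loader so the snippet is self-contained.
function loadCsv(path: string): string[] {
  return readFileSync(path, "utf8").split("\n");
}

const rows = loadCsv("report.csv"); // file name is illustrative

// A "what" comment such as "remove the first element" would be flagged here,
// because it restates the code. This "why" comment explains the intent:
// the export tool always emits a header line, which is not report data.
rows.shift();
```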

## Step 4: Linting Flags

Flag these patterns, as illustrated in the sketch after this list:

- `var` usage (should be `const` or `let`)
- Leftover `console.log` or `debugger` statements
- Commented-out code blocks
- Magic numbers without named constants
- `TODO` or `FIXME` comments without issue references
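
A small sketch that trips several of these flags at once; the values and the issue number are invented for illustration:

```typescript
// Flagged: `var`, a magic number, a leftover console.log, and an untracked TODO.
var timeout = 86400; // why this value, and in what unit?
console.log("debug: timeout set");
// TODO: handle zero-length input

// Preferred: const, a named constant, no debug logging, and a TODO tied to an issue.
const SECONDS_PER_DAY = 86_400;
const sessionTimeoutSeconds = SECONDS_PER_DAY;
// TODO(#123): handle zero-length input (issue number is illustrative)
```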

## Output Format

Structure your feedback as follows (a sketch of this structure appears after the list):

1. **Summary**: 1-2 sentence overview of the changes and overall quality
2. **Critical Issues**: Must-fix problems with file path and line numbers
3. **Suggestions**: Improvements that would make the code better
4. **Positive Notes**: Good patterns and decisions worth acknowledging
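
For automated or CI-assisted reviews, the same four-part format can be represented as data. The type and field names below are assumptions for illustration, not part of the skill:

```typescript
// Hypothetical shape for structured review output.
interface ReviewIssue {
  file: string;    // e.g. "src/user.ts"
  line: number;    // line number in the diff
  message: string; // what must change and why
}

interface ReviewFeedback {
  summary: string;               // 1-2 sentence overview
  criticalIssues: ReviewIssue[]; // must-fix problems
  suggestions: string[];         // improvements worth making
  positiveNotes: string[];       // good patterns to acknowledge
}

// Example instance following the four-part format (contents are made up).
const feedback: ReviewFeedback = {
  summary: "Small refactor of the user service; one unhandled error path.",
  criticalIssues: [
    { file: "src/user.ts", line: 42, message: "fetch result is not checked for res.ok" },
  ],
  suggestions: ["Extract the retry logic shared by getUser and getOrder."],
  positiveNotes: ["Explicit return types on all exported functions."],
};
```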

Overview

This skill provides a clear, actionable code quality and style guide for reviewing pull requests in TypeScript projects. It focuses on catching critical bugs, enforcing maintainable code patterns, and ensuring consistency with a project's style guide. The goal is faster, safer merges and more predictable codebases.

How this skill works

The skill inspects PR diffs for critical issues first (logic bugs, missing error handling, race conditions, and breaking API changes). It then evaluates code quality rules like single-responsibility functions, duplication, and type hygiene. Finally, it checks style-guide conformance and common linting flags, and formats feedback into a four-part review summary.

When to use it

  • During pull request reviews for TypeScript and JavaScript projects
  • When enforcing team-wide style and safety standards before merging
  • As part of CI-assisted code review to catch regressions early
  • When onboarding new contributors to keep code consistent
  • When preparing releases to ensure no breaking changes slip through

Best practices

  • Prioritize critical fixes: block merges for logic, concurrency, and error-handling issues
  • Keep functions focused and under ~50 lines; extract repeated logic into helpers
  • Prefer explicit TypeScript types over any; annotate public APIs
  • Use descriptive names for variables, functions, and types
  • Remove console.log/debugger and commented-out code before merging

Example use cases

  • Automated review assistant that scans PR diffs and lists must-fix issues
  • Manual reviewer checklist to standardize engineering reviews across the team
  • CI job that fails on lint flags, TODOs without issues, or unsafe API changes
  • Onboarding checklist for new contributors to learn team naming and import ordering

FAQ

What constitutes a critical issue?

Critical issues are bugs or omissions that would cause incorrect behavior, uncaught errors, race conditions, unhandled edge cases, or breaking API changes without a migration path.

How should feedback be structured?

Provide a concise Summary, list Critical Issues with file paths and line numbers, offer Suggestions for improvements, and end with Positive Notes to highlight good patterns.