
analyzing-projects skill

/skills/analyzing-projects

This skill helps you rapidly onboard to a new codebase by analyzing structure, tech stack, patterns, and conventions.

npx playbooks add skill cloudai-x/claude-workflow-v2 --skill analyzing-projects

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
3.5 KB
---
name: analyzing-projects
description: Analyzes codebases to understand structure, tech stack, patterns, and conventions. Use when onboarding to a new project, exploring unfamiliar code, or when asked "how does this work?" or "what's the architecture?"
---

# Analyzing Projects

## Project Analysis Workflow

Copy this checklist and track progress:

```
Project Analysis Progress:
- [ ] Step 1: Quick overview (README, root files)
- [ ] Step 2: Detect tech stack
- [ ] Step 3: Map project structure
- [ ] Step 4: Identify key patterns
- [ ] Step 5: Find development workflow
- [ ] Step 6: Generate summary report
```

## Step 1: Quick Overview

```bash
# Check for common project markers
ls -la
cat README.md 2>/dev/null | head -50
```

## Step 2: Tech Stack Detection

### Package Managers & Dependencies
- `package.json` → Node.js/JavaScript/TypeScript
- `requirements.txt` / `pyproject.toml` / `setup.py` → Python
- `go.mod` → Go
- `Cargo.toml` → Rust
- `pom.xml` / `build.gradle` → Java
- `Gemfile` → Ruby

### Frameworks (from dependencies)
- React, Vue, Angular, Next.js, Nuxt
- Express, FastAPI, Django, Flask, Rails
- Spring Boot, Gin, Echo

### Infrastructure
- `Dockerfile`, `docker-compose.yml` → Containerized
- `kubernetes/`, `k8s/` → Kubernetes
- `terraform/`, `.tf` files → IaC
- `serverless.yml` → Serverless Framework
- `.github/workflows/` → GitHub Actions
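
A minimal shell sketch of this detection pass is shown below. The marker list mirrors the tables above, and the dependency grep is only a Node.js example, so adapt it per ecosystem:

```bash
# Probe for the manifest and infrastructure markers listed above
for marker in package.json requirements.txt pyproject.toml setup.py \
              go.mod Cargo.toml pom.xml build.gradle Gemfile \
              Dockerfile docker-compose.yml serverless.yml; do
  [ -e "$marker" ] && echo "found: $marker"
done

# Peek at declared dependencies to spot frameworks (Node.js example; extend per ecosystem)
[ -f package.json ] && grep -E '"(react|vue|@angular/core|next|express)"' package.json
```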

## Step 3: Project Structure Analysis

Present as a tree with annotations:
```
project/
├── src/              # Source code
│   ├── components/   # UI components (React/Vue)
│   ├── services/     # Business logic
│   ├── models/       # Data models
│   └── utils/        # Shared utilities
├── tests/            # Test files
├── docs/             # Documentation
└── config/           # Configuration
```
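
To produce the raw skeleton for annotation, `tree` works well when installed; a sketch with a `find` fallback:

```bash
# Directory skeleton two levels deep; prefer tree, fall back to find
tree -L 2 -d 2>/dev/null || find . -maxdepth 2 -type d -not -path '*/.git*' | sort
```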

## Step 4: Key Patterns Identification

Look for and report:
- **Architecture**: Monolith, Microservices, Serverless, Monorepo
- **API Style**: REST, GraphQL, gRPC, tRPC
- **State Management**: Redux, Zustand, MobX, Context
- **Database**: SQL, NoSQL, ORM used
- **Authentication**: JWT, OAuth, Sessions
- **Testing**: Jest, Pytest, Go test, etc.
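
A hedged sketch of how some of these signals can be surfaced in a Node.js project (the dependency names are illustrative, not a complete list):

```bash
# API style and state-management hints from declared dependencies
[ -f package.json ] && grep -E '"(graphql|@trpc/server|redux|zustand|mobx)"' package.json

# ORM / database client hints
[ -f package.json ] && grep -E '"(prisma|typeorm|sequelize|mongoose|pg)"' package.json

# Test runner hints across ecosystems
ls jest.config.* pytest.ini conftest.py 2>/dev/null
[ -d tests ] && echo "tests/ directory present"
```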

## Step 5: Development Workflow

Check for:
- `.eslintrc`, `.prettierrc` → Linting/Formatting
- `.husky/` → Git hooks
- `Makefile` → Build commands
- `scripts` in `package.json` → npm scripts
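
A short sketch for this pass (the npm-scripts listing assumes Node.js is installed):

```bash
# Lint/format configs and git hooks
ls .eslintrc* .prettierrc* 2>/dev/null
[ -d .husky ] && echo "husky git hooks configured"

# Build targets from a Makefile, if present
[ -f Makefile ] && grep -E '^[A-Za-z0-9_.-]+:' Makefile | head -10

# npm scripts declared in package.json
[ -f package.json ] && node -e "console.log(Object.keys(require('./package.json').scripts || {}).join('\n'))"
```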

## Step 6: Output Format

Generate a summary using this template:

```markdown
# Project: [Name]

## Overview
[1-2 sentence description]

## Tech Stack
| Category | Technology |
|----------|------------|
| Language | TypeScript |
| Framework | Next.js 14 |
| Database | PostgreSQL |
| ...      | ...        |

## Architecture
[Description with simple ASCII diagram if helpful]

## Key Directories
- `src/` - [purpose]
- `lib/` - [purpose]

## Entry Points
- Main: `src/index.ts`
- API: `src/api/`
- Tests: `npm test`

## Conventions
- [Naming conventions]
- [File organization patterns]
- [Code style preferences]

## Quick Commands
| Action | Command |
|--------|---------|
| Install | `npm install` |
| Dev | `npm run dev` |
| Test | `npm test` |
| Build | `npm run build` |
```

## Analysis Validation

After completing analysis, verify:

```
Analysis Validation:
- [ ] All major directories explained
- [ ] Tech stack accurately identified
- [ ] Entry points documented
- [ ] Development commands verified working
- [ ] No assumptions made without evidence
```

If any items cannot be verified, note them as "needs clarification" in the report.
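
To back the "commands verified working" item without running heavyweight builds, one option is to confirm the documented commands are at least defined; a sketch assuming an npm-based project:

```bash
# Confirm documented commands exist before marking them verified (assumes Node.js)
[ -f package.json ] && node -e "
  const scripts = require('./package.json').scripts || {};
  for (const cmd of ['dev', 'test', 'build']) {
    console.log(cmd + ':', scripts[cmd] || 'MISSING - needs clarification');
  }
"
```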

Overview

This skill analyzes codebases to quickly surface structure, technology choices, conventions, and developer workflows. It helps engineers onboard faster, answer "how does this work?", and produce a concise project summary that teams can act on.

How this skill works

The analyzer scans root files and manifests to detect package managers, frameworks, and infra markers. It maps directory layout, locates entry points and tests, detects architectural patterns (monolith, microservices, serverless), and extracts development commands and tooling. The output is a structured summary that flags uncertain findings as "needs clarification."

When to use it

  • Onboarding to a new repository to get a fast orientation
  • Exploring unfamiliar code before making changes or reviews
  • Preparing design or migration proposals that depend on architecture
  • Answering questions like "what's the tech stack?" or "where is the API implemented?"
  • Auditing a project to document conventions and developer workflow

Best practices

  • Start with README and root manifests to avoid false assumptions
  • Verify detected tools by opening actual files (Dockerfile, pyproject, package.json)
  • Annotate the project tree with purposes for each directory
  • Mark unverifiable items explicitly as "needs clarification" in the summary
  • Surface commands (install, dev, test, build) and confirm they run when possible

Example use cases

  • Generate a one-page summary for a sprint kickoff with tech stack and entry points
  • Produce a checklist for a new hire to follow during repository onboarding
  • Create a migration plan by identifying frameworks, ORMs, and deployment infra
  • Audit repos to standardize linting, testing, and CI configurations
  • Quickly prepare context for code review by listing key directories and patterns

FAQ

What if the analyzer can't find an entry point?

It marks entry points as "needs clarification" and lists likely candidates (main scripts, server files, or package.json scripts) for manual verification.

How accurate is tech detection?

Detection is based on manifest files and common markers (package.json, pyproject.toml, Dockerfile). If manifests are missing or heavily customized, validate the results manually; uncertain findings are flagged as "needs clarification."