
dependency-analyzer skill

/dependency-analyzer

This skill analyzes Python imports and project dependencies to reveal unused imports, generate requirements.txt files, and map the project's dependency structure.

npx playbooks add skill dkyazzentwatwa/chatgpt-skills --skill dependency-analyzer

Review the files below or copy the command above to add this skill to your agents.

Files (3)
SKILL.md (4.1 KB)
---
name: dependency-analyzer
description: Analyze Python imports and dependencies. Use to understand project structure, find unused imports, or generate requirements.txt files.
---

# Dependency Analyzer

Analyze Python file imports and project dependencies.

## Features

- **Import Extraction**: List all imports from Python files
- **Dependency Graph**: Visualize import relationships
- **Unused Detection**: Find unused imports
- **Requirements Generation**: Auto-generate requirements.txt
- **Standard Library Detection**: Separate stdlib from third-party
- **Circular Import Detection**: Find circular dependencies

## Quick Start

```python
from dependency_analyzer import DependencyAnalyzer

analyzer = DependencyAnalyzer()

# Analyze single file
imports = analyzer.analyze_file("main.py")
print(imports)

# Analyze project
result = analyzer.analyze_project("./src")
print(result['third_party'])  # External dependencies
```

## CLI Usage

```bash
# Analyze single file
python dependency_analyzer.py --file main.py

# Analyze project directory
python dependency_analyzer.py --dir ./src

# Generate requirements.txt
python dependency_analyzer.py --dir ./src --requirements --output requirements.txt

# Find unused imports
python dependency_analyzer.py --file main.py --unused

# Show dependency graph
python dependency_analyzer.py --dir ./src --graph

# JSON output
python dependency_analyzer.py --dir ./src --json
```

## API Reference

### DependencyAnalyzer Class

```python
class DependencyAnalyzer:
    def __init__(self)

    # Analysis
    def analyze_file(self, filepath: str) -> dict
    def analyze_project(self, directory: str) -> dict

    # Detection
    def find_unused_imports(self, filepath: str) -> list
    def find_circular_imports(self, directory: str) -> list

    # Generation
    def generate_requirements(self, directory: str) -> list
    def save_requirements(self, deps: list, output: str)

    # Classification
    def is_stdlib(self, module: str) -> bool
    def is_local(self, module: str, directory: str) -> bool
```

## Output Format

### File Analysis
```python
{
    "file": "main.py",
    "imports": [
        {"module": "os", "type": "stdlib", "line": 1},
        {"module": "json", "type": "stdlib", "line": 2},
        {"module": "requests", "type": "third_party", "line": 3},
        {"module": "utils.helpers", "type": "local", "line": 4}
    ],
    "from_imports": [
        {"module": "typing", "names": ["Dict", "List"], "type": "stdlib"},
        {"module": "flask", "names": ["Flask", "request"], "type": "third_party"}
    ]
}
```

### Project Analysis
```python
{
    "directory": "./src",
    "files_analyzed": 15,
    "stdlib": ["os", "sys", "json", "typing", ...],
    "third_party": ["requests", "flask", "pandas", ...],
    "local": ["utils", "models", "config", ...],
    "by_file": {
        "main.py": {...},
        "app.py": {...}
    }
}
```

## Example Workflows

### Generate Requirements
```python
analyzer = DependencyAnalyzer()
deps = analyzer.generate_requirements("./src")
analyzer.save_requirements(deps, "requirements.txt")
```
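
If you also want version pins, one option is to look up installed versions with `importlib.metadata`. This is a sketch, not part of the skill's API: it assumes `deps` contains import names that are installed in the current environment and that each import name matches its distribution name.

```python
from importlib import metadata
from dependency_analyzer import DependencyAnalyzer

analyzer = DependencyAnalyzer()
pinned = []
for dep in analyzer.generate_requirements("./src"):
    try:
        # Pin to the version installed in the current environment
        pinned.append(f"{dep}=={metadata.version(dep)}")
    except metadata.PackageNotFoundError:
        # The import name may differ from the PyPI distribution name (e.g. yaml vs PyYAML)
        pinned.append(dep)

with open("requirements.txt", "w") as f:
    f.write("\n".join(pinned) + "\n")
```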

### Find Unused Imports
```python
analyzer = DependencyAnalyzer()
unused = analyzer.find_unused_imports("main.py")
for imp in unused:
    print(f"Line {imp['line']}: {imp['module']} is unused")
```
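
Under the hood, unused-import detection usually boils down to comparing the names bound by import statements against the names actually referenced in the file. A minimal sketch of that idea (not necessarily this skill's exact logic):

```python
import ast

def unused_imports(filepath: str) -> list:
    """Heuristic: imported names that are never referenced elsewhere in the file."""
    with open(filepath, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=filepath)

    imported = {}   # bound name -> line number
    used = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                # "import pkg.sub" binds "pkg"; "import numpy as np" binds "np"
                imported[(alias.asname or alias.name).split(".")[0]] = node.lineno
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                if alias.name == "*":
                    continue  # star imports can't be tracked name-by-name
                imported[alias.asname or alias.name] = node.lineno
        elif isinstance(node, ast.Name):
            used.add(node.id)

    return [name for name in imported if name not in used]
```

A heuristic like this misses names referenced only inside strings or `__all__`, which is why the best practices below recommend pairing unused-import removal with tests.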

### Analyze Project Structure
```python
analyzer = DependencyAnalyzer()
result = analyzer.analyze_project("./myproject")

print("Third-party dependencies:")
for dep in result['third_party']:
    print(f"  - {dep}")

print("\nLocal modules:")
for mod in result['local']:
    print(f"  - {mod}")
```

### Check for Circular Imports
```python
analyzer = DependencyAnalyzer()
circular = analyzer.find_circular_imports("./src")
if circular:
    print("Circular imports detected:")
    for cycle in circular:
        print(f"  {' -> '.join(cycle)}")
```

## Module Classification

| Type | Description | Example |
|------|-------------|---------|
| `stdlib` | Python standard library | `os`, `sys`, `json` |
| `third_party` | External packages | `requests`, `pandas` |
| `local` | Project modules | `utils.helpers` |
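
As a rough illustration of how such classification can be implemented (a sketch only, assuming Python 3.10+ for `sys.stdlib_module_names`; the skill's own heuristics may differ):

```python
import sys
from pathlib import Path

def classify_module(module: str, project_dir: str) -> str:
    """Classify an imported module as stdlib, local, or third_party (heuristic sketch)."""
    top_level = module.split(".")[0]
    if top_level in sys.stdlib_module_names:  # available in Python 3.10+
        return "stdlib"
    root = Path(project_dir)
    # Treat it as local if a matching module file or package directory exists in the project
    if (root / f"{top_level}.py").exists() or (root / top_level / "__init__.py").exists():
        return "local"
    return "third_party"
```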

## Dependencies

No external dependencies; the analyzer uses only the Python standard library (the `ast` module).

Overview

This skill analyzes Python imports and project dependencies to help you understand code structure and external requirements. It extracts imports, classifies modules as standard library, third-party, or local, and detects unused or circular imports. It can also generate a requirements.txt list for packaging or deployment.

How this skill works

The analyzer parses Python files with the ast module to extract import and from-import statements, recording line numbers and classifying each module. It aggregates results across a project to build a dependency graph, separate standard-library from third-party packages, locate unused imports, and detect circular import cycles. Results can be printed as JSON, shown as a dependency graph, or written out as a requirements.txt-style list.
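
To illustrate the extraction step, here is a minimal AST walk of the kind described above (a sketch, not the skill's actual code; the helper name is hypothetical and the type hints assume Python 3.9+):

```python
import ast

def extract_imports(filepath: str) -> list[dict]:
    """Collect import and from-import statements with their line numbers."""
    with open(filepath, encoding="utf-8") as f:
        tree = ast.parse(f.read(), filename=filepath)

    found = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                found.append({"module": alias.name, "line": node.lineno})
        elif isinstance(node, ast.ImportFrom):
            # node.module is None for relative imports such as "from . import x"
            found.append({
                "module": node.module or "",
                "names": [a.name for a in node.names],
                "line": node.lineno,
            })
    return found
```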

When to use it

  • Inventory dependencies before packaging or deploying a Python project
  • Find and remove unused imports to reduce maintenance and linting noise
  • Generate a requirements.txt when dependencies are undocumented
  • Detect circular imports causing runtime import errors
  • Understand module relationships when onboarding new contributors

Best practices

  • Run analysis on the full project directory to get accurate third-party and local classifications
  • Combine unused-import detection with automated tests to avoid removing used-but-dynamic imports
  • Review generated requirements against your setup.py, pyproject.toml, or Pipfile to confirm correct versions
  • Use the dependency graph output to plan refactors that reduce coupling
  • Exclude generated or vendored code directories to avoid false positives (one way to do this is sketched after this list)
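
Since no exclude option is documented, one way to skip vendored or generated directories is to walk the tree yourself and analyze files individually with the documented analyze_file method. The EXCLUDE set below is only an example:

```python
from pathlib import Path
from dependency_analyzer import DependencyAnalyzer

EXCLUDE = {".venv", "build", "vendor", "__pycache__"}  # example directories to skip

analyzer = DependencyAnalyzer()
results = {}
for path in Path("./src").rglob("*.py"):
    if any(part in EXCLUDE for part in path.parts):
        continue  # skip vendored/generated code to avoid false positives
    results[str(path)] = analyzer.analyze_file(str(path))
```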

Example use cases

  • Quickly generate requirements.txt from a source tree before creating a Docker image
  • Scan a legacy codebase to list third-party packages and identify which modules depend on them
  • Detect and report circular import cycles that crash at runtime
  • Run per-file checks in CI to fail builds when unused imports are introduced (see the sketch after this list)
  • Produce a by-file breakdown for architecture reviews or dependency audits
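
For the per-file CI check mentioned above, a minimal sketch using the documented find_unused_imports method (the file list and exit handling here are illustrative):

```python
import sys
from pathlib import Path
from dependency_analyzer import DependencyAnalyzer

analyzer = DependencyAnalyzer()
failures = 0
for path in Path("src").rglob("*.py"):
    for imp in analyzer.find_unused_imports(str(path)):
        print(f"{path}:{imp['line']}: unused import {imp['module']}")
        failures += 1

sys.exit(1 if failures else 0)  # a non-zero exit fails the CI job
```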

FAQ

Does this require external packages to run?

No. The analyzer relies on Python's standard library (primarily the ast module), so it has no additional runtime dependencies.

Will it find imports added dynamically at runtime?

No. The analyzer inspects static import statements only. Modules imported dynamically (via importlib, __import__, or exec) may not be detected and should be reviewed manually or covered by tests.
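
For example, a dependency loaded like this would not show up in a static import scan:

```python
import importlib

# A static scan sees only "importlib" here; the real dependency is hidden in a string.
plugin_name = "requests"
plugin = importlib.import_module(plugin_name)
```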