
database-design-patterns skill

/skills/database-design-patterns

This skill guides the design of scalable database schemas and the optimization of queries across SQL and NoSQL systems, improving performance and the reliability of data persistence layers.

npx playbooks add skill nickcrew/claude-cortex --skill database-design-patterns

Review the files below or copy the command above to add this skill to your agents.

Files (7)
SKILL.md
5.2 KB
---
name: database-design-patterns
description: Database schema design patterns and optimization strategies for relational and NoSQL databases. Use when designing database schemas, optimizing query performance, or implementing data persistence layers at scale.
---

# Database Design Patterns

Expert guidance for designing scalable database schemas, optimizing query performance, and implementing robust data persistence layers across relational and NoSQL databases.

## When to Use This Skill

- Designing database schemas for new applications
- Optimizing slow queries and database performance
- Choosing between normalization and denormalization strategies
- Implementing partitioning, sharding, or replication strategies
- Migrating between database technologies (SQL to NoSQL or vice versa)
- Designing for high availability and disaster recovery
- Implementing caching strategies and read replicas
- Scaling databases horizontally or vertically
- Ensuring data consistency in distributed systems

## Core Concepts

### Data Modeling
Design schemas that reflect business domain, access patterns, and consistency requirements. Balance normalization (data integrity) with denormalization (read performance) based on workload characteristics.

### ACID vs. BASE
- **ACID** (Relational): Atomicity, Consistency, Isolation, Durability - strong guarantees (see the transaction sketch below)
- **BASE** (NoSQL): Basically Available, Soft state, Eventually consistent - flexibility
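To make the ACID side concrete, here is a minimal PostgreSQL sketch of an atomic transfer; the `accounts` table and amounts are hypothetical, not part of this skill's references.

```sql
-- Hypothetical accounts(id, balance) table: either both updates commit or neither does.
BEGIN;

UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

-- Any error before this point aborts the transaction and both updates roll back.
COMMIT;
```

A BASE-style store relaxes exactly this guarantee: a write may be visible on one node before another, and the application is expected to tolerate that window.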

### CAP Theorem
When a network partition occurs, a distributed system must sacrifice either consistency or availability; because partitions cannot be prevented, the practical choice is between those two (the common "pick two of three" framing oversimplifies this).

### Polyglot Persistence
Use the right database for each use case: PostgreSQL for transactions, MongoDB for documents, Redis for caching, Elasticsearch for search, Cassandra for time-series, Neo4j for graphs.

## Quick Reference

| Task | Load reference |
| --- | --- |
| Core database principles (ACID, BASE, CAP) | `skills/database-design-patterns/references/core-principles.md` |
| Schema patterns (normalization, star schema, documents) | `skills/database-design-patterns/references/schema-design-patterns.md` |
| Index types and strategies (B-tree, hash, covering) | `skills/database-design-patterns/references/indexing-strategies.md` |
| Partitioning and sharding approaches | `skills/database-design-patterns/references/partitioning-patterns.md` |
| Replication modes (primary-replica, multi-leader) | `skills/database-design-patterns/references/replication-patterns.md` |
| Query optimization and caching | `skills/database-design-patterns/references/query-optimization.md` |

## Workflow

### Phase 1: Requirements Analysis
1. Identify access patterns (read-heavy vs. write-heavy)
2. Determine consistency requirements (strong vs. eventual)
3. Estimate data volume and growth rate
4. Define SLA requirements (latency, availability)

### Phase 2: Schema Design
1. Model entities and relationships (see the schema sketch after this list)
2. Choose normalization level based on workload
3. Design for query patterns, not just storage
4. Consider data distribution strategy (partitioning/sharding)
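As a minimal sketch of this phase, here is a hypothetical normalized customer/order model in PostgreSQL, designed around its dominant query (recent orders for one customer); all names are illustrative.

```sql
-- Normalized transactional schema; keys and index follow the main access path.
CREATE TABLE customers (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    email      text NOT NULL UNIQUE,
    created_at timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE orders (
    id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    customer_id bigint NOT NULL REFERENCES customers (id),
    status      text NOT NULL,
    total_cents integer NOT NULL,
    created_at  timestamptz NOT NULL DEFAULT now()
);

-- Supports the dominant query: recent orders for one customer, newest first.
CREATE INDEX orders_customer_recent_idx ON orders (customer_id, created_at DESC);
```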

### Phase 3: Performance Optimization
1. Analyze query execution plans (`EXPLAIN ANALYZE`; see the sketch after this list)
2. Add indexes for frequent queries
3. Implement caching where appropriate
4. Configure connection pooling
5. Monitor and iterate
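A sketch of step 1 against the hypothetical `orders` table above: `EXPLAIN (ANALYZE, BUFFERS)` executes the query and reports the actual plan, row counts, timings, and buffer usage.

```sql
-- Compare estimated vs. actual row counts and watch for sequential scans on large tables.
EXPLAIN (ANALYZE, BUFFERS)
SELECT id, status, total_cents
FROM orders
WHERE customer_id = 42
ORDER BY created_at DESC
LIMIT 20;
```

A `Seq Scan` on `orders` here would suggest the composite index from Phase 2 is missing or unusable for this predicate.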

### Phase 4: Scaling Strategy
1. Implement read replicas for read scaling (see the replication sketch after this list)
2. Consider partitioning for large tables (>100M rows)
3. Plan sharding strategy for horizontal scaling
4. Design for high availability with replication
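For step 1, a minimal logical-replication sketch in PostgreSQL SQL (connection details and object names are hypothetical; physical streaming replicas are configured outside SQL, e.g. by the hosting platform):

```sql
-- On the primary: publish changes for the tables a replica should receive.
CREATE PUBLICATION orders_pub FOR TABLE customers, orders;

-- On the replica (a separate cluster with matching table definitions): subscribe.
CREATE SUBSCRIPTION orders_sub
    CONNECTION 'host=primary.example.internal dbname=shop user=replicator'
    PUBLICATION orders_pub;
```

Reads can then be routed to the subscriber, with replication lag handled as described under Common Mistakes.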

## Common Mistakes

**Over-normalization**: Too many joins slow down reads. Denormalize for read-heavy workloads.

**Missing indexes**: Analyze query patterns and add indexes for frequent WHERE/JOIN columns.
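One way to find which queries deserve an index, assuming the `pg_stat_statements` extension is enabled (column names as of PostgreSQL 13; older versions use `total_time`/`mean_time`):

```sql
-- Most expensive statements by cumulative time; index the WHERE/JOIN columns they filter on.
SELECT query,
       calls,
       round(total_exec_time::numeric, 1) AS total_ms,
       round(mean_exec_time::numeric, 1)  AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```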

**Wrong index type**: Use composite indexes with correct column order (equality first, then range).
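A sketch of the column-order rule against the hypothetical `orders` table: the equality column comes first so the range condition scans one contiguous slice of the index.

```sql
-- Query shape: WHERE status = ? AND created_at >= ? ORDER BY created_at
-- Equality column (status) first, range column (created_at) second.
CREATE INDEX orders_status_created_idx ON orders (status, created_at);
-- The reversed order (created_at, status) would read every index entry in the
-- date range and filter on status afterwards.
```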

**Ignoring replication lag**: Handle eventual consistency with read-your-writes pattern.

**Poor partitioning key**: Choose keys that distribute data evenly and align with query patterns.
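A sketch of time-based range partitioning in PostgreSQL for a hypothetical `events` table, where most queries filter on `created_at` and new writes land in the newest partition:

```sql
-- Declarative range partitioning (PostgreSQL 10+); the partition key must be
-- part of the primary key.
CREATE TABLE events (
    id         bigint NOT NULL,
    created_at timestamptz NOT NULL,
    payload    jsonb,
    PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2024_01 PARTITION OF events
    FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');
CREATE TABLE events_2024_02 PARTITION OF events
    FOR VALUES FROM ('2024-02-01') TO ('2024-03-01');
```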

**N+1 queries**: Use JOINs or batch loading instead of querying in loops.
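A sketch of the fix, using the hypothetical schema above: one set-based query replaces a query-per-customer loop.

```sql
-- One round trip instead of N+1: order counts for the customers a loop would iterate over.
SELECT c.id, c.email, count(o.id) AS order_count
FROM customers AS c
LEFT JOIN orders AS o ON o.customer_id = c.id
WHERE c.id = ANY (ARRAY[101, 102, 103])
GROUP BY c.id, c.email;
```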

**Inefficient pagination**: Use keyset pagination instead of OFFSET for large datasets.
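A sketch of both approaches on the hypothetical `orders` table; the keyset version uses the index to resume where the previous page ended, so it stays fast at any depth.

```sql
-- OFFSET pagination: the database still produces and discards the first 100000 rows.
SELECT id, created_at
FROM orders
ORDER BY created_at DESC, id DESC
LIMIT 20 OFFSET 100000;

-- Keyset pagination: seek past the last (created_at, id) of the previous page.
SELECT id, created_at
FROM orders
WHERE (created_at, id) < ('2024-03-01 12:00:00+00', 987654)
ORDER BY created_at DESC, id DESC
LIMIT 20;
```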

**Connection exhaustion**: Implement connection pooling sized for your workload.

## Best Practices

1. **Model for access patterns** - Design schemas around how data will be queried
2. **Index strategically** - Index frequently queried columns, avoid over-indexing
3. **Partition large tables** - Use for tables >100M rows or time-series data
4. **Replicate for reads** - Primary-replica for read scaling, multi-leader for geo-distribution
5. **Optimize queries** - Analyze execution plans, avoid N+1, use proper pagination
6. **Cache hot data** - Application-level caching with appropriate TTLs
7. **Pool connections** - Size connection pools based on workload
8. **Monitor continuously** - Track query performance, index usage, replication lag
9. **Plan for growth** - Design for 3x current load
10. **Choose consistency wisely** - Match consistency level to business requirements

## Resources

**Books**:
- "Designing Data-Intensive Applications" (Kleppmann)
- "High Performance MySQL" (Schwartz)

**Sites**:
- use-the-index-luke.com
- PostgreSQL documentation
- MongoDB documentation

**Tools**:
- EXPLAIN ANALYZE
- pg_stat_statements
- Percona Toolkit
- pt-query-digest

Overview

This skill provides expert guidance for designing scalable database schemas and optimizing performance across relational and NoSQL systems. It focuses on practical patterns for modeling, indexing, partitioning, replication, and caching to meet real-world SLA and growth requirements. Use it to make tradeoffs between consistency, availability, and performance when building or migrating data layers.

How this skill works

The skill analyzes workload characteristics—read vs write patterns, consistency needs, data volume, and query profiles—and recommends schema patterns (normalization, denormalization, star schema, document models). It inspects indexing strategies, partitioning/sharding options, and replication modes and proposes concrete optimizations (indexes, keyset pagination, caching, connection pooling). It also provides a phased workflow from requirements through scaling and a checklist of common mistakes to avoid.

When to use it

  • Designing a new schema for a transactional or analytical app
  • Optimizing slow queries and diagnosing index needs
  • Choosing between SQL and NoSQL or planning a migration
  • Planning partitioning, sharding, or replication for scale
  • Designing high availability, disaster recovery, or geo-distribution

Best practices

  • Model for access patterns rather than only for storage
  • Index strategically: prioritize frequent WHERE/JOIN columns and composite order
  • Partition large tables (e.g., >100M rows) and choose keys that distribute evenly
  • Use read replicas and caching for read scaling; size connection pools appropriately
  • Analyze query plans (EXPLAIN ANALYZE) and avoid N+1 queries with joins or batching
  • Plan for growth (design for ~3x current load) and monitor replication lag and index usage

Example use cases

  • Designing a transactional schema in PostgreSQL with appropriate normalization and indexes
  • Creating a denormalized document model in MongoDB for read-heavy endpoints
  • Implementing time-series partitioning and compaction for high-ingest workloads
  • Adding composite indexes and switching to keyset pagination to fix slow list endpoints
  • Designing a multi-region replication strategy with conflict resolution and eventual consistency handling

FAQ

When should I denormalize instead of normalizing?

Denormalize for read-heavy workloads when joins cause unacceptable latency; keep critical transactional paths normalized to preserve integrity.

How do I pick a partition key?

Choose a key aligned with common query filters that evenly distributes data and avoids hotspotting; consider time-based partitions for time-series.

When is multi-leader replication appropriate?

Use multi-leader for geo-distribution with low write contention and application-level conflict resolution; prefer primary-replica for simpler consistency.