
postgresql skill

/skills/postgresql

This skill helps you master PostgreSQL, guiding you through complex queries, performance tuning, JSON support, and full-text search for robust database work.

npx playbooks add skill partme-ai/full-stack-skills --skill postgresql

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
703 B
---
name: postgresql
description: Provides comprehensive guidance for PostgreSQL database including SQL syntax, advanced features, JSON support, full-text search, and performance tuning. Use when the user asks about PostgreSQL, needs to work with PostgreSQL features, write complex queries, or optimize PostgreSQL databases.
license: Complete terms in LICENSE.txt
---

## When to use this skill

Use this skill whenever the user wants to:
- Write or optimize complex SQL queries (joins, window functions, CTEs)
- Model schemas and choose data types, including JSON/JSONB
- Implement or tune full-text search with tsvector and GIN indexes
- Diagnose slow queries with EXPLAIN ANALYZE and indexing advice
- Configure performance settings (work_mem, shared_buffers, autovacuum)

## How to use this skill

Describe your query, schema, or performance problem. The skill responds with targeted SQL examples, configuration recommendations, and troubleshooting steps, including query-plan analysis and index strategies, along with sample SQL or psql commands you can run directly.

## Best Practices

- Prefer explicit column lists and parameterized queries
- Use JSONB for semi-structured data; index frequently queried keys with GIN
- Analyze query plans with EXPLAIN (ANALYZE, BUFFERS) before adding indexes
- Keep autovacuum tuned for your workload and monitor table bloat
- Use connection pooling (PgBouncer) for high-concurrency applications

## Keywords

postgresql, sql, jsonb, full-text search, tsvector, gin, explain, indexes, cte, window functions, work_mem, shared_buffers, autovacuum, replication

Overview

This skill provides practical, hands-on guidance for working with PostgreSQL databases. It covers SQL syntax, advanced features like window functions and CTEs, JSON support, full-text search, and performance tuning. Use it to write complex queries, design schemas, and diagnose performance problems. The guidance focuses on actionable examples and best practices for production systems.

How this skill works

The skill inspects user questions and database scenarios to produce targeted SQL examples, configuration recommendations, and troubleshooting steps. It explains query plans, index strategies, and storage settings, and generates sample SQL or psql commands you can run directly. Where relevant, it outlines risks and trade-offs for options such as partitioning, replication, and vacuum settings.
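As an illustration of the kind of directly runnable psql diagnostics it produces — here, a quick check of dead-tuple counts from the standard `pg_stat_user_tables` statistics view:

```sql
-- Which tables have accumulated the most dead tuples (candidates for vacuum)?
SELECT relname, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 5;
```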

When to use it

  • Writing or optimizing complex SQL queries (joins, window functions, CTEs).
  • Modeling schemas and choosing data types including JSON/JSONB.
  • Implementing or tuning full-text search with tsvector and GIN indexes.
  • Diagnosing slow queries using EXPLAIN ANALYZE and indexing advice.
  • Configuring performance settings (work_mem, shared_buffers, autovacuum).
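A sketch of the first use case — a CTE feeding a window function. The `orders` table and column names are hypothetical, purely for illustration:

```sql
-- Rank each customer's monthly spend within the month.
WITH monthly AS (
  SELECT customer_id,
         date_trunc('month', ordered_at) AS month,
         sum(total) AS month_total
  FROM orders
  GROUP BY customer_id, date_trunc('month', ordered_at)
)
SELECT customer_id,
       month,
       month_total,
       rank() OVER (PARTITION BY month ORDER BY month_total DESC) AS rank_in_month
FROM monthly;
```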

Best practices

  • Prefer explicit column lists and parameterized queries to avoid inefficiency and SQL injection.
  • Use JSONB for semi-structured data and index frequently queried keys with GIN if needed.
  • Analyze query plans with EXPLAIN (ANALYZE, BUFFERS) before adding indexes.
  • Keep autovacuum tuned for your workload; monitor table bloat and run VACUUM or REINDEX when necessary.
  • Use connection pooling (PgBouncer) for high-concurrency applications.
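The JSONB practice above can be sketched as follows, using a hypothetical `products` table; a GIN index on the column accelerates containment (`@>`) lookups:

```sql
-- Flexible attributes stored as JSONB alongside relational columns.
CREATE TABLE products (
  id    bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name  text NOT NULL,
  attrs jsonb NOT NULL DEFAULT '{}'
);

-- GIN index covering containment and existence queries on attrs.
CREATE INDEX products_attrs_gin ON products USING GIN (attrs);

-- Containment lookup that can use the GIN index.
SELECT id, name
FROM products
WHERE attrs @> '{"color": "red"}';
```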

Example use cases

  • Convert a slow JOIN into an indexed join and show before/after EXPLAIN output.
  • Design a schema that mixes relational tables with JSONB for flexible attributes.
  • Implement full-text search with weighted tsvector fields and appropriate GIN indexes.
  • Tune work_mem and shared_buffers for a reporting query that spills to disk.
  • Set up basic logical replication for near-real-time read scaling.
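The full-text search use case might look like the following sketch, assuming PostgreSQL 12+ for generated columns; the `articles` table is hypothetical:

```sql
-- Weighted tsvector maintained automatically as a generated column:
-- title matches (weight A) rank above body matches (weight B).
CREATE TABLE articles (
  id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  title  text NOT NULL,
  body   text NOT NULL,
  search tsvector GENERATED ALWAYS AS (
    setweight(to_tsvector('english', title), 'A') ||
    setweight(to_tsvector('english', body),  'B')
  ) STORED
);

CREATE INDEX articles_search_gin ON articles USING GIN (search);

-- Ranked search using web-style query syntax.
SELECT id, title,
       ts_rank(search, websearch_to_tsquery('english', 'query planner')) AS rank
FROM articles
WHERE search @@ websearch_to_tsquery('english', 'query planner')
ORDER BY rank DESC;
```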

FAQ

When should I use JSONB instead of normalized tables?

Use JSONB for flexible, infrequently queried attributes or when schema changes are frequent. Use normalized tables when you need strong relational integrity, frequent indexed lookups, or complex joins.
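When one JSONB key does turn into a frequent, exact-match lookup, a B-tree expression index on that key is often cheaper than a full GIN index. A sketch, assuming a hypothetical `products` table with a `jsonb` column named `attrs`:

```sql
-- B-tree expression index on a single frequently queried key
-- (note the double parentheses required around the expression).
CREATE INDEX products_sku_idx ON products ((attrs ->> 'sku'));

-- Equality lookups on that key can now use the index.
SELECT *
FROM products
WHERE attrs ->> 'sku' = 'ABC-123';
```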

How do I find the cause of a slow query?

Run EXPLAIN (ANALYZE, BUFFERS) to see actual execution times and I/O. Check for missing indexes, sequential scans on large tables, and excessive sorts or hash spills; then add indexes or rewrite the query.
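The diagnosis step above might look like this in practice; the `orders`/`customers` join is hypothetical:

```sql
-- Show the actual plan, per-node timings, and buffer usage.
-- Caution: ANALYZE really executes the statement.
EXPLAIN (ANALYZE, BUFFERS)
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.ordered_at >= now() - interval '7 days';
-- Look for: Seq Scan on large tables, row estimates far from actuals,
-- and sorts that spill to disk ("external merge" in the Sort node).
```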