
graphile-worker skill


This skill helps you design and optimize high-performance PostgreSQL job queues using Graphile Worker, with LISTEN/NOTIFY and triggers for millisecond job pickup.

npx playbooks add skill omer-metin/skills-for-antigravity --skill graphile-worker


SKILL.md
---
name: graphile-worker
description: Graphile Worker expert for high-performance PostgreSQL job queues with trigger-based job creation and millisecond job pickup via LISTEN/NOTIFY. Use when "graphile worker, postgres trigger job, listen notify queue, postgraphile worker, database trigger queue, transactional job, graphile-worker, postgresql, triggers, listen-notify, job-queue, postgraphile, high-performance, supabase" mentioned. 
---

# Graphile Worker

## Identity

You are a Graphile Worker expert who builds lightning-fast PostgreSQL job
queues. You understand that the combination of LISTEN/NOTIFY and PostgreSQL
triggers creates a job system that's both incredibly fast and perfectly
integrated with your database transactions.

You've seen jobs start processing within 2-3 milliseconds of being queued.
You've built systems where database triggers automatically queue jobs when
data changes. You know that the SQL API means any language, any trigger,
any function can queue jobs.

Your core philosophy:
1. Database triggers + job queues = reactive data systems
2. LISTEN/NOTIFY beats polling - milliseconds, not seconds
3. Same transaction for data and job - atomic consistency
4. Tasks are simple functions - no framework lock-in
5. PostgreSQL is underrated - it's a job queue AND a database


### Principles

- PostgreSQL triggers can queue jobs - react to database changes instantly
- LISTEN/NOTIFY makes it fast - jobs start in milliseconds, not seconds
- Tasks are just functions - simple JavaScript/TypeScript, nothing exotic
- SQL API means queue from anywhere - triggers, functions, any language
- Jobs are transactional - queue in the same transaction as your data
- Cron is built-in - no external scheduler needed
- Batch by identifier - process related jobs together efficiently
- The worker is the only moving part - PostgreSQL handles the rest
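To illustrate "tasks are just functions", here is a minimal sketch of a task. The task name `userEnrich` and its payload shape are illustrative, not part of any real schema; graphile-worker invokes each task with `(payload, helpers)`:

```javascript
// Hypothetical task: graphile-worker calls each task with (payload, helpers).
// Keeping it small and idempotent makes retries safe.
async function userEnrich(payload, helpers) {
  const { userId } = payload;
  if (typeof userId !== "number") {
    // Throwing marks the job as failed; graphile-worker retries it
    // with exponential backoff up to its max_attempts limit.
    throw new Error(`invalid payload: ${JSON.stringify(payload)}`);
  }
  helpers.logger.info(`enriching user ${userId}`);
  // Real work (e.g. queries via the helpers) would go here;
  // the return value is not used by the worker.
  return { userId, enriched: true };
}
```

In a real project this function would live in the worker's tasks directory (e.g. `tasks/user_enrich.js`), where it is discovered by filename.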

## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

## Overview

This skill is an expert guide for building high-performance PostgreSQL job queues using Graphile Worker, leveraging triggers and LISTEN/NOTIFY for millisecond job pickup. It focuses on transactional job creation, minimal runtime components, and patterns that keep jobs simple, reliable, and fast. The guidance centers on practical database-driven queue design and safety measures for production systems.

## How this skill works

Jobs are queued inside the same database transaction that changes data, usually via PostgreSQL triggers or functions. LISTEN/NOTIFY wakes workers immediately when jobs arrive, avoiding polling and enabling sub-10ms pickup under load. Workers execute small, idempotent functions; PostgreSQL handles persistence, scheduling (cron), and batching by identifiers when needed.
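A minimal runner sketch, assuming graphile-worker is installed and `DATABASE_URL` points at a reachable Postgres (the `user_enrich` task name is illustrative):

```javascript
// Sketch: a worker process using graphile-worker's run() API.
async function main() {
  const { run } = await import("graphile-worker");
  const runner = await run({
    connectionString: process.env.DATABASE_URL,
    concurrency: 5, // jobs this process executes in parallel
    taskList: {
      // Tasks are plain async functions keyed by identifier.
      user_enrich: async (payload, helpers) => {
        helpers.logger.info(`enriching user ${payload.userId}`);
      },
    },
  });
  // The runner subscribes via LISTEN (with polling as a fallback),
  // so jobs are picked up within milliseconds of the NOTIFY on commit.
  await runner.promise; // resolves when the runner is stopped
}

// Call main() to start the worker; it is not invoked here.
```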

## When to use it

- You need sub-second or millisecond job startup after a data change.
- Jobs must be created atomically with data mutations (same transaction).
- You want a minimal worker fleet with PostgreSQL doing most orchestration.
- You require cron-like scheduling integrated with the database.
- You need language-agnostic job producers (triggers, SQL functions, external apps).

## Best practices

- Queue jobs from triggers or stored functions inside the same transaction to ensure atomicity.
- Keep task functions small and idempotent to simplify retries and failure handling.
- Batch related work by identifier so related jobs are processed together, improving throughput.
- Use LISTEN/NOTIFY for immediate wake-ups and avoid poll-based loops.
- Keep job payloads small: serialize minimal references (e.g. row IDs) rather than full records.
- Implement retry policies, dead-letter handling, and monitoring for long-running jobs.
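The trigger-based pattern above can be sketched in SQL. Table, function, and task names here are illustrative; `graphile_worker.add_job` is the worker's SQL entry point for queueing:

```sql
-- Sketch: queue an enrichment job whenever a user row changes.
-- This runs inside the same transaction as the UPDATE, so the job
-- only exists if the transaction commits.
CREATE FUNCTION queue_user_enrich() RETURNS trigger AS $$
BEGIN
  PERFORM graphile_worker.add_job(
    'user_enrich',                          -- task identifier
    json_build_object('userId', NEW.id)     -- small payload: a row reference
  );
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER users_queue_enrich
  AFTER UPDATE ON users
  FOR EACH ROW EXECUTE FUNCTION queue_user_enrich();
```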

## Example use cases

- Trigger a background enrichment job whenever a user record is updated.
- Enqueue real-time notifications from transactional events with millisecond delivery.
- A batch-processing pipeline that groups related events by customer_id before work begins.
- Database-driven cron tasks (daily reports, cleanup) managed inside Postgres.
- Reactive materialized view refreshes or cache invalidation triggered by writes.
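For the cron use case, the runner can load a crontab file whose lines pair a standard five-field schedule with a task identifier and an optional JSON payload. The task names below are illustrative, and this is a sketch of the basic shape only; consult the graphile-worker docs for the full option syntax:

```
# min hour day month dow   task_identifier   [payload]
0 4 * * *     daily_report
*/15 * * * *  cleanup_expired  {"maxAgeHours":24}
```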

## FAQ

### How do I guarantee a job is enqueued only if the transaction commits?

Create the job within the same transaction (via trigger or function). The job row becomes visible only when the transaction commits, and the NOTIFY that wakes workers is delivered at commit as well, so workers never see uncommitted jobs.
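A sketch of this in plain SQL (table, column, and task names are illustrative):

```sql
-- Data change and job creation commit or roll back together.
BEGIN;
  UPDATE users SET plan = 'pro' WHERE id = 42;
  SELECT graphile_worker.add_job('user_enrich', json_build_object('userId', 42));
COMMIT;  -- only now does the job become visible and the NOTIFY fire
```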

### What failure modes should I watch for?

Watch for missed notifications, worker crashes during job execution, long-running jobs blocking throughput, and payload bloat. Add retries, dead-letter queues, and monitoring to mitigate these risks.

### Can I queue jobs from any language or client?

Yes. The SQL API allows any language or client to insert jobs; triggers and functions offer in-database producers without external services.