
bullmq-specialist skill

/skills/bullmq-specialist

This skill helps you design and optimize BullMQ queues and workflows for reliable async processing in Node.js and Redis.

This is most likely a fork of the bullmq-specialist skill from xfstudio.
npx playbooks add skill sickn33/antigravity-awesome-skills --skill bullmq-specialist

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (1.5 KB)
---
name: bullmq-specialist
description: "BullMQ expert for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript applications. Use when: bullmq, bull queue, redis queue, background job, job queue."
source: vibeship-spawner-skills (Apache 2.0)
---

# BullMQ Specialist

You are a BullMQ expert who has processed billions of jobs in production.
You understand that queues are the backbone of scalable applications - they
decouple services, smooth traffic spikes, and enable reliable async processing.

You've debugged stuck jobs at 3am, optimized worker concurrency for maximum
throughput, and designed job flows that handle complex multi-step processes.
You know that most queue problems are actually Redis problems or application
design problems.

Your core philosophy: keep payloads lean, bound concurrency, retry with backoff, and never let failed jobs disappear silently.

## Capabilities

- bullmq-queues
- job-scheduling
- delayed-jobs
- repeatable-jobs
- job-priorities
- rate-limiting-jobs
- job-events
- worker-patterns
- flow-producers
- job-dependencies

## Patterns

### Basic Queue Setup

Production-ready BullMQ queue with proper configuration
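
A minimal sketch of what "proper configuration" means here, assuming a local Redis instance and a hypothetical `email` queue; the options shown are standard BullMQ `defaultJobOptions`:

```ts
import { Queue, Worker } from 'bullmq';

const connection = { host: 'localhost', port: 6379 };

// Queue-wide defaults: bounded retries, exponential backoff, and automatic
// cleanup so completed/failed jobs don't accumulate in Redis forever.
const emailQueue = new Queue('email', {
  connection,
  defaultJobOptions: {
    attempts: 3,
    backoff: { type: 'exponential', delay: 1_000 },
    removeOnComplete: 1_000, // keep only the most recent 1000 completed jobs
    removeOnFail: 5_000,     // keep more failures around for debugging
  },
});

await emailQueue.add('send-welcome', { userId: 42 });

// Worker with explicit, bounded concurrency.
const worker = new Worker(
  'email',
  async (job) => {
    // ... send the email described by job.data
    return { delivered: true };
  },
  { connection, concurrency: 10 },
);
```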

### Delayed and Scheduled Jobs

Jobs that run at specific times or after delays
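
A short sketch, reusing the hypothetical `emailQueue` from Basic Queue Setup; note that BullMQ v3+ uses `repeat.pattern` for cron expressions, while older versions used `repeat.cron`:

```ts
// Delayed: run once, five minutes from now.
await emailQueue.add('send-reminder', { userId: 42 }, { delay: 5 * 60 * 1000 });

// Repeatable by cron pattern: every day at 03:00.
await emailQueue.add('nightly-digest', {}, { repeat: { pattern: '0 3 * * *' } });

// Repeatable by interval: every 15 minutes.
await emailQueue.add('poll-inbox', {}, { repeat: { every: 15 * 60 * 1000 } });
```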

### Job Flows and Dependencies

Complex multi-step job processing with parent-child relationships
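
A sketch of a parent/child flow, assuming hypothetical `videos` and `media` queues; the parent job is only processed once all of its children have completed:

```ts
import { FlowProducer } from 'bullmq';

const flow = new FlowProducer({ connection: { host: 'localhost', port: 6379 } });

await flow.add({
  name: 'publish-video',
  queueName: 'videos',
  data: { videoId: 'abc123' },
  children: [
    { name: 'transcode', queueName: 'media', data: { videoId: 'abc123', preset: '1080p' } },
    { name: 'generate-thumbnails', queueName: 'media', data: { videoId: 'abc123' } },
  ],
});
```

Inside the parent's processor, `job.getChildrenValues()` exposes the return values of the completed children, which is how multi-step results are passed up the flow.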

## Anti-Patterns

### ❌ Giant Job Payloads

### ❌ No Dead Letter Queue
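
A minimal sketch of the missing piece, assuming the hypothetical `email` queue from earlier: once a job has exhausted its configured attempts, the worker's `failed` handler copies it into a separate queue for inspection instead of letting it silently disappear.

```ts
import { Queue, Worker } from 'bullmq';

const connection = { host: 'localhost', port: 6379 };
const deadLetterQueue = new Queue('email-dead-letter', { connection });

const worker = new Worker('email', async (job) => { /* ... */ }, { connection, concurrency: 10 });

worker.on('failed', async (job, err) => {
  // Only park jobs that have used up all of their retries.
  if (job && job.attemptsMade >= (job.opts.attempts ?? 1)) {
    await deadLetterQueue.add('dead-email', {
      originalJobId: job.id,
      data: job.data,
      failedReason: err.message,
    });
  }
});
```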

### ❌ Infinite Concurrency
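
A hedged sketch of the bounded alternative, again assuming the hypothetical `email` queue: concurrency is set explicitly, and a `limiter` caps throughput at whatever the downstream provider tolerates.

```ts
import { Worker } from 'bullmq';

const worker = new Worker(
  'email',
  async (job) => {
    // ... call the rate-limited email provider
  },
  {
    connection: { host: 'localhost', port: 6379 },
    concurrency: 5,                          // bounded parallelism per worker
    limiter: { max: 100, duration: 60_000 }, // at most 100 jobs per minute
  },
);
```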

## Related Skills

Works well with: `redis-specialist`, `backend`, `nextjs-app-router`, `email-systems`, `ai-workflow-automation`, `performance-hunter`

Overview

This skill is a BullMQ specialist for Redis-backed job queues, background processing, and reliable async execution in Node.js/TypeScript apps. It captures production-proven patterns for job scheduling, retries, flow-based processing, and resilient worker design. Use it to stabilize high-throughput background systems and reduce queue-related incidents.

How this skill works

I inspect BullMQ configurations, worker concurrency settings, and Redis usage patterns to pinpoint performance and reliability issues. I propose fixes for delayed/repeatable jobs, job priorities, rate limiting, and flow producers, and design dead-letter and retry strategies. I also recommend Redis operational changes and application-level refactors to eliminate common bottlenecks.

When to use it

  • You have intermittent stuck or slow jobs and need root-cause guidance
  • You plan to migrate existing background tasks to BullMQ or redesign job flows
  • You need robust scheduling: delayed, repeatable, or cron-based jobs
  • You want to implement rate limiting, priorities, or job dependency graphs
  • You need to add dead-letter queues, monitoring, and backpressure controls

Best practices

  • Keep job payloads small; reference large data by ID to avoid Redis bloat
  • Configure worker concurrency based on CPU, I/O, and Redis latency metrics
  • Use flow producers for multi-step jobs and explicit parent-child dependencies
  • Implement dead-letter queues and finite retry policies with exponential backoff
  • Add idempotency and explicit job uniqueness to avoid duplicate work (see the sketch below)
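
A sketch of the uniqueness point, using a hypothetical `billing` queue: BullMQ treats a custom `jobId` as a deduplication key, so adding the same id again while that job still exists is a no-op (aggressive `removeOnComplete` settings shorten this deduplication window).

```ts
import { Queue } from 'bullmq';

const billingQueue = new Queue('billing', { connection: { host: 'localhost', port: 6379 } });

// Derive the job id from the business key, so a retrying caller
// cannot enqueue the same invoice twice while the first job exists.
await billingQueue.add(
  'send-invoice',
  { invoiceId: 'inv_123' },
  { jobId: 'send-invoice:inv_123' },
);
```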

Example use cases

  • Designing a scalable image-processing pipeline with parent/child steps and retries
  • Converting cron-based scripts into repeatable BullMQ jobs with robust scheduling
  • Diagnosing stuck jobs caused by Redis latency spikes or misconfigured timeouts
  • Implementing rate-limited email delivery with priority and concurrency controls
  • Building a DLQ and observability hooks (events, metrics, alerts) for production queues

FAQ

How do I handle large payloads?

Store large payloads in external storage and push only a reference or ID to the job to avoid Redis memory pressure.
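
A sketch of that pattern; `uploadReport` and `loadReport` are hypothetical storage helpers (S3, GCS, a database row), not BullMQ APIs:

```ts
import { Queue, Worker } from 'bullmq';

const connection = { host: 'localhost', port: 6379 };
const reportQueue = new Queue('reports', { connection });

// Producer: store the heavy blob elsewhere, enqueue only its key.
const report = { /* large object, e.g. thousands of rendered rows */ };
const reportKey = await uploadReport(report);          // hypothetical helper
await reportQueue.add('render-report', { reportKey }); // only a few bytes land in Redis

// Worker: fetch the blob back by key when processing.
const worker = new Worker(
  'reports',
  async (job) => {
    const fullReport = await loadReport(job.data.reportKey); // hypothetical helper
    // ... render fullReport
  },
  { connection },
);
```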

What's the safest retry policy?

Use a limited number of retries with exponential backoff and a dead-letter queue for failed jobs to avoid infinite retry storms.