
test-json-sql-setup skill

/src/saved_plans/test-json-sql-setup

This skill creates test data for JSON SQL primitive tests: it provisions the $papers and $authors collections so that bindings can be verified before other test plans run.

npx playbooks add skill bdambrosio/cognitive_workbench --skill test-json-sql-setup


Files (2)
SKILL.md
708 B
---
name: test-json-sql-setup
description: Creates test data for JSON SQL primitive tests
manual_only: true
---

# Test JSON SQL Setup

Creates two test collections for testing JSON SQL primitives:

## $papers Collection (4 items)
- Paper A: Deep Learning, 2020, 100 citations, ICML
- Paper B: Transformers, 2021, 250 citations, NeurIPS  
- Paper C: GPT-4 Analysis, 2023, 50 citations, JMLR
- Paper D: Scaling Laws, 2022, 180 citations, ICML

## $authors Collection (3 items)
- Author A: Alice, MIT
- Author B: Bob, Stanford
- Author E: Eve, Berkeley (no matching paper)

## Usage
Execute this plan first, then run other test-json-sql-* plans.
Check Bindings tab to verify $papers and $authors are created.

Overview

This skill creates two small test collections used to validate JSON SQL primitives. It seeds a $papers collection with four sample papers and an $authors collection with three sample authors to support query and join tests. Run this setup before any other test-json-sql-* plan to ensure consistent test data.

How this skill works

When executed, the plan inserts four documents into $papers and three documents into $authors. Each paper document includes title, year, citations, and venue fields; each author document includes name and affiliation. After running, check the Bindings tab to confirm both collections exist and contain the expected items.
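The seeded documents can be sketched as plain Python dicts. The field names follow the skill description (title, year, citations, and venue for papers; name and affiliation for authors), but the exact on-disk shape the plan produces may differ.

```python
# Sketch of the seeded test data; field names are taken from the skill
# description, values from the SKILL.md listing above.
papers = [
    {"title": "Deep Learning",  "year": 2020, "citations": 100, "venue": "ICML"},
    {"title": "Transformers",   "year": 2021, "citations": 250, "venue": "NeurIPS"},
    {"title": "GPT-4 Analysis", "year": 2023, "citations": 50,  "venue": "JMLR"},
    {"title": "Scaling Laws",   "year": 2022, "citations": 180, "venue": "ICML"},
]

authors = [
    {"name": "Alice", "affiliation": "MIT"},
    {"name": "Bob",   "affiliation": "Stanford"},
    {"name": "Eve",   "affiliation": "Berkeley"},  # intentionally has no paper
]

print(len(papers), len(authors))  # 4 3
```

The counts printed here are what the Bindings tab should report after the plan runs.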

When to use it

  • Before running any JSON SQL primitive tests that assume sample data
  • When developing or debugging queries involving joins between papers and authors
  • To reproduce test failures that depend on a fixed dataset
  • When you need a consistent, minimal dataset for performance or correctness checks

Best practices

  • Run this setup plan first to guarantee predictable test inputs
  • Verify the Bindings tab to confirm collections and document counts
  • Keep tests idempotent: teardown or reset collections between runs if needed
  • Use this dataset for unit tests and small integration scenarios rather than full-scale benchmarks

Example use cases

  • Test SELECT, WHERE, and ORDER BY queries against a known paper list
  • Validate JOIN logic between $papers and $authors using author affiliation
  • Exercise aggregation functions like COUNT and SUM over citations
  • Reproduce and debug a failing JSON SQL primitive using the seeded dataset
  • Demonstrate query examples in documentation or tutorials with predictable output
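The workbench's own JSON SQL engine is not shown here, so as a stand-in the first three use cases above can be sketched against an in-memory SQLite table loaded with the same seeded $papers data; the table and column names mirror the document fields.

```python
import sqlite3

# Stand-in only: load the seeded $papers data into SQLite to illustrate
# the SELECT/WHERE/ORDER BY and aggregation queries the test plans exercise.
papers = [
    ("Deep Learning", 2020, 100, "ICML"),
    ("Transformers", 2021, 250, "NeurIPS"),
    ("GPT-4 Analysis", 2023, 50, "JMLR"),
    ("Scaling Laws", 2022, 180, "ICML"),
]

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE papers (title TEXT, year INT, citations INT, venue TEXT)")
con.executemany("INSERT INTO papers VALUES (?, ?, ?, ?)", papers)

# SELECT / WHERE / ORDER BY: ICML papers, most-cited first
icml = con.execute(
    "SELECT title, citations FROM papers WHERE venue = 'ICML' "
    "ORDER BY citations DESC"
).fetchall()
print(icml)  # [('Scaling Laws', 180), ('Deep Learning', 100)]

# Aggregation: COUNT and SUM over citations
count, total = con.execute(
    "SELECT COUNT(*), SUM(citations) FROM papers"
).fetchone()
print(count, total)  # 4 580
```

Because the dataset is fixed, these results (two ICML papers, 580 total citations) are stable across runs, which is what makes the collections useful for reproducing failures.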

FAQ

What exact documents are created?

$papers: four documents (Deep Learning, Transformers, GPT-4 Analysis, Scaling Laws) with year, citations, and venue. $authors: three documents (Alice at MIT, Bob at Stanford, Eve at Berkeley).

Do all authors link to papers?

No. Eve has no matching paper, which lets tests exercise outer joins and unmatched-row handling.
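The skill description does not specify the key that links papers to authors, so the sketch below assumes a hypothetical author field on each paper and assigns two papers arbitrarily to Alice and Bob; only the guarantee that Eve matches nothing comes from the source. Again SQLite stands in for the JSON SQL engine.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE authors (name TEXT, affiliation TEXT)")
con.execute("CREATE TABLE papers (title TEXT, author TEXT)")
con.executemany("INSERT INTO authors VALUES (?, ?)",
                [("Alice", "MIT"), ("Bob", "Stanford"), ("Eve", "Berkeley")])
# Hypothetical links: the skill does not say which papers belong to
# which authors, only that Eve matches none.
con.executemany("INSERT INTO papers VALUES (?, ?)",
                [("Deep Learning", "Alice"), ("Transformers", "Bob")])

# LEFT JOIN keeps every author; Eve's paper column comes back NULL.
rows = con.execute(
    "SELECT a.name, p.title FROM authors a "
    "LEFT JOIN papers p ON p.author = a.name ORDER BY a.name"
).fetchall()
print(rows)  # [('Alice', 'Deep Learning'), ('Bob', 'Transformers'), ('Eve', None)]
```

The None row for Eve is the unmatched case the outer-join tests are meant to catch.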