test-json-sql-query-web skill

/src/saved_plans/test-json-sql-query-web

This skill helps you validate and explore JSON SQL primitives against search-web output by exercising the project, pluck, filter-structured, and sort operations.

npx playbooks add skill bdambrosio/cognitive_workbench --skill test-json-sql-query-web

Review the files below or copy the command above to add this skill to your agents.

Files (2)
SKILL.md
505 B
---
name: test-json-sql-search-web
description: Test JSON SQL primitives with search-web output
type: plan
manual_only: true
parameters: []
---

# test-json-sql-search-web

Tests JSON SQL primitives (project, pluck, filter-structured, sort) with real search-web output.

## What it tests
- search-web returns Collection of Notes
- project extracts metadata.uri, metadata.domain, char_count
- pluck extracts first URI
- filter-structured filters by char_count > 100
- sort orders by char_count descending

Overview

This skill tests JSON SQL primitives against real search-web output to validate extraction and transformation logic. It focuses on verifying project, pluck, filter-structured, and sort behaviors using Collections of Notes returned by search-web. The goal is to ensure reliable metadata extraction and ordering for downstream processing.

How this skill works

The skill runs a pipeline that inspects Collection objects returned by search-web and applies JSON SQL primitives. It projects selected fields (metadata.uri, metadata.domain, char_count), plucks the first URI from results, filters records where char_count > 100, and sorts the remaining items by char_count in descending order. The output is a small transformed dataset suitable for assertions in tests.
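As an illustration of that pipeline, here is a minimal Python sketch of the four steps, assuming a Note shape carrying metadata.uri, metadata.domain, and char_count; the helper names (project, pluck, filter_structured, sort_by) and the sample data are illustrative stand-ins, not the workbench's actual JSON SQL API.

# Illustrative only: plain-Python stand-ins for the four JSON SQL primitives.
def project(notes, fields):
    # Keep only the named (possibly dotted) fields from each note.
    def get(note, path):
        value = note
        for key in path.split("."):
            value = value.get(key) if isinstance(value, dict) else None
        return value
    return [{f: get(n, f) for f in fields} for n in notes]

def pluck(notes, field):
    # Return the field value from the first note, or None for an empty collection.
    return project(notes, [field])[0][field] if notes else None

def filter_structured(notes, predicate):
    # Keep only the notes for which the predicate holds.
    return [n for n in notes if predicate(n)]

def sort_by(notes, field, descending=False):
    # Order notes by a top-level field.
    return sorted(notes, key=lambda n: n.get(field, 0), reverse=descending)

# Hypothetical search-web output: a small Collection of Notes.
notes = [
    {"metadata": {"uri": "https://example.org/a", "domain": "example.org"}, "char_count": 80},
    {"metadata": {"uri": "https://example.org/b", "domain": "example.org"}, "char_count": 450},
]

rows = project(notes, ["metadata.uri", "metadata.domain", "char_count"])
first_uri = pluck(notes, "metadata.uri")                      # first URI in the collection
long_rows = filter_structured(rows, lambda r: r["char_count"] > 100)
ordered = sort_by(long_rows, "char_count", descending=True)   # longest documents first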

When to use it

  • Validate JSON SQL project, pluck, filter-structured, and sort primitives against real web search output.
  • Verify metadata extraction from search-web Note collections before integrating with downstream systems.
  • Catch regressions in how search-web populates metadata.uri, metadata.domain, or char_count.
  • Generate deterministic sorted samples for performance or UI tests that depend on char_count ordering.
  • Create minimal fixtures from live results for unit or integration tests.

Best practices

  • Run tests against representative search-web responses to cover typical Note shapes and edge cases.
  • Assert both presence and type of projected fields (uri as string, domain as string, char_count as numeric); a minimal sketch follows this list.
  • Include cases with char_count below and above 100 to validate filtering logic.
  • Use deterministic inputs or mock responses when exact ordering is required for assertions.
  • Log intermediate transformed results to simplify debugging when a primitive misbehaves.
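To make the assertion and boundary-case bullets concrete, here is a minimal sketch, assuming projected rows flattened to metadata.uri, metadata.domain, and char_count keys; the row shape mirrors the skill description above, while everything else is illustrative.

# Illustrative assertions; the flattened row shape is an assumption.
def check_projected_row(row):
    assert isinstance(row["metadata.uri"], str) and row["metadata.uri"]
    assert isinstance(row["metadata.domain"], str)
    assert isinstance(row["char_count"], (int, float))

def check_filter_coverage(rows):
    # The fixture should include documents on both sides of the 100-character threshold.
    assert any(r["char_count"] <= 100 for r in rows)
    assert any(r["char_count"] > 100 for r in rows)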

Example use cases

  • Project metadata.uri, metadata.domain, and char_count from search-web Note collections for indexing.
  • Pluck the first URI from search hits to seed a follow-up crawl or preview request.
  • Filter results to only include documents with sufficient length (char_count > 100) for content analysis.
  • Sort search results by char_count descending to prioritize longer documents for summarization.
  • Combine primitives to produce compact, testable fixtures for CI pipelines (see the sketch after this list).
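As a sketch of the last use case, already-projected rows could be filtered, sorted, and written out as a JSON fixture for CI; the function name and file path below are assumptions, not part of the skill.

import json

# Illustrative fixture writer; rows are assumed to be already-projected dicts
# carrying metadata.uri, metadata.domain, and char_count.
def write_fixture(rows, path="fixtures/search_web_sample.json"):
    kept = [r for r in rows if r["char_count"] > 100]        # filter-structured step
    kept.sort(key=lambda r: r["char_count"], reverse=True)   # sort descending step
    with open(path, "w") as f:
        json.dump(kept, f, indent=2)
    return kept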

FAQ

What does the filter-structured primitive check here?

It selects records where the char_count field is greater than 100, ensuring only sufficiently long documents pass.
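A minimal plain-Python stand-in for that check (the workbench's own filter-structured syntax may differ):

notes = [{"char_count": 80}, {"char_count": 450}]
kept = [n for n in notes if n["char_count"] > 100]   # the filter-structured condition
assert kept == [{"char_count": 450}]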

Why sort by char_count descending?

Sorting by char_count descending prioritizes longer documents, which is useful for tests that focus on richer content.