---
name: assess
description: Boolean test of text content against a natural language predicate. Features auto-chunking for long texts (returns "true" if ANY chunk matches).
type: python
flattens_collections: true
---
# assess
Semantic boolean testing. Evaluates natural language predicates against text content using an LLM.
## Input
- `target`: String content to test (empty inputs return "false")
- `predicate`: Natural language question (e.g., "mentions specific dates?", "is critical of the author?")
## Output
Returns string `"true"` or `"false"` (lowercase string, not JSON boolean).
## Behavior
- **Auto-Chunking**: Texts >16k chars are split into boundary-aware chunks
- **OR Aggregation**: Returns `"true"` on first matching chunk (short-circuit), `"false"` only if all chunks fail
- **Fallback**: Returns `"false"` on ambiguous LLM responses
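The behavior above can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: `ask_llm` is a hypothetical placeholder for the real LLM call, and the `chunks`/`ask` parameters exist only to make the sketch testable.

```python
# Sketch of chunked OR-aggregation with short-circuiting and an
# ambiguity fallback. Not the skill's real code.
CHUNK_LIMIT = 16_000  # chars; inputs above this are split first


def ask_llm(chunk: str, predicate: str) -> str:
    # Placeholder: the real skill prompts an LLM and returns its raw answer.
    raise NotImplementedError


def assess(target: str, predicate: str, chunks=None, ask=ask_llm) -> str:
    if not target:
        return "false"           # empty inputs short-circuit to "false"
    chunks = chunks or [target]  # real skill splits on boundaries if > CHUNK_LIMIT
    for chunk in chunks:
        answer = ask(chunk, predicate).strip().lower()
        if answer == "true":
            return "true"        # short-circuit on first matching chunk
        # Anything other than an exact "true"/"false" is ambiguous and
        # treated as a non-match for this chunk.
    return "false"               # no chunk matched (or all were ambiguous)
```

Note that because of the short-circuit, chunks after the first match are never sent to the LLM.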
## Planning Notes
- Phrase predicates to detect *presence* rather than global summary (chunks are evaluated in isolation)
  - Good: "Contains mention of inflation?"
  - Risky: "Is the main topic inflation?"
- Each chunk requires its own LLM call, so cost scales with input length until the first match
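"Boundary-aware" splitting is not specified further in this document; one plausible interpretation is a splitter that packs paragraphs up to the limit without cutting mid-paragraph, sketched below under that assumption.

```python
# Hypothetical boundary-aware splitter: packs whole paragraphs into
# chunks of at most `limit` chars. The skill's actual algorithm may differ.
def split_chunks(text: str, limit: int = 16_000) -> list[str]:
    if len(text) <= limit:
        return [text]
    chunks, current = [], ""
    for para in text.split("\n\n"):
        candidate = (current + "\n\n" + para) if current else para
        if len(candidate) <= limit:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = para  # an over-long paragraph becomes its own chunk
    if current:
        chunks.append(current)
    return chunks
```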
## Example
```json
{"type":"assess","target":"$my_note","predicate":"is urgent?","out":"$urgency"}
```
## How it works
This skill performs semantic boolean tests on text using natural language predicates. It returns the lowercase string `"true"` or `"false"` and handles very long inputs via auto-chunking, short-circuiting on the first matching chunk to keep cost and latency low.
You provide a target text and a predicate phrased as a presence check. The engine splits inputs longer than the chunk threshold into boundary-aware chunks and evaluates each chunk with an LLM. If any chunk satisfies the predicate, the skill immediately returns `"true"`; if all chunks fail, or the LLM's responses are ambiguous, it returns `"false"`.
## FAQ
**What does the skill return for empty input?**
Empty targets always return `"false"`.

**How are long texts handled?**
Texts above the chunk threshold are split into boundary-aware chunks and evaluated independently; any matching chunk yields `"true"`.

**Is the result a boolean type?**
No. Results are the lowercase strings `"true"` or `"false"`.
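Because the result is a string, downstream code must compare against `"true"` explicitly; a Python truthiness check would pass even for `"false"`, since any non-empty string is truthy. The helper name below is illustrative, not part of the skill.

```python
# Compare explicitly: the string "false" is truthy in Python.
def as_bool(result: str) -> bool:
    return result == "true"


bad = bool("false")      # True  -- every non-empty string is truthy
good = as_bool("false")  # False -- explicit comparison
```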