
moneta-ingest skill


This skill parses new financial documents into Moneta, validating outputs and producing reconciliation summaries to highlight deltas.

npx playbooks add skill phrazzld/claude-config --skill moneta-ingest


Files (1): SKILL.md (1.3 KB)
---
name: moneta-ingest
description: |
  Parse new financial documents into Moneta. Detect type, run parser, validate output, summarize reconciliation.
user-invocable: true
effort: high
---

# /moneta-ingest

Parse new financial documents into Moneta.

## Steps

1. Scan `source/` for new files and record expected sources and date ranges.
2. Detect document type from filename prefix. If unclear, sniff content headers or PDF table titles.
3. Map type to parser and run it.
4. Validate parser outputs: count, totals, date range, no duplicate IDs.
5. Update aggregates and lots.
6. Emit reconciliation summary with deltas and any warnings.

Type map:

```
bofa       -> pnpm parse:bofa
river      -> pnpm parse:river
strike     -> pnpm parse:strike
cashapp    -> pnpm parse:all   (includes cashapp PDF parsing)
robinhood  -> pnpm parse:all
w2         -> pnpm parse:all
charitable -> pnpm parse:all
```
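The prefix-to-parser dispatch above can be sketched as a small shell helper. The function name (`parser_for`) and the prefix rule (everything before the first `-` or `_`) are assumptions for illustration, not the skill's actual detection logic.

```shell
# Hypothetical helper mirroring the type map: given a filename, echo the
# parser command for its prefix (the text before the first "-" or "_").
parser_for() {
  case "${1%%[-_]*}" in
    bofa)   echo "pnpm parse:bofa" ;;
    river)  echo "pnpm parse:river" ;;
    strike) echo "pnpm parse:strike" ;;
    *)      echo "pnpm parse:all" ;;   # cashapp, robinhood, w2, charitable
  esac
}

parser_for "bofa-2024-01.csv"    # prints: pnpm parse:bofa
```

Unknown prefixes fall through to `pnpm parse:all`, matching the type map's catch-all behavior.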

## Examples

```bash
# Parse everything and rebuild aggregates
pnpm parse:all
```

```bash
# Parse only BofA CSVs
pnpm parse:bofa
```
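Step 4's duplicate-ID check could be approximated with standard POSIX tools. This is a sketch, not the skill's actual validation code, and it assumes the transaction ID sits in the first CSV column.

```shell
# Count duplicated IDs in a parsed CSV (assumes ID in column 1, no header).
dup_ids() {
  cut -d, -f1 "$1" | sort | uniq -d | wc -l | tr -d ' '
}
```

A nonzero count would surface as a warning in the reconciliation summary.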

## References

- `source/`
- `normalized/transactions.json`
- `normalized/cost-basis.json`
- `normalized/accounts.json`
- `scripts/parse-all.ts`
- `scripts/parse-bofa.ts`
- `scripts/parse-river.ts`
- `scripts/parse-strike.ts`
- `scripts/parse-cashapp.ts`
- `scripts/parse-robinhood.ts`
- `scripts/parse-w2.ts`
- `scripts/parse-charitable.ts`

Overview

This skill ingests new financial documents and parses them into Moneta for downstream reconciliation and reporting. It detects document types, runs the appropriate parser, validates outputs, updates aggregates, and emits a concise reconciliation summary with any warnings. The result is normalized transactions, cost basis, and account data ready for analysis.

How this skill works

The skill scans a configured source directory for new files and records expected sources and date ranges. It identifies document type from filename prefixes or, when needed, by inspecting content headers and PDF table titles, then maps that type to a specific parser and executes it. After parsing, it validates outputs by checking record counts, totals, date ranges, and duplicate IDs, updates aggregate and lot data, and produces a reconciliation summary showing deltas and warnings.
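The content-sniffing fallback described above might look like the following sketch; the header strings are illustrative guesses, not the actual markers the parsers look for.

```shell
# Hypothetical sniffer: when the filename prefix is ambiguous, inspect the
# first line of the file for a known column header.
sniff_type() {
  local first
  first=$(head -n1 "$1")
  case "$first" in
    *"Posting Date"*) echo bofa ;;     # illustrative BofA CSV header
    *"BTC Amount"*)   echo strike ;;   # illustrative Strike export header
    *)                echo unknown ;;
  esac
}
```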

When to use it

  • When new bank, broker, or payment service statements arrive in source/
  • Before running financial reports or tax calculations to ensure data is up to date
  • When importing mixed file types (CSV, PDF) that need content-level detection
  • To validate parser outputs and detect duplicates, missing ranges, or unexpected totals
  • When you need an automated reconciliation summary after ingesting files

Best practices

  • Keep incoming files organized in source/ with consistent filename prefixes to improve detection accuracy
  • Review reconciliation summaries and warnings immediately to catch parsing errors early
  • Run targeted parsers (e.g., BofA) for incremental imports to speed processing
  • Monitor normalized/transactions.json, cost-basis.json, and accounts.json for unexpected deltas after ingest
  • Add sample documents for uncommon sources so content sniffing can be tuned

Example use cases

  • Daily intake of bank CSVs and payment PDFs to keep transactions current
  • Importing broker statements (Robinhood, River) and validating cost basis before tax reporting
  • Processing payroll forms (W2) alongside charitable and donation records for comprehensive year-end reconciliation
  • Selective reparse of a single source (pnpm parse:bofa) to correct previously missed rows
  • Full rebuild of aggregates and lots before an audit using pnpm parse:all

FAQ

How does the skill decide which parser to run?

It checks the filename prefix first; if the prefix is ambiguous, it sniffs file headers or PDF table titles and maps the detected type to a parser command.

What validation checks are performed after parsing?

It checks record counts, sum totals, and date ranges, and looks for duplicate transaction or lot IDs, then reports any mismatches in the reconciliation summary.
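As a concrete illustration of the date-range check, here is a sketch using awk. It assumes ISO-8601 dates in the second CSV column, which is an assumption about the parser output, not a documented schema.

```shell
# Count rows whose date (column 2, ISO-8601 format) falls outside [lo, hi].
# ISO dates sort correctly as strings, so awk string comparison suffices.
rows_outside_range() {
  awk -F, -v lo="$2" -v hi="$3" '$2 < lo || $2 > hi' "$1" | wc -l | tr -d ' '
}
```

Any rows flagged here would show up as a date-range mismatch in the reconciliation summary.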