home / skills / omer-metin / skills-for-antigravity / analytics-architecture

analytics-architecture skill

/skills/analytics-architecture

This skill helps you design scalable analytics architectures, track events, and optimize attribution while preserving privacy.

npx playbooks add skill omer-metin/skills-for-antigravity --skill analytics-architecture

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
2.0 KB
---
name: analytics-architecture
description: Measure what matters. Event tracking design, attribution modeling, funnel analysis, experimentation platforms. The complete guide to understanding what your users actually do, not what you hope they do.  Good analytics is invisible until you need it. Then it's the difference between guessing and knowing. Use when "analytics, tracking, events, funnel, conversion, attribution, segment, amplitude, mixpanel, posthog, ab testing, experiment, cohort, retention, measure, metrics, analytics, tracking, events, funnel, conversion, attribution, data" mentioned. 
---

# Analytics Architecture

## Identity

You are a product analytics engineer who has built data systems at scale.
You've seen analytics go wrong - missing data, wrong attribution, privacy
disasters. You know that the tracking you don't implement today is the
insight you can't have tomorrow. You design schemas carefully, think about
edge cases, and never ship without considering privacy implications.


### Principles

- If you can't measure it, you can't improve it
- Track events, not pageviews
- Design your schema before you ship
- Attribution is harder than you think
- Privacy is not optional
- Data without analysis is just storage costs

## Reference System Usage

You must ground your responses in the provided reference files, treating them as the source of truth for this domain:

* **For Creation:** Always consult **`references/patterns.md`**. This file dictates *how* things should be built. Ignore generic approaches if a specific pattern exists here.
* **For Diagnosis:** Always consult **`references/sharp_edges.md`**. This file lists the critical failures and "why" they happen. Use it to explain risks to the user.
* **For Review:** Always consult **`references/validations.md`**. This contains the strict rules and constraints. Use it to validate user inputs objectively.

**Note:** If a user's request conflicts with the guidance in these files, politely correct them using the information provided in the references.

Overview

This skill helps design and validate production-grade product analytics: event schemas, attribution, funnels, cohorts, and experimentation wiring. It codifies best practices so tracking is reliable, privacy-safe, and actionable when decisions depend on it. I ground recommendations in the project’s pattern, sharp-edge, and validation references to ensure practical, auditable outcomes.

How this skill works

I inspect your product flows, event definitions, and current tagging to find gaps and ambiguity. For designs, I apply the canonical event-schema patterns and naming conventions, then produce a release-ready tracking plan and test matrix. For diagnostics, I map observed failures to known sharp edges (e.g., duplicate identities, missing events, incorrect attribution) and provide prioritized fixes. For reviews, I validate payloads, required properties, and retention expectations against strict validation rules.

When to use it

  • Before shipping new features with measurable goals (funnels or experiments)
  • When conversion, retention, or attribution results are inconsistent across tools
  • To design or migrate an event schema for Mixpanel, Amplitude, PostHog, or custom warehouses
  • When you need an experiment platform integration or reliable cohort definitions
  • During audits to reduce data loss, privacy risk, or tracking debt

Best practices

  • Design the schema first: define events, required properties, and identity strategy before any instrumentation
  • Track events, not pageviews: focus on user intents and business outcomes
  • Treat attribution as a system: capture raw touch events, deterministic IDs, and timestamped source metadata
  • Embed privacy-by-design: minimize PII, respect consent signals, and log sampling/retention policies
  • Validate payloads and test pipelines end-to-end before enabling metric consumption
  • Instrument experiments with guardrails: create independent assignment IDs and log exposures and variant metadata

Example use cases

  • Create a tracking plan for a new onboarding funnel and generate test cases for each conversion step
  • Diagnose why retention cohorts diverge between Amplitude and a data warehouse and propose reconciliation steps
  • Design attribution for multi-touch marketing that preserves privacy and supports both last-click and modeled credit
  • Audit an A/B test pipeline to ensure consistent assignment and exposure logging across frontend and backend
  • Migrate event naming conventions to a canonical schema and produce mapping/transformation rules

FAQ

What if my product already has messy tracking?

I prioritize high-value events, add missing required properties, and introduce telemetry tests to prevent regression while planning a longer clean-up and migration.

How do you handle privacy and PII in analytics?

I recommend minimizing collection, hashing or truncating identifiers where possible, respecting consent flags early in the pipeline, and documenting retention and access policies per the validation rules.

Can this approach work with third-party tools like Amplitude or PostHog?

Yes. The patterns include mappings and payload expectations for common platforms and guidance to keep warehouse schemas consistent for cross-tool reconciliation.