
product-pro skill


This skill helps you design and validate probabilistic AI strategies, guiding rapid agentic prototyping and hypothesis testing for high-velocity product development.

npx playbooks add skill yuniorglez/gemini-elite-core --skill product-pro


SKILL.md
---
name: product-pro
id: product-pro
version: 1.1.0
description: "Senior AI Product Manager. Expert in Probabilistic Strategy, Rapid Agentic Prototyping, and Hypothesis Generation for 2026."
---

# 🚀 Skill: Product Pro (v1.1.0)

## Executive Summary
The `product-pro` skill orchestrates the product's vision, strategy, and "Magic Moments." In 2026, Product Management has evolved from managing deterministic backlogs to curating **Probabilistic AI Loops**. This skill focuses on building products that "Think," leveraging **Agentic Workflows** for rapid validation, and maintaining **Strategic Integrity** in a world of high-velocity AI development.

---

## 📋 Table of Contents
1. [AI Product Philosophies](#ai-product-philosophies)
2. [The "Do Not" List (Anti-Patterns)](#the-do-not-list-anti-patterns)
3. [Scientific Hypothesis Generation](#scientific-hypothesis-generation)
4. [AI Product Strategy](#ai-product-strategy)
5. [Rapid Agentic Prototyping](#rapid-agentic-prototyping)
6. [Context Engineering for PMs](#context-engineering-for-pms)
7. [Reference Library](#reference-library)

---

## πŸ›οΈ AI Product Philosophies

1.  **Confidence over Certainty**: Design for probabilistic outcomes. What happens at 70% confidence?
2.  **Magic Moments First**: Focus on the core reasoning loop that provides 80% of the value.
3.  **Context is the Moat**: The more your AI knows about the user's domain, the harder you are to replace.
4.  **Agentic Velocity**: Use AI agents to build and test prototypes in days.
5.  **Ethical Guardianship**: Ensure that AI decisions are transparent, bias-free, and secure.
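The first philosophy asks what happens at 70% confidence. One way to answer it is to make the behavior at each confidence band explicit in code. The sketch below is a hypothetical illustration: the thresholds (0.9 and 0.7) and the `route_response` function are assumptions for the example, not part of this skill's specification.

```python
# Hypothetical sketch: routing a probabilistic AI answer by confidence band.
# The 0.9 / 0.7 thresholds are illustrative, not prescribed by this skill.

def route_response(answer: str, confidence: float) -> dict:
    """Decide how to present a probabilistic result to the user."""
    if confidence >= 0.9:
        # High confidence: present the answer directly.
        return {"mode": "direct", "text": answer}
    if confidence >= 0.7:
        # Graceful Uncertainty UI: show the answer, but label it as uncertain.
        return {"mode": "hedged", "text": f"Likely: {answer} (please verify)"}
    # Below threshold: fall back rather than fail silently.
    return {"mode": "fallback", "text": "I'm not sure. Want to refine the question?"}

print(route_response("Renew on May 3", 0.72)["mode"])  # hedged
```

Specifying bands like this up front forces the "Graceful Uncertainty UI" conversation before launch rather than after the first silent failure.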

---

## 🚫 The "Do Not" List (Anti-Patterns)

| Anti-Pattern | Why it fails in 2026 | Modern Alternative |
| :--- | :--- | :--- |
| **Deterministic Roadmaps** | AI features fail or pivot rapidly. | Use **Experiment Loops**. |
| **Silent AI Failures** | Destroys user trust instantly. | Use **Graceful Uncertainty UI**. |
| **"AI for AI's Sake"** | High cost, low business value. | **Problem-First Integration**. |
| **Thin Context** | Leads to hallucinations. | **Context Engineering**. |
| **Ignoring Data Privacy** | Legal and brand catastrophe. | **Privacy-by-Design Architecture**. |

---

## 🧪 Scientific Hypothesis Generation

We use a rigorous method to test AI improvements:
1.  **Observation**: "Users are confused by Feature X."
2.  **Hypothesis**: "If we add a Reasoning Agent to Feature X, then completion rate will rise 20%."
3.  **Experiment**: Build a minimal agentic prototype.
4.  **Validation**: Measure helpfulness and accuracy logs.
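The loop above can be sketched as a small data structure so each experiment is recorded in a testable form. This is a hypothetical illustration: the `Hypothesis` class and its field names are assumptions for the example, not an API defined by this skill.

```python
# Hypothetical sketch of the observe -> hypothesize -> experiment -> validate
# loop as a record; field names are illustrative.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    observation: str      # e.g. "Users are confused by Feature X."
    change: str           # e.g. "Add a Reasoning Agent to Feature X."
    metric: str           # e.g. "task completion rate"
    expected_lift: float  # e.g. 0.20 for a +20% relative improvement

    def validated(self, baseline: float, measured: float) -> bool:
        """The hypothesis holds if the measured relative lift meets the expectation."""
        return (measured - baseline) / baseline >= self.expected_lift

h = Hypothesis(
    observation="Users are confused by Feature X.",
    change="Add a Reasoning Agent to Feature X.",
    metric="completion rate",
    expected_lift=0.20,
)
print(h.validated(baseline=0.50, measured=0.62))  # True: +24% relative lift
```

Writing the expected lift down before the experiment keeps validation honest; the number is committed to in step 2, not fitted after the results arrive.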

---

## 📖 Reference Library

Detailed deep-dives into AI Product Excellence:

- [**AI Product Strategy**](./references/ai-product-strategy.md): Navigating the probabilistic era.
- [**Rapid Prototyping**](./references/rapid-prototyping-agentic.md): Building with agentic velocity.
- [**Context Engineering**](./references/context-engineering-pm.md): Curating truth for AI agents.
- [**Hypothesis Criteria**](./references/hypothesis_quality_criteria.md): Framework for rigorous testing.

---

*Updated: January 22, 2026 - 20:30*

Overview

This skill positions a Senior AI Product Manager to lead probabilistic, agent-driven product development in 2026. It packages philosophies, anti-patterns, hypothesis workflows, and tactical guidance for rapid agentic prototyping and context engineering. The goal is to help teams build AI features that prioritize high-impact reasoning loops, measurable experiments, and strategic integrity.

How this skill works

The skill inspects product opportunities through a probabilistic lens, converting observations into testable hypotheses and rapid agentic prototypes. It provides structured experiment loops: observe, hypothesize, build a minimal agentic prototype, and validate with metrics like completion rate and helpfulness. It also flags common anti-patterns and prescribes modern alternatives such as graceful uncertainty UI and privacy-by-design.

When to use it

  • When launching AI-driven features that require fast validation and measurable impact.
  • When designing user flows where core reasoning loops deliver the majority of value.
  • When you need to convert qualitative user observations into scientific hypotheses.
  • When experimenting with agentic prototyping to compress development cycles.
  • When establishing product guardrails around ethics, bias, and privacy.

Best practices

  • Design for confidence ranges, not binary correctness; specify behavior at 60–80% confidence.
  • Prioritize 'magic moments' β€” isolate the 1–2 reasoning loops that create most value.
  • Run short experiment loops with minimal agentic prototypes before full productization.
  • Engineer rich, domain-specific context to reduce hallucinations and increase defensibility.
  • Include transparent uncertainty UI and logging to maintain user trust and auditability.

Example use cases

  • Improve onboarding by building an assistant agent that increases task completion by a measurable percent.
  • Prototype an automated research agent to surface prioritized insights in days, not months.
  • Validate whether adding context enrichment reduces error rates for a high-risk workflow.
  • Swap a deterministic roadmap for an experimentation cadence to adapt to changing model behavior.
  • Create a privacy-preserving pipeline for agents that need sensitive domain context.

FAQ

How do I measure success for an agentic prototype?

Define one primary metric tied to user value (e.g., completion rate, time saved) and secondary metrics for safety and accuracy; run short A/B or cohort tests and collect qualitative signals.
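The measurement step above can be made concrete with a short sketch: compute the primary metric per cohort and check whether the difference is larger than noise. This is a hypothetical illustration under assumed cohort sizes; the function names and the example counts are not from this skill.

```python
# Hypothetical sketch: comparing completion rate between a control cohort
# and an agent cohort with a simple two-proportion z-test.
import math

def completion_rate(completed: int, total: int) -> float:
    """Primary metric: fraction of users who completed the task."""
    return completed / total

def two_proportion_z(c1: int, n1: int, c2: int, n2: int) -> float:
    """z-score for the difference between two completion rates."""
    p1, p2 = c1 / n1, c2 / n2
    p = (c1 + c2) / (n1 + n2)  # pooled proportion under the null hypothesis
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# Illustrative counts: control completes 400/1000 (40%), agent 460/1000 (46%).
z = two_proportion_z(400, 1000, 460, 1000)
print(round(z, 2))  # 2.71, above 1.96, so significant at the 95% level
```

Pair a check like this with the secondary safety and accuracy metrics and the qualitative signals the answer mentions; statistical significance on the primary metric alone is not a launch decision.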

What if the agent is only 70% confident?

Design the UI to surface uncertainty, offer fallback options, and use the interaction to collect data for iterative improvement.