
This skill helps implement Apple Intelligence and on-device AI using Foundation Models, LanguageModelSession, and Generable for structured output.

npx playbooks add skill charleswiltgen/axiom --skill axiom-ios-ai


---
name: axiom-ios-ai
description: Use when implementing ANY Apple Intelligence or on-device AI feature. Covers Foundation Models, @Generable, LanguageModelSession, structured output, Tool protocol, iOS 26 AI integration.
license: MIT
---

# iOS Apple Intelligence Router

**You MUST use this skill for ANY Apple Intelligence or Foundation Models work.**

## When to Use

Use this router when:
- Implementing Apple Intelligence features
- Using Foundation Models
- Working with LanguageModelSession
- Generating structured output with @Generable
- Debugging AI generation issues
- Building on-device AI features for iOS 26

## Routing Logic

### Foundation Models Work

**Implementation patterns** → `/skill axiom-foundation-models`
- LanguageModelSession basics
- @Generable structured output
- Tool protocol integration
- Streaming with PartiallyGenerated
- Dynamic schemas
- 26 code examples from WWDC sessions

**API reference** → `/skill axiom-foundation-models-ref`
- Complete API documentation
- All @Generable examples
- Tool protocol patterns
- Streaming generation patterns

**Diagnostics** → `/skill axiom-foundation-models-diag`
- AI response blocked
- Generation slow
- Guardrail violations
- Context limits exceeded
- Model unavailable
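Before routing into any of these flows, confirm the model is actually available on the device. A minimal sketch of that check, assuming the `SystemLanguageModel` availability API introduced with iOS 26 (`showAIUnavailableUI` is a hypothetical app function, not part of the framework):

```swift
import FoundationModels

// Check on-device model availability before creating a session.
switch SystemLanguageModel.default.availability {
case .available:
    break  // safe to create a LanguageModelSession
case .unavailable(let reason):
    // reason may be .deviceNotEligible, .appleIntelligenceNotEnabled,
    // or .modelNotReady -- show a fallback UI instead of failing silently.
    showAIUnavailableUI(for: reason)  // hypothetical app function
}
```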

## Decision Tree

1. Implementing Foundation Models / @Generable / Tool protocol? → foundation-models
2. Need API reference / code examples? → foundation-models-ref
3. Debugging AI issues (blocked, slow, guardrails)? → foundation-models-diag

## Anti-Rationalization

| Thought | Reality |
|---------|---------|
| "Foundation Models is just LanguageModelSession" | Foundation Models also includes @Generable, the Tool protocol, streaming, and guardrails. foundation-models covers all of these. |
| "I'll figure out the AI patterns as I go" | AI APIs have specific error handling and fallback requirements. foundation-models prevents runtime failures. |
| "I've used LLMs before, this is similar" | Apple's on-device models have unique constraints (guardrails, context limits). foundation-models is Apple-specific. |

## Critical Patterns

**foundation-models**:
- LanguageModelSession setup
- @Generable for structured output
- Tool protocol for function calling
- Streaming generation
- Dynamic schema evolution
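As a reference point for what these patterns look like together, here is a minimal sketch following the API shape from Apple's WWDC25 Foundation Models sessions; the `Itinerary` type and prompt are illustrative, and the exact signatures should be verified against the current SDK:

```swift
import FoundationModels

// @Generable lets the model produce this type directly as structured output.
@Generable
struct Itinerary {
    @Guide(description: "A short, catchy trip title")
    var title: String
    var days: [String]
}

let session = LanguageModelSession(
    instructions: "You are a helpful travel assistant."
)

// respond(to:generating:) constrains generation to the @Generable schema.
let response = try await session.respond(
    to: "Plan a weekend in Tokyo",
    generating: Itinerary.self
)
print(response.content.title)
```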

**foundation-models-diag**:
- Blocked response handling
- Performance optimization
- Guardrail violations
- Context management
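A sketch of the kind of error handling the diagnostics module covers. The error cases follow `LanguageModelSession.GenerationError` as documented for iOS 26; the fallback strings are placeholders an app would replace:

```swift
import FoundationModels

func safeRespond(_ session: LanguageModelSession, to prompt: String) async -> String {
    do {
        return try await session.respond(to: prompt).content
    } catch LanguageModelSession.GenerationError.guardrailViolation {
        // Blocked by the safety system -- show a neutral fallback, don't retry blindly.
        return "Sorry, I can't help with that request."
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // Context limit hit -- start a fresh session, optionally carrying a summary over.
        return "This conversation got too long. Let's start over."
    } catch {
        return "Something went wrong: \(error.localizedDescription)"
    }
}
```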

## Example Invocations

User: "How do I use Apple Intelligence to generate structured data?"
→ Invoke: `/skill axiom-foundation-models`

User: "My AI generation is being blocked"
→ Invoke: `/skill axiom-foundation-models-diag`

User: "Show me @Generable examples"
→ Invoke: `/skill axiom-foundation-models-ref`

User: "Implement streaming AI generation"
→ Invoke: `/skill axiom-foundation-models`

Overview

This skill routes and enforces patterns for any Apple Intelligence or on-device AI work on Apple platforms, especially iOS 26. Use it as the mandatory entry point for Foundation Models, @Generable structured output, LanguageModelSession, Tool protocol, and streaming generation. It centralizes decision logic so you pick the correct implementation, reference, or diagnostic flow quickly.

How this skill works

The skill inspects your task and routes you to one of three focused modules: foundation-models for implementation patterns, foundation-models-ref for complete API examples and docs, and foundation-models-diag for diagnostics and performance issues. It codifies critical patterns such as LanguageModelSession setup, @Generable schemas, Tool protocol integration, streaming via PartiallyGenerated, and guardrail handling. Follow the decision tree: implementation → foundation-models, references → foundation-models-ref, debugging → foundation-models-diag.

When to use it

  • Implementing any Apple Intelligence feature or on-device Foundation Model integration
  • Generating structured output using @Generable or dynamic schemas
  • Building LanguageModelSession-based flows and Tool protocol integrations
  • Implementing streaming generation or PartiallyGenerated patterns
  • Diagnosing blocked responses, slow generation, guardrail violations, or context limits

Best practices

  • Always route Foundation Models work through the foundation-models module to avoid missing Apple-specific patterns
  • Use @Generable for strict, versionable structured output, and fall back to dynamic schemas when the output shape isn't known at compile time
  • Integrate Tool protocol for function-calling style workflows and isolate tool implementations for testability
  • Implement streaming with PartiallyGenerated to deliver progressive UX and reduce time-to-first-token
  • Proactively handle guardrail violations, context limits, and model fallbacks in the diagnostics flow
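The streaming practice above can be sketched as follows, assuming the `streamResponse(to:generating:)` API shown in the WWDC25 sessions; `Summary` is illustrative, `updateUI` is a hypothetical UI hook, and the snapshot type should be verified against the current SDK:

```swift
import FoundationModels

@Generable
struct Summary {
    var headline: String
    var bulletPoints: [String]
}

let session = LanguageModelSession()

// Each element is a Summary.PartiallyGenerated snapshot, so the UI can
// render fields as they fill in rather than waiting for the full object.
for try await partial in session.streamResponse(
    to: "Summarize the attached article",
    generating: Summary.self
) {
    updateUI(with: partial)  // hypothetical UI hook
}
```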

Example use cases

  • Create a LanguageModelSession that streams incremental results and emits structured @Generable objects
  • Look up comprehensive @Generable examples and API snippets in the foundation-models-ref module
  • Troubleshoot blocked or filtered AI responses using foundation-models-diag diagnostic patterns
  • Implement a Tool protocol bridge to call native device capabilities from a Foundation Model
  • Migrate existing LLM code to iOS 26 on-device AI with proper guardrail and context management
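For the Tool protocol bridge use case, a minimal sketch following the protocol shape shown at WWDC25; the weather tool and its hardcoded response are illustrative, and a real implementation would call WeatherKit or a web API:

```swift
import FoundationModels

struct WeatherTool: Tool {
    let name = "getWeather"
    let description = "Look up the current temperature for a city"

    @Generable
    struct Arguments {
        @Guide(description: "The city to look up")
        var city: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // A real implementation would fetch live data here.
        ToolOutput("It is 22°C in \(arguments.city).")
    }
}

// The model can decide to call the tool during generation.
let session = LanguageModelSession(tools: [WeatherTool()])
```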

FAQ

Do I have to use this skill for all Apple Intelligence work?

Yes. Use this router for any Foundation Models or on-device AI work to ensure correct patterns and diagnostics are applied.

Which module handles streaming and partially generated output?

The foundation-models implementation module contains streaming patterns, PartiallyGenerated handling, and dynamic schema guidance.