
This skill assists developers building visionOS apps by outlining spatial computing patterns, RealityKit usage, and immersive space workflows for efficient development.

npx playbooks add skill fusengine/agents --skill visionos

Review the files below or copy the command above to add this skill to your agents.

Files (4)
SKILL.md
2.0 KB
---
name: visionos
description: visionOS platform-specific development with spatial computing, RealityKit, immersive spaces, and volumes. Use when building Vision Pro apps, 3D experiences, or mixed reality features.
versions:
  visionos: 26
user-invocable: false
references: references/spatial-computing.md, references/realitykit.md, references/ornaments.md
related-skills: swift-core, swiftui-core, mcp-tools
---

# visionOS Platform

visionOS-specific development for Apple Vision Pro spatial computing.

## Agent Workflow (MANDATORY)

Before ANY implementation, use `TeamCreate` to spawn 3 agents:

1. **fuse-ai-pilot:explore-codebase** - Analyze existing visionOS patterns
2. **fuse-ai-pilot:research-expert** - Verify latest visionOS 26 docs via Context7/Exa
3. **mcp__apple-docs__search_apple_docs** - Check spatial computing patterns

After implementation, run **fuse-ai-pilot:sniper** for validation.

---

## Overview

### When to Use

- Building Vision Pro applications
- Creating 3D spatial experiences
- Implementing mixed reality features
- Designing immersive environments
- Integrating hand and eye tracking

### Why visionOS Skill

| Feature | Benefit |
|---------|---------|
| Spatial computing | 3D interaction |
| RealityKit | 3D content rendering |
| Immersive spaces | Full environment |
| Volumes | 3D bounded content |

---

## Scene Types

| Scene | Description |
|-------|-------------|
| WindowGroup | 2D windows in space |
| Volume | 3D bounded content |
| ImmersiveSpace | Full immersive experience |
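
A minimal sketch of how these three scene types sit together in an app's `body`. View names such as `ContentView`, `PreviewVolumeView`, and `ImmersiveView` are placeholders:

```swift
import SwiftUI

@main
struct SpatialApp: App {
    var body: some Scene {
        // Familiar 2D window in the shared space
        WindowGroup(id: "main") {
            ContentView()
        }

        // Bounded 3D content rendered inside a volume
        WindowGroup(id: "preview") {
            PreviewVolumeView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.6, height: 0.6, depth: 0.6, in: .meters)

        // Full immersive experience, opened explicitly at runtime
        ImmersiveSpace(id: "immersive") {
            ImmersiveView()
        }
    }
}
```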

---

## Reference Guide

| Need | Reference |
|------|-----------|
| Windows, volumes, spaces | [spatial-computing.md](references/spatial-computing.md) |
| RealityView, 3D content | [realitykit.md](references/realitykit.md) |
| Attachments, UI ornaments | [ornaments.md](references/ornaments.md) |

---

## Best Practices

1. **Start with windows** - Familiar 2D first
2. **Add depth gradually** - Volumes for 3D
3. **Use ornaments** - Attach 2D UI to 3D (see the sketch after this list)
4. **Respect space** - Don't overwhelm user
5. **Hand tracking** - Natural interactions
6. **Eye comfort** - Avoid rapid movements
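
A minimal ornament sketch, assuming a window whose main content is 3D; `ModelCanvas` is a placeholder for that content view:

```swift
import SwiftUI

struct ConfiguratorWindow: View {
    var body: some View {
        ModelCanvas() // placeholder for the 3D content view
            .ornament(attachmentAnchor: .scene(.bottom)) {
                // 2D controls float just outside the window bounds
                HStack {
                    Button("Reset") { }
                    Button("Share") { }
                }
                .padding()
                .glassBackgroundEffect()
            }
    }
}
```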

Overview

This skill guides visionOS platform-specific development for Apple Vision Pro, focusing on spatial computing, RealityKit, immersive spaces, and volumes. It helps teams build 3D experiences, mixed reality features, and comfortable user interactions using Swift and SwiftUI workflows. The skill emphasizes practical patterns, scene types, and a mandatory agent workflow for safe, up-to-date implementation.

How this skill works

Before any implementation, spawn a team of three agents using TeamCreate: fuse-ai-pilot:explore-codebase to analyze existing visionOS patterns, fuse-ai-pilot:research-expert to verify the latest visionOS 26 docs via Context7/Exa, and mcp__apple-docs__search_apple_docs to check spatial computing patterns in Apple documentation. During development, the skill maps common scene types (WindowGroup, Volume, ImmersiveSpace) to appropriate RealityKit and UI patterns and recommends incremental depth and ornament strategies. After implementation, run fuse-ai-pilot:sniper to validate behavior, compliance with guidelines, and integration with hand/eye tracking and volume constraints.
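
For the RealityKit side, a minimal RealityView sketch; the generated sphere stands in for assets a real project would load from a .usdz file or Reality Composer Pro package:

```swift
import SwiftUI
import RealityKit

struct GlobeVolume: View {
    var body: some View {
        RealityView { content in
            // Runs once when the view appears; build or load entities here
            let globe = ModelEntity(
                mesh: .generateSphere(radius: 0.15),
                materials: [SimpleMaterial(color: .blue, isMetallic: false)]
            )
            content.add(globe)
        } update: { content in
            // Re-runs when observed SwiftUI state changes
        }
    }
}
```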

When to use it

  • Building Vision Pro applications and prototypes
  • Designing 3D spatial experiences and interactive volumes
  • Implementing mixed reality features that use hand or eye tracking (see the hand-tracking sketch after this list)
  • Creating immersive spaces that require full-environment rendering
  • Migrating 2D app interfaces into spatially aware windows
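
A hand-tracking setup sketch, assuming an ImmersiveSpace is open (ARKit data providers only deliver updates there) and the app has requested hand-tracking authorization:

```swift
import ARKit

// Minimal sketch: stream hand anchors while an ImmersiveSpace is open.
func trackHands() async throws {
    let session = ARKitSession()
    let handTracking = HandTrackingProvider()
    try await session.run([handTracking])

    for await update in handTracking.anchorUpdates {
        // Each HandAnchor carries joint transforms, e.g. the index fingertip
        let fingertip = update.anchor.handSkeleton?.joint(.indexFingerTip)
        _ = fingertip?.anchorFromJointTransform
    }
}
```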

Best practices

  • Always run the TeamCreate agent workflow before coding and fuse-ai-pilot:sniper after implementation for verification
  • Start with 2D WindowGroup patterns before adding volumetric content to reduce complexity
  • Add depth and volumes gradually; reserve ImmersiveSpace for full-environment use cases
  • Use ornaments to attach 2D UI to 3D scenes to preserve usability and clarity
  • Prioritize hand tracking and eye comfort: avoid rapid movements and visual overload

Example use cases

  • Convert an existing iPad app UI into a WindowGroup floating in space, then add a lightweight Volume for contextual 3D previews
  • Build a RealityKit-based product configurator inside a bounded Volume with touch and hand gestures
  • Create an ImmersiveSpace walkthrough that uses eye tracking to guide attention and hand gestures to manipulate objects (see the sketch after this list)
  • Prototype a mixed reality collaboration scene where participants share a spatial workspace with anchored ornaments
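
Opening and dismissing an immersive space is driven from SwiftUI environment actions. A minimal sketch, assuming the id "immersive" matches an ImmersiveSpace scene declared in the App:

```swift
import SwiftUI

struct WalkthroughControls: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace
    @Environment(\.dismissImmersiveSpace) private var dismissImmersiveSpace

    var body: some View {
        HStack {
            Button("Start walkthrough") {
                Task {
                    // "immersive" must match an ImmersiveSpace scene id
                    switch await openImmersiveSpace(id: "immersive") {
                    case .opened: break
                    default: break // user cancelled or an error occurred
                    }
                }
            }
            Button("End walkthrough") {
                Task { await dismissImmersiveSpace() }
            }
        }
    }
}
```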