
shelby skill


This skill covers decentralized blob storage on Aptos, including erasure coding, the TypeScript SDK, and private network access, for scalable video streaming, AI training, and large datasets.

npx playbooks add skill raintree-technology/claude-starter --skill shelby

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
---
name: shelby
description: Shelby Protocol expert for decentralized blob storage on Aptos blockchain. Covers erasure coding (Clay Codes), TypeScript SDK, smart contracts, CLI tools, storage providers, and dApp integration. High-performance storage for video streaming, AI training, and large datasets. Triggers on Shelby, Shelby Protocol, decentralized storage, Aptos storage, blob storage, ShelbyNodeClient, erasure coding, Clay Codes.
allowed-tools: Read, Write, Edit, Grep, Glob, Bash
model: sonnet
license: MIT
metadata:
  author: raintree
  version: "1.0"
---

# Shelby Protocol Expert

Shelby is a decentralized blob storage network on the Aptos blockchain with erasure coding, micropayment channels, and dedicated private bandwidth.

## When to Use

- Building dApps that need decentralized storage
- Integrating Shelby SDK (TypeScript/Node.js/Browser)
- Understanding erasure coding and data durability
- Working with storage providers and RPC servers
- Video streaming, AI training data, large datasets

## Architecture Overview

```
User (Public Internet)
  ↓
Shelby RPC Server (handles blob operations)
  ↓ (Private Fiber Network)
Storage Provider Servers (16 per placement group)
  ↓
Aptos L1 Blockchain (state management)
```

**Key Components:**
- **Aptos Smart Contract** - System state, audits, settlements
- **Storage Providers** - Store erasure-coded chunks
- **RPC Servers** - User-facing API, erasure coding/decoding
- **Private Network** - Dedicated fiber for internal communication

## Data Model

### Erasure Coding (Clay Codes)

Data is split and encoded for durability:

```
10MB Blob → Split into chunksets → Erasure code each
                                         ↓
                               16 chunks per chunkset
                               (10 data + 6 parity)
                                         ↓
                               Distributed to 16 storage providers
```

**Recovery:** Any 10 of 16 chunks can reconstruct data

**Why Clay Codes?**
- Same storage efficiency as Reed-Solomon
- Up to 4x less bandwidth during recovery
- Optimal for large-scale distributed storage
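
As a quick check on the numbers: with k = 10 data chunks and n = 16 total chunks per chunkset, the raw storage overhead is n/k = 1.6x, and each chunkset tolerates the loss of up to n - k = 6 chunks (i.e. up to 6 of the 16 storage providers) before data becomes unrecoverable.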

### Blob Naming

```
<account>/<user-defined-path>

Examples:
0x123.../videos/intro.mp4
0x123.../datasets/training/batch-001.parquet
```

- Max 1024 characters
- Must NOT end with `/`
- Flat namespace (no directories, just paths)
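
A small validator can catch these constraints before an upload call. A minimal sketch; `assertValidBlobName` is a hypothetical helper, not part of the SDK:

```typescript
// Hypothetical helper -- not part of @shelby-protocol/sdk.
// Enforces the documented constraints: max 1024 characters,
// must not end with "/", and must include an account prefix.
function assertValidBlobName(name: string): void {
  if (name.length === 0 || name.length > 1024) {
    throw new Error(`Blob name must be 1-1024 characters, got ${name.length}`);
  }
  if (name.endsWith("/")) {
    throw new Error(`Blob name must not end with "/": ${name}`);
  }
  if (!name.includes("/")) {
    throw new Error(`Expected "<account>/<path>" form: ${name}`);
  }
}

assertValidBlobName("0x123abc/videos/intro.mp4"); // ok
```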

## SDK Integration

### Installation

```bash
npm install @shelby-protocol/sdk @aptos-labs/ts-sdk
```

### Node.js Client

```typescript
import { ShelbyNodeClient } from "@shelby-protocol/sdk/node";
import { Network } from "@aptos-labs/ts-sdk";

const client = new ShelbyNodeClient({
  network: Network.SHELBYNET,
  apiKey: process.env.SHELBY_API_KEY,
});
```

### Browser Client

```typescript
import { ShelbyClient } from "@shelby-protocol/sdk/browser";
import { Network } from "@aptos-labs/ts-sdk";

const client = new ShelbyClient({
  network: Network.SHELBYNET,
  // process.env is not available in browsers at runtime; a bundler
  // (Vite, webpack, etc.) must inline this value at build time.
  apiKey: process.env.SHELBY_API_KEY,
});
```

## Common Operations

### Upload Blob

```typescript
const result = await client.uploadBlob({
  blobName: "user/data/file.txt",
  data: fileBuffer,
  expirationTimestamp: Date.now() + 30 * 24 * 60 * 60 * 1000, // 30 days
});
```
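
`fileBuffer` above is assumed to already hold the file contents; in Node.js it could come from the standard fs API:

```typescript
import { readFile } from "node:fs/promises";

// Read the file into memory; fine for small files, but prefer
// multipart upload (below) for anything larger than 10MB.
const fileBuffer = await readFile("./file.txt");
```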

### Download Blob

```typescript
// Full blob
const data = await client.getBlob("user/data/file.txt");

// Byte range (efficient for large files)
const partial = await client.getBlob("user/data/file.txt", {
  range: { start: 0, end: 1024 },
});
```
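
For streaming-style consumption, the same range option can pull a large blob in fixed-size windows. A sketch built on the byte-range call above, assuming the end offset is inclusive (as in HTTP range headers) and the total size is known to the application:

```typescript
const WINDOW = 1024 * 1024; // 1 MB window (arbitrary choice)
const totalSize = 10 * WINDOW; // assumed known, e.g. from app metadata

for (let start = 0; start < totalSize; start += WINDOW) {
  const end = Math.min(start + WINDOW, totalSize) - 1;
  const window = await client.getBlob("user/data/file.txt", {
    range: { start, end },
  });
  // Consume the window here, e.g. feed a media decoder or append to disk.
}
```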

### Multipart Upload (Large Files)

For files > 10MB:

```typescript
const upload = await client.startMultipartUpload({
  blobName: "large-dataset.bin",
  expirationTimestamp: futureTimestamp,
});

for (const [index, part] of fileParts.entries()) {
  await client.uploadPart({
    uploadId: upload.id,
    partNumber: index,
    data: part,
  });
}

await client.completeMultipartUpload({ uploadId: upload.id });
```
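
`fileParts` in the loop above is assumed; a minimal way to produce it is to slice the buffer into fixed-size parts (10 MB here, to line up with the chunkset size):

```typescript
const PART_SIZE = 10 * 1024 * 1024; // 10 MB parts

// Split a Buffer into fixed-size parts for uploadPart().
function splitIntoParts(data: Buffer, partSize = PART_SIZE): Buffer[] {
  const parts: Buffer[] = [];
  for (let offset = 0; offset < data.length; offset += partSize) {
    parts.push(data.subarray(offset, offset + partSize));
  }
  return parts;
}

// largeFileBuffer: a Buffer read earlier (see the readFile example above).
const fileParts = splitIntoParts(largeFileBuffer);
```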

### Session Management

Reuse sessions for multiple operations:

```typescript
const session = await client.createSession({
  rpcUrl: "https://api.shelbynet.shelby.xyz/shelby",
  paymentAmount: 1000000, // ShelbyUSD micro-units
});

// Multiple reads on same session
for (const blobName of blobsToDownload) {
  await client.getBlob(blobName, { session });
}

await session.close();
```
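
A session carries a prepaid payment amount, so it is worth guaranteeing that `close()` runs even when a read throws. A minimal sketch using only the calls shown above:

```typescript
const session = await client.createSession({
  rpcUrl: "https://api.shelbynet.shelby.xyz/shelby",
  paymentAmount: 1000000, // ShelbyUSD micro-units
});

try {
  for (const blobName of blobsToDownload) {
    await client.getBlob(blobName, { session });
  }
} finally {
  // Always settle and close the channel, even if a download fails.
  await session.close();
}
```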

## Token Economics

### Two-Token Model

| Token | Purpose |
|-------|---------|
| **APT** | Blockchain gas fees |
| **ShelbyUSD** | Storage and bandwidth payments |

### Funding

```bash
# APT from faucet
aptos account fund-with-faucet --profile my-profile --amount 1000000000

# ShelbyUSD from faucet
# Visit: https://faucet.shelbynet.shelby.xyz
```

## Use Cases

**Ideal Workloads:**
- Video streaming (high bandwidth reads)
- AI training/inference (large datasets)
- Data analytics (read-heavy patterns)
- Content delivery (static assets)
- Archival storage (long-term retention)

**Trade-offs:**
- Optimized for large files (10MB+ chunksets)
- Blobs are immutable (no updates, only overwrite)
- User-pays model (storage + bandwidth)

## Resources

- **Documentation:** https://docs.shelby.cloud
- **Faucet:** https://faucet.shelbynet.shelby.xyz
- **SDK:** `@shelby-protocol/sdk`

Overview

This skill is a Shelby Protocol expert for decentralized blob storage on the Aptos blockchain. It explains erasure coding with Clay Codes, the TypeScript SDK and clients, smart contract interactions, CLI workflows, storage provider topology, and dApp integration patterns. The focus is on building high-performance storage solutions for video streaming, AI training, and large datasets.

How this skill works

The skill walks through the Shelby architecture: RPC servers that handle blob operations, a private fiber network connecting storage provider servers, and Aptos smart contracts for state, audits, and settlements. It explains how data is split into chunksets, encoded with Clay Codes into 16 chunks (10 data + 6 parity), and distributed across storage providers so that any 10 of the 16 chunks can reconstruct the blob. The skill also covers SDK usage patterns, multipart uploads, session reuse, and token funding for APT and ShelbyUSD.

When to use it

  • Building dApps that require decentralized, durable large-object storage
  • Integrating the Shelby TypeScript SDK in Node.js or browser apps
  • Uploading or streaming large media and training datasets (10MB+)
  • Implementing multipart uploads and efficient byte-range reads
  • Running storage provider nodes or troubleshooting RPC interactions

Best practices

  • Prefer multipart uploads for files larger than 10MB to match chunkset boundaries
  • Reuse sessions for multiple reads to reduce micropayment overhead and latency
  • Use byte-range requests for streaming and partial reads to save bandwidth
  • Treat blobs as immutable; overwrite rather than attempting in-place edits
  • Monitor provider health and verify recovery by testing with minimum-chunk reconstruction

Example use cases

  • Live video streaming service that reads large segments via byte-range requests
  • Training pipeline that stores terabytes of model data and retrieves shards efficiently
  • Content delivery network using Shelby for static assets with high-read throughput
  • Archival system storing immutable datasets with long expiration timestamps
  • dApp that charges users in ShelbyUSD for bandwidth and storage with micropayment sessions

FAQ

Why Clay Codes instead of Reed-Solomon?

Clay Codes provide similar storage efficiency to Reed-Solomon but require up to 4x less bandwidth during recovery, making them optimal for large-scale distributed storage.

What are blob naming constraints?

Blob names take the form <account>/<user-defined-path>, may be up to 1024 characters, and must not end with '/'. The namespace is flat: path segments are identifiers, not directories.