
This skill enables browser apps to generate and stream AI text using the CloudBase JS SDK, with Hunyuan or DeepSeek models.

```bash
npx playbooks add skill tencentcloudbase/cloudbase-mcp --skill ai-model-web
```

Review the files below or copy the command above to add this skill to your agents.

---
name: ai-model-web
description: Use this skill when developing browser/Web applications (React/Vue/Angular, static websites, SPAs) that need AI capabilities. Features text generation (generateText) and streaming (streamText) via @cloudbase/js-sdk. Built-in models include Hunyuan (hunyuan-2.0-instruct-20251111 recommended) and DeepSeek (deepseek-v3.2 recommended). NOT for Node.js backend (use ai-model-nodejs), WeChat Mini Program (use ai-model-wechat), or image generation (Node SDK only).
alwaysApply: false
---

## When to use this skill

Use this skill for **calling AI models in browser/Web applications** using `@cloudbase/js-sdk`.

**Use it when you need to:**

- Integrate AI text generation in a frontend Web app
- Stream AI responses for better user experience
- Call Hunyuan or DeepSeek models from the browser

**Do NOT use for:**

- Node.js backend or cloud functions → use `ai-model-nodejs` skill
- WeChat Mini Program → use `ai-model-wechat` skill
- Image generation → use `ai-model-nodejs` skill (Node SDK only)
- HTTP API integration → use `http-api` skill

---

## Available Providers and Models

CloudBase provides these built-in providers and models:

| Provider | Models | Recommended |
|----------|--------|-------------|
| `hunyuan-exp` | `hunyuan-turbos-latest`, `hunyuan-t1-latest`, `hunyuan-2.0-thinking-20251109`, `hunyuan-2.0-instruct-20251111` | ✅ `hunyuan-2.0-instruct-20251111` |
| `deepseek` | `deepseek-r1-0528`, `deepseek-v3-0324`, `deepseek-v3.2` | ✅ `deepseek-v3.2` |

---

## Installation

```bash
npm install @cloudbase/js-sdk
```

## Initialization

```js
import cloudbase from "@cloudbase/js-sdk";

const app = cloudbase.init({
  env: "<YOUR_ENV_ID>",
  accessKey: "<YOUR_PUBLISHABLE_KEY>"  // Get from CloudBase console
});

const auth = app.auth();
await auth.signInAnonymously();

const ai = app.ai();
```

**Important notes:**

- Always initialize synchronously, with a top-level `import` of the SDK
- The user must be authenticated before using AI features
- Get the `accessKey` (publishable key) from the CloudBase console
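
The authentication requirement can be enforced with a small guard before any AI call. A minimal sketch; the `ensureSignedIn` helper is illustrative, and `hasLoginState()` is an assumption about the CloudBase JS SDK auth API (verify the exact method name against your SDK version):

```javascript
// Sign in anonymously only when no login state exists yet.
// `auth` is the object returned by app.auth(); hasLoginState() is assumed
// from the CloudBase JS SDK API -- check your SDK version.
async function ensureSignedIn(auth) {
  if (typeof auth.hasLoginState === "function" && auth.hasLoginState()) {
    return; // already authenticated
  }
  await auth.signInAnonymously();
}
```

Call `await ensureSignedIn(auth);` once at startup, before `app.ai()` is used.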

---

## generateText() - Non-streaming

```js
const model = ai.createModel("hunyuan-exp");

const result = await model.generateText({
  model: "hunyuan-2.0-instruct-20251111",  // Recommended model
  messages: [{ role: "user", content: "Hello, please introduce the poet Li Bai" }],
});

console.log(result.text);           // Generated text string
console.log(result.usage);          // { prompt_tokens, completion_tokens, total_tokens }
console.log(result.messages);       // Full message history
console.log(result.rawResponses);   // Raw model responses
```

---

## streamText() - Streaming

```js
const model = ai.createModel("hunyuan-exp");

const res = await model.streamText({
  model: "hunyuan-2.0-instruct-20251111",  // Recommended model
  messages: [{ role: "user", content: "Hello, please introduce the poet Li Bai" }],
});

// Option 1: Iterate text stream (recommended)
for await (let text of res.textStream) {
  console.log(text);  // Incremental text chunks
}

// Option 2: Iterate data stream for full response data
for await (let data of res.dataStream) {
  console.log(data);  // Full response chunk with metadata
}

// Option 3: Get final results
const messages = await res.messages;  // Full message history
const usage = await res.usage;        // Token usage
```
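
To drive a UI from the stream, the chunks from `textStream` can be accumulated into one growing string. A minimal sketch; `renderStream` and its `onUpdate` callback are illustrative helpers, not part of the SDK, and work with any `AsyncIterable<string>`:

```javascript
// Accumulate incremental text chunks and notify the UI after each one.
// Works with any AsyncIterable<string>, e.g. res.textStream from streamText().
async function renderStream(textStream, onUpdate) {
  let fullText = "";
  for await (const chunk of textStream) {
    fullText += chunk;
    onUpdate(fullText); // e.g. a useState setter in React, or element.textContent = fullText
  }
  return fullText;
}
```

In a React component, `onUpdate` would typically be a state setter; in vanilla JS, a DOM write.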

---

## Type Definitions

```ts
interface BaseChatModelInput {
  model: string;                        // Required: model name
  messages: Array<ChatModelMessage>;    // Required: message array
  temperature?: number;                 // Optional: sampling temperature
  topP?: number;                        // Optional: nucleus sampling
}

type ChatModelMessage =
  | { role: "user"; content: string }
  | { role: "system"; content: string }
  | { role: "assistant"; content: string };

interface GenerateTextResult {
  text: string;                         // Generated text
  messages: Array<ChatModelMessage>;    // Full message history
  usage: Usage;                         // Token usage
  rawResponses: Array<unknown>;         // Raw model responses
  error?: unknown;                      // Error if any
}

interface StreamTextResult {
  textStream: AsyncIterable<string>;    // Incremental text stream
  dataStream: AsyncIterable<DataChunk>; // Full data stream
  messages: Promise<ChatModelMessage[]>; // Final message history
  usage: Promise<Usage>;                // Final token usage
  error?: unknown;                      // Error if any
}

interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}
```

---

## Best Practices

1. **Use streaming for long responses** - Better user experience
2. **Handle errors gracefully** - Wrap AI calls in try/catch
3. **Keep accessKey secure** - Use publishable key, not secret key
4. **Initialize early** - Initialize SDK in app entry point
5. **Ensure authentication** - User must be signed in before AI calls
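
Practice 2 can be made concrete with a small wrapper around `generateText`. A minimal sketch; the `generateSafely` helper is illustrative and assumes only the `generateText` result shape documented above:

```javascript
// Wrap a non-streaming call in try/catch so a network or model error
// surfaces as a fallback string instead of crashing the UI.
async function generateSafely(model, input, fallback = "Sorry, something went wrong.") {
  try {
    const result = await model.generateText(input);
    if (result.error) throw result.error; // errors may also be reported on the result object
    return result.text;
  } catch (err) {
    console.error("AI call failed:", err);
    return fallback;
  }
}
```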

## Overview

This skill integrates CloudBase AI models into browser and web applications (React, Vue, Angular, static sites, SPAs) using `@cloudbase/js-sdk`. It exposes text generation (`generateText`) and streaming (`streamText`) APIs and ships with recommended built-in models such as Hunyuan and DeepSeek. Use it when you want client-side AI text capabilities without running a Node backend. It is not intended for server-side Node.js, WeChat Mini Programs, or image generation.

## How this skill works

Initialize the CloudBase SDK in your app entry point, sign in the user (anonymous sign-in is supported), then call `app.ai().createModel(provider)` to obtain a model instance. `generateText` returns a single completed response with usage and history, while `streamText` returns async iterables for incremental text or full data, plus promises for the final messages and usage. Recommended providers are `hunyuan-exp` (`hunyuan-2.0-instruct-20251111`) and `deepseek` (`deepseek-v3.2`).

## When to use it

- Add text generation directly in the browser for chat UIs, assistants, or content tools.
- Deliver progressive streaming responses to improve perceived latency for long outputs.
- Call Hunyuan or DeepSeek models from frontend code where a publishable client key is acceptable.
- Build demos, prototypes, or production SPAs that call AI models without a server middle layer.
- Get token usage, full message history, and raw model responses in the client.

## Best practices

- Initialize the SDK synchronously at the app entry point and authenticate the user before AI calls.
- Prefer `streamText` for long responses to render incremental output and improve UX.
- Wrap AI calls in try/catch and provide clear retry or fallback behavior for network or model errors.
- Use publishable keys from the CloudBase console; never embed secret keys in client code.
- Choose the recommended models (`hunyuan-2.0-instruct-20251111`, `deepseek-v3.2`) for best quality and compatibility.

## Example use cases

- A chat assistant in a React SPA that streams replies token by token for faster perceived response.
- A content generation tool in a single-page app that uses `generateText` to produce article drafts and shows token usage.
- An in-browser tutor that uses DeepSeek for knowledge retrieval and Hunyuan for instructive responses.
- A demo or sandbox site that authenticates anonymously and showcases different model outputs via the CloudBase client.

## FAQ

**Can I use this skill on a Node.js backend?**

No. This skill targets browser and web clients. For Node.js backends or cloud functions, use the `ai-model-nodejs` skill instead.

**Do I need to authenticate before calling AI features?**

Yes. The user must be signed in (anonymous sign-in is supported) before making AI calls from the client.