
chrome-bridge skill


This skill automates your real Chrome browser via Midscene Bridge, preserving session state and interacting with on-screen elements from screenshots.

This skill is most likely a fork of the `midscene-computer-chrome-bridge` skill from openclaw.
npx playbooks add skill web-infra-dev/midscene-skills --skill chrome-bridge

Review the files below or copy the command above to add this skill to your agents.

Files (1): SKILL.md (9.5 KB)
---
name: chrome-bridge-automation
description: |
  Vision-driven browser automation using Midscene Bridge mode. Operates entirely from screenshots — no DOM or accessibility labels required. Can interact with all visible elements on screen regardless of technology stack.

  This mode connects to the user's desktop Chrome browser via the Midscene Chrome Extension, preserving cookies, sessions, and login state.

  Use this skill when the user wants to:
  - Browse, navigate, or open web pages in the user's own Chrome browser
  - Interact with pages that require login sessions, cookies, or existing browser state
  - Scrape, extract, or collect data from websites using the user's real browser
  - Fill out forms, click buttons, or interact with web elements
  - Verify, validate, or test frontend UI behavior
  - Take screenshots of web pages
  - Automate multi-step web workflows
  - Check website content or appearance

  Powered by Midscene.js (https://midscenejs.com)
allowed-tools:
  - Bash
---

# Chrome Bridge Automation

> **CRITICAL RULES — VIOLATIONS WILL BREAK THE WORKFLOW:**
>
> 1. **Never run midscene commands in the background.** Each command must run synchronously so you can read its output (especially screenshots) before deciding the next action. Background execution breaks the screenshot-analyze-act loop.
> 2. **Run only one midscene command at a time.** Wait for the previous command to finish, read the screenshot, then decide the next action. Never chain multiple commands together.
> 3. **Allow enough time for each command to complete.** Midscene commands involve AI inference and screen interaction, which can take longer than typical shell commands. A typical command needs about 1 minute; complex `act` commands may need even longer.
> 4. **Always report task results before finishing.** After completing the automation task, you MUST proactively summarize the results to the user — including key data found, actions completed, screenshots taken, and any relevant findings. Never silently end after the last automation step; the user expects a complete response in a single interaction.

Automate the user's real Chrome browser via the Midscene Chrome Extension (Bridge mode), preserving cookies, sessions, and login state. You (the AI agent) act as the brain, deciding which actions to take based on screenshots.

## Command Format

**CRITICAL — Every command MUST follow this EXACT format. Do NOT modify the command prefix.**

```
npx @midscene/web@1 --bridge <subcommand> [args]
```

- `--bridge` flag is **MANDATORY** here — it activates Bridge mode to connect to the user's desktop Chrome browser

## Prerequisites

The user has already prepared Chrome and the Midscene Extension. Do NOT check browser or extension status before connecting — just connect directly.

Midscene requires models with strong visual grounding capabilities. The following environment variables must be configured — either as system environment variables or in a `.env` file in the current working directory (Midscene loads `.env` automatically):

```bash
MIDSCENE_MODEL_API_KEY="your-api-key"
MIDSCENE_MODEL_NAME="model-name"
MIDSCENE_MODEL_BASE_URL="https://..."
MIDSCENE_MODEL_FAMILY="family-identifier"
```

Example: Gemini (Gemini-3-Flash)

```bash
MIDSCENE_MODEL_API_KEY="your-google-api-key"
MIDSCENE_MODEL_NAME="gemini-3-flash"
MIDSCENE_MODEL_BASE_URL="https://generativelanguage.googleapis.com/v1beta/openai/"
MIDSCENE_MODEL_FAMILY="gemini"
```

Example: Qwen 3.5

```bash
MIDSCENE_MODEL_API_KEY="your-aliyun-api-key"
MIDSCENE_MODEL_NAME="qwen3.5-plus"
MIDSCENE_MODEL_BASE_URL="https://dashscope.aliyuncs.com/compatible-mode/v1"
MIDSCENE_MODEL_FAMILY="qwen3.5"
MIDSCENE_MODEL_REASONING_ENABLED="false"
# If using OpenRouter, set:
# MIDSCENE_MODEL_API_KEY="your-openrouter-api-key"
# MIDSCENE_MODEL_NAME="qwen/qwen3.5-plus"
# MIDSCENE_MODEL_BASE_URL="https://openrouter.ai/api/v1"
```

Example: Doubao Seed 2.0 Lite

```bash
MIDSCENE_MODEL_API_KEY="your-doubao-api-key"
MIDSCENE_MODEL_NAME="doubao-seed-2-0-lite"
MIDSCENE_MODEL_BASE_URL="https://ark.cn-beijing.volces.com/api/v3"
MIDSCENE_MODEL_FAMILY="doubao-seed"
```

Commonly used models: Doubao Seed 2.0 Lite, Qwen 3.5, Zhipu GLM-4.6V, Gemini-3-Pro, Gemini-3-Flash.

If the model is not configured, ask the user to set it up. See [Model Configuration](https://midscenejs.com/model-common-config) for supported providers.
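Before the first connect, it can be worth verifying that all four variables are actually set. A minimal pre-flight sketch (POSIX shell; the variable names match the examples above, and `check_midscene_config` is a hypothetical helper name, not part of the Midscene CLI):

```shell
# Pre-flight check: confirm the required model variables are set before
# attempting a bridge connection. Prints the missing names, if any.
check_midscene_config() {
  missing=""
  for var in MIDSCENE_MODEL_API_KEY MIDSCENE_MODEL_NAME MIDSCENE_MODEL_BASE_URL MIDSCENE_MODEL_FAMILY; do
    eval "val=\${$var:-}"                     # portable indirect lookup of $var's value
    [ -z "$val" ] && missing="$missing $var"  # collect any unset/empty names
  done
  if [ -n "$missing" ]; then
    echo "Missing:$missing"
    return 1
  fi
  echo "Model configuration looks complete."
}

check_midscene_config || echo "Ask the user to configure the model first."
```

Note this only checks that the variables are non-empty; it does not validate the API key or reach the provider.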

## Commands

### Connect to a Web Page

```bash
npx @midscene/web@1 --bridge connect --url https://example.com
```

### Take Screenshot

```bash
npx @midscene/web@1 --bridge take_screenshot
```

After taking a screenshot, read the saved image file to understand the current page state before deciding the next action.

### Perform Action

Use `act` to interact with the page and get the result. It autonomously handles all UI interactions internally — clicking, typing, scrolling, hovering, waiting, and navigating — so you should give it complex, high-level tasks as a whole rather than breaking them into small steps. Describe **what you want to do and the desired effect** in natural language:

```bash
# specific instructions
npx @midscene/web@1 --bridge act --prompt "click the Login button and fill in the email field with '[email protected]'"
npx @midscene/web@1 --bridge act --prompt "scroll down and click the Submit button"

# or target-driven instructions
npx @midscene/web@1 --bridge act --prompt "click the country dropdown and select Japan"
```

### Disconnect

```bash
npx @midscene/web@1 --bridge disconnect
```

## Workflow Pattern

Bridge mode connects to the user's real Chrome browser. Each CLI command establishes its own temporary connection, but **the browser, tabs, and all state (cookies, login sessions) are always preserved** regardless of whether you disconnect. This makes reconnecting lightweight and lossless.

Follow this pattern:

1. **Connect** to a URL to establish a session
2. **Take a screenshot** to see the current state and confirm the page has loaded
3. **Execute actions** using `act`, describing the desired operation in natural language (specific steps or a target-driven goal)
4. **Report results** — summarize what was accomplished, present key findings and data extracted during the task, and list any generated files (screenshots, logs, etc.) with their paths
5. **Disconnect** only when the user's overall task is fully complete. **Do NOT disconnect** if the user may have follow-up actions — keep the session available for continued interaction in subsequent conversation turns.
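The steps above can be sketched as one ordered command plan. The URL and the `act` prompt below are placeholders; in a real session each command is issued one at a time, and the resulting screenshot is read before deciding the next step:

```shell
# Sketch of a full session as an ordered plan. Printed rather than executed,
# so the sequence can be reviewed without a connected browser; never chain
# these commands or run them in the background in a real session.
MIDSCENE="npx @midscene/web@1 --bridge"

plan="$MIDSCENE connect --url https://example.com
$MIDSCENE take_screenshot
$MIDSCENE act --prompt 'open the account page and read the plan name'
$MIDSCENE take_screenshot
$MIDSCENE disconnect"

printf '%s\n' "$plan"
```

The final `disconnect` belongs in the plan only when no follow-up turns are expected; otherwise leave the session open, as described in step 5.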

## Best Practices

1. **Always connect first**: Navigate to the target URL with `connect --url` before any interaction.
2. **Be specific about UI elements**: Instead of `"the button"`, say `"the blue Submit button in the contact form"`.
3. **Use natural language**: Describe what you see on the page, not CSS selectors. Say `"the red Buy Now button"` instead of `"#buy-btn"`.
4. **Handle loading states**: After navigation or actions that trigger page loads, take a screenshot to verify the page has loaded.
5. **Disconnect only when fully done**: Only disconnect when the user's overall task is completely finished and no follow-up actions are expected. In multi-turn conversations, skip the disconnect to allow continued browser interaction. Disconnecting is safe — it only closes the CLI-side bridge connection, not the browser or tabs — but reconnecting adds unnecessary overhead if the user wants to continue.
6. **Never run in background**: Every midscene command must run synchronously — background execution breaks the screenshot-analyze-act loop.
7. **Batch related operations into a single `act` command**: When performing consecutive operations within the same page, combine them into one `act` prompt instead of splitting them into separate commands. For example, "fill in the email and password fields, then click the Login button" should be a single `act` call, not three. This reduces round-trips, avoids unnecessary screenshot-analyze cycles, and is significantly faster.
8. **Always report results after completion**: After finishing the automation task, you MUST proactively present the results to the user without waiting for them to ask. This includes: (1) the answer to the user's original question or the outcome of the requested task, (2) key data extracted or observed during execution, (3) screenshots and other generated files with their paths, (4) a brief summary of steps taken. Do NOT silently finish after the last automation command — the user expects complete results in a single interaction.

**Example — Dropdown selection:**

```bash
npx @midscene/web@1 --bridge act --prompt "click the country dropdown and select Japan"
npx @midscene/web@1 --bridge take_screenshot
```

**Example — Form interaction:**

```bash
npx @midscene/web@1 --bridge act --prompt "fill in the email field with '[email protected]' and the password field with 'pass123', then click the Log In button"
npx @midscene/web@1 --bridge take_screenshot
```

## Troubleshooting

### Bridge Mode Connection Failures
- Ask user to check if Chrome is open with the Midscene Extension installed and enabled.
- The Midscene Extension can be installed from the Chrome Web Store: https://chromewebstore.google.com/detail/midscenejs/gbldofcpkknbggpkmbdaefngejllnief
- Check that the 'bridge mode' indicator in the extension shows "Listening" status.
- See the [Bridge Mode documentation](https://midscenejs.com/bridge-mode-by-chrome-extension.html).

### Timeouts
- Web pages may take time to load. After connecting, take a screenshot to verify readiness before interacting.
- For slow pages, wait briefly between steps.

### Screenshots Not Displaying
- The screenshot path is an absolute path to a local file. Use the Read tool to view it.

Overview

This skill automates a real Chrome browser using Midscene Bridge mode, operating entirely from screenshots so no DOM access or accessibility labels are required. It preserves the user's cookies, sessions, and login state by connecting to the Midscene Chrome Extension. Use it to drive, inspect, and validate live pages as they appear in the user's desktop Chrome.

How this skill works

The agent connects to the user's desktop Chrome via the Midscene extension and issues single, synchronous CLI commands that take screenshots or perform visual actions. Each command returns an image or result that the agent reads before deciding the next step, maintaining a screenshot-analyze-act loop. Actions are described in natural language and executed visually, allowing clicks, typing, scrolling, and navigation across any technology stack.

When to use it

  • Automating flows that require existing login sessions, cookies, or browser state.
  • Scraping or extracting data using the user's real browser to avoid bot blocks.
  • Filling forms, clicking buttons, and verifying multi-step workflows visually.
  • Testing or validating frontend UI appearance and behavior on the actual browser.
  • Capturing screenshots of pages as rendered in the user's Chrome for audits or reports.

Best practices

  • Always connect first with the bridge connect command to establish the session.
  • Run one midscene command at a time and wait for its output before the next command.
  • Describe UI targets clearly in natural language (e.g., 'the blue Submit button in the contact form').
  • Batch related operations into a single act prompt to reduce round-trips and keep actions atomic.
  • Take a screenshot after navigation or actions that change the page to confirm the result.
  • Disconnect only when the overall task is fully complete; keep the session open if follow-up actions are likely, and report all generated screenshots and logs either way.

Example use cases

  • Log into a web app using the user's existing session and verify a dashboard loads correctly.
  • Fill a multi-field form, submit it, and capture before-and-after screenshots for QA.
  • Scrape product information from a page that blocks headless browsers but allows the real browser.
  • Click through a purchase flow to validate checkout UI and capture any error states.
  • Run visual regression checks by comparing screenshots of pages across states or times.

FAQ

Do I need to install anything before using this skill?

The user must have Chrome open and the Midscene Chrome Extension installed and enabled; model credentials must be set via environment variables.

Can commands run in the background or be chained?

No. Every command must run synchronously and one at a time so the agent can read screenshots before the next action.