
google_search_console-automation skill


This skill automates Google Search Console tasks via Rube MCP, enabling quick tool discovery, execution, and batch operations for performance insights.

npx playbooks add skill composiohq/awesome-claude-skills --skill google_search_console-automation


---
name: google_search_console-automation
description: "Automate Google Search Console tasks via Rube MCP (Composio): search performance, URL inspection, sitemaps, and indexing status. Always search tools first for current schemas."
requires:
  mcp: [rube]
---

# Google Search Console Automation via Rube MCP

Automate Google Search Console operations through Composio's Google Search Console toolkit via Rube MCP.

**Toolkit docs**: [composio.dev/toolkits/google_search_console](https://composio.dev/toolkits/google_search_console)

## Prerequisites

- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
- Active Google Search Console connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `google_search_console`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas

## Setup

**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys are needed; per-toolkit authentication is handled later via `RUBE_MANAGE_CONNECTIONS`.

1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `google_search_console`
3. If connection is not ACTIVE, follow the returned auth link to complete setup
4. Confirm connection status shows ACTIVE before running any workflows

## Tool Discovery

Always discover available tools before executing workflows:

```
RUBE_SEARCH_TOOLS: queries=[{"use_case": "search performance, URL inspection, sitemaps, and indexing status", "known_fields": ""}]
```

This returns:
- Available tool slugs for Google Search Console
- Recommended execution plan steps
- Known pitfalls and edge cases
- Input schemas for each tool

## Core Workflows

### 1. Discover Available Google Search Console Tools

```
RUBE_SEARCH_TOOLS:
  queries:
    - use_case: "list all available Google Search Console tools and capabilities"
```

Review the returned tools, their descriptions, and input schemas before proceeding.

### 2. Execute Google Search Console Operations

After discovering tools, execute them via:

```
RUBE_MULTI_EXECUTE_TOOL:
  tools:
    - tool_slug: "<discovered_tool_slug>"
      arguments: {<schema-compliant arguments>}
  memory: {}
  sync_response_to_workbench: false
```

### 3. Multi-Step Workflows

For complex workflows involving multiple Google Search Console operations:

1. Search for all relevant tools: `RUBE_SEARCH_TOOLS` with specific use case
2. Execute prerequisite steps first (e.g., fetch before update)
3. Pass data between steps using tool responses
4. Use `RUBE_REMOTE_WORKBENCH` for bulk operations or data processing
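
The fetch-before-update pattern can be sketched as follows. Everything here is illustrative: `execute` stands in for a `RUBE_MULTI_EXECUTE_TOOL` call, and the tool slugs and response fields are placeholders to be replaced with whatever `RUBE_SEARCH_TOOLS` actually returns.

```python
# Hypothetical two-step chain: list sitemaps, then re-submit any with errors.
# `execute` stands in for RUBE_MULTI_EXECUTE_TOOL; slugs and fields are placeholders.
def execute(tool_slug, arguments):
    if tool_slug == "<list_sitemaps_slug>":
        return {"sitemaps": [
            {"path": "/sitemap-a.xml", "errors": 0},
            {"path": "/sitemap-b.xml", "errors": 3},
        ]}
    return {"submitted": arguments["path"]}

# Step 1: fetch current state before acting on it.
listing = execute("<list_sitemaps_slug>", {"site_url": "https://example.com"})

# Step 2: pass data from step 1 into the follow-up calls.
resubmitted = [
    execute("<submit_sitemap_slug>", {"path": s["path"]})["submitted"]
    for s in listing["sitemaps"]
    if s["errors"] > 0
]
```

The point of the pattern is step ordering: the listing response drives which follow-up calls are made, so prerequisite fetches always come first.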

## Common Patterns

### Search Before Action
Always search for existing resources before creating new ones to avoid duplicates.

### Pagination
Many list operations support pagination. Check responses for `next_cursor` or `page_token` and continue fetching until exhausted.
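
The cursor loop looks like this in sketch form. `fetch_page` is a stand-in for a paged list call made through `RUBE_MULTI_EXECUTE_TOOL`, and the `items`/`next_cursor` field names are assumptions; use whatever keys the discovered schema actually returns.

```python
# Hypothetical sketch: `fetch_page` simulates a paged list tool call.
# A real implementation would pass the cursor as a tool argument.
def fetch_page(cursor=None):
    pages = {
        None: {"items": ["sitemap-a.xml", "sitemap-b.xml"], "next_cursor": "p2"},
        "p2": {"items": ["sitemap-c.xml"], "next_cursor": None},
    }
    return pages[cursor]

def fetch_all():
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["items"])
        cursor = page.get("next_cursor")
        if not cursor:  # exhausted: no more pages to fetch
            return items
```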

### Error Handling
- Check tool responses for errors before proceeding
- If a tool fails, verify the connection is still ACTIVE
- Re-authenticate via `RUBE_MANAGE_CONNECTIONS` if connection expired

### Batch Operations
For bulk operations, use `RUBE_REMOTE_WORKBENCH` to run `run_composio_tool()` calls in a loop, optionally parallelized with a `ThreadPoolExecutor`.
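
A minimal sketch of that parallel pattern, with `run_composio_tool` stubbed out: the real helper is provided inside the `RUBE_REMOTE_WORKBENCH` environment and its exact signature may differ, and the URL-inspection tool slug shown is a placeholder.

```python
from concurrent.futures import ThreadPoolExecutor

# Stub for the workbench helper; inside RUBE_REMOTE_WORKBENCH the real
# run_composio_tool would actually execute the named Composio tool.
def run_composio_tool(tool_slug, arguments):
    return {"url": arguments["url"], "status": "INDEXED"}

urls = [f"https://example.com/page-{i}" for i in range(5)]

# Fan out one inspection call per URL; cap workers to stay under rate limits.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(
        lambda u: run_composio_tool("<url_inspection_tool_slug>", {"url": u}),
        urls,
    ))
```

Keeping `max_workers` small is deliberate: parallelism speeds up bulk jobs but multiplies request rate, so pair it with the backoff guidance below.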

## Known Pitfalls

- **Always search tools first**: Tool schemas and available operations may change. Never hardcode tool slugs without first discovering them via `RUBE_SEARCH_TOOLS`.
- **Check connection status**: Ensure the Google Search Console connection is ACTIVE before executing any tools. Expired OAuth tokens require re-authentication.
- **Respect rate limits**: If you receive rate limit errors, reduce request frequency and implement backoff.
- **Validate schemas**: Always pass strictly schema-compliant arguments. Use `RUBE_GET_TOOL_SCHEMAS` to load full input schemas when `schemaRef` is returned instead of `input_schema`.
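
One way to implement the backoff mentioned above. The `"RATE_LIMITED"` error check is illustrative; match it to the actual error shape the tools return.

```python
import time

def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry `call` with exponential backoff on rate-limit errors."""
    for attempt in range(max_retries):
        result = call()
        # Assumed error shape: a dict with an "error" key on failure.
        if result.get("error") != "RATE_LIMITED":
            return result
        # Exponential backoff: base_delay, 2x, 4x, ... before the next attempt.
        time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("rate limit persisted after retries")
```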

## Quick Reference

| Operation | Approach |
|-----------|----------|
| Find tools | `RUBE_SEARCH_TOOLS` with Google Search Console-specific use case |
| Connect | `RUBE_MANAGE_CONNECTIONS` with toolkit `google_search_console` |
| Execute | `RUBE_MULTI_EXECUTE_TOOL` with discovered tool slugs |
| Bulk ops | `RUBE_REMOTE_WORKBENCH` with `run_composio_tool()` |
| Full schema | `RUBE_GET_TOOL_SCHEMAS` for tools with `schemaRef` |

> **Toolkit docs**: [composio.dev/toolkits/google_search_console](https://composio.dev/toolkits/google_search_console)

## Overview

This skill automates Google Search Console tasks through Rube MCP using Composio's `google_search_console` toolkit. It streamlines search performance queries, URL inspection, sitemaps, and indexing checks while enforcing discovery of current tool schemas before execution. The workflow focuses on safe, schema-compliant operations and reliable connection management.

## How this skill works

First call `RUBE_SEARCH_TOOLS` to discover available Google Search Console tools, tool slugs, execution plans, and input schemas. Use `RUBE_MANAGE_CONNECTIONS` to ensure an ACTIVE GSC connection, then run operations with `RUBE_MULTI_EXECUTE_TOOL` or bulk jobs via `RUBE_REMOTE_WORKBENCH`. Always validate responses, handle pagination, and pass data between steps for multi-step workflows.

## When to use it

- Automating periodic search performance reports and CSV exports
- Inspecting URL indexing status at scale for SEO audits
- Submitting or checking sitemaps and monitoring sitemap processing
- Running multi-step workflows that require fetching then updating GSC data
- Bulk indexing or revalidation tasks using parallel execution

## Best practices

- Always call `RUBE_SEARCH_TOOLS` first to get current tool slugs and input schemas
- Verify the `google_search_console` connection is ACTIVE via `RUBE_MANAGE_CONNECTIONS` before running tools
- Respect pagination: loop on `next_cursor` or `page_token` until exhausted
- Validate all arguments against the returned schema; use `RUBE_GET_TOOL_SCHEMAS` when `schemaRef` is returned
- Implement error checks and a re-auth flow: re-run `RUBE_MANAGE_CONNECTIONS` if tokens expire
- Use `RUBE_REMOTE_WORKBENCH` and thread pools for large batch operations, with backoff on rate limits

## Example use cases

- Daily automated pull of search performance metrics for dashboard ingestion
- Mass URL inspection to identify indexing issues after a site migration
- Automated sitemap submission and follow-up checks to confirm processing
- Chained workflow: list affected pages, fetch details, then request re-indexing
- Parallelized bulk validation of canonical and index status across thousands of URLs

## FAQ

**What do I do first when using this skill?**

Always call `RUBE_SEARCH_TOOLS` to discover current tool capabilities and input schemas before executing any operations.

**How do I handle expired connections?**

Run `RUBE_MANAGE_CONNECTIONS` for the `google_search_console` toolkit and follow the returned auth link to re-authenticate until the connection shows ACTIVE.

**Can I run bulk operations safely?**

Yes: use `RUBE_REMOTE_WORKBENCH` with `run_composio_tool()` loops and `ThreadPoolExecutor`, implement backoff, and monitor rate-limit responses.