
bigquery-table-creator skill

/skills/14-gcp-skills/bigquery-table-creator

This skill automates bigquery table creator tasks by generating production-ready configurations and best-practice guidance for GCP projects.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill bigquery-table-creator

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.1 KB
---
name: "bigquery-table-creator"
description: |
  Create and configure BigQuery tables. Auto-activating skill for GCP Skills.
  Triggers on: bigquery table creator
  Part of the GCP Skills category. Use when working with BigQuery table creation. Trigger with phrases like "bigquery table creator", "bigquery creator", "bigquery".
allowed-tools: "Read, Write, Edit, Bash(gcloud:*)"
version: 1.0.0
license: MIT
author: "Jeremy Longshore <[email protected]>"
---

# BigQuery Table Creator

## Overview

This skill provides automated assistance for creating and configuring BigQuery tables within the GCP Skills domain.

## When to Use

This skill activates automatically when you:
- Mention "bigquery table creator" in your request
- Ask about BigQuery table creation patterns or best practices
- Need help with Google Cloud Platform tasks covering Compute Engine, Cloud Storage, BigQuery, Vertex AI, and other GCP services

## Instructions

1. Provides step-by-step guidance for BigQuery table creation
2. Follows industry best practices and patterns
3. Generates production-ready code and configurations
4. Validates outputs against common standards

## Examples

**Example: Basic Usage**
Request: "Help me with bigquery table creator"
Result: Provides step-by-step guidance and generates appropriate configurations


## Prerequisites

- Relevant development environment configured
- Access to necessary tools and services
- Basic understanding of GCP concepts


## Output

- Generated configurations and code
- Best practice recommendations
- Validation results


## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |


## Resources

- Official documentation for related tools
- Best practices guides
- Community examples and tutorials

## Related Skills

Part of the **GCP Skills** skill category.
Tags: gcp, bigquery, vertex-ai, cloud-run, firebase

Overview

This skill automates creation and configuration of BigQuery tables within GCP-focused workflows. It generates production-ready table DDL, schema definitions, partitioning/clustering strategies, and deployment-ready code snippets. Use it to standardize table creation, enforce patterns, and accelerate data platform tasks.

How this skill works

The skill inspects user intent and schema requirements, then produces step-by-step instructions, SQL DDL, and infrastructure snippets (Terraform or Python client code) for table creation. It validates common issues such as missing required fields, incompatible data types, and recommended partitioning or clustering choices. Outputs include sample queries, IAM considerations, and checks for permissions and dependencies.
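As a sketch of the DDL-generation step, a small helper can assemble a `CREATE TABLE` statement from a field list. The helper name `build_create_table` and its argument shapes are illustrative assumptions, not the skill's actual implementation:

```python
# Sketch: assemble BigQuery CREATE TABLE DDL with optional partitioning
# and clustering. build_create_table is a hypothetical helper for
# illustration; the skill's real output may differ.

def build_create_table(dataset, table, fields,
                       partition_field=None, cluster_fields=None):
    """Build a CREATE TABLE DDL string.

    fields: list of (name, type) tuples, e.g. [("user_id", "STRING")]
    """
    cols = ",\n  ".join(f"{name} {ftype}" for name, ftype in fields)
    ddl = f"CREATE TABLE `{dataset}.{table}` (\n  {cols}\n)"
    if partition_field:
        ddl += f"\nPARTITION BY {partition_field}"
    if cluster_fields:
        ddl += "\nCLUSTER BY " + ", ".join(cluster_fields)
    return ddl + ";"

ddl = build_create_table(
    "analytics", "customer_events",
    [("user_id", "STRING"), ("event_name", "STRING"), ("event_date", "DATE")],
    partition_field="event_date",
    cluster_fields=["user_id"],
)
print(ddl)
```

The same field list could feed the Terraform or Python client snippets, keeping one source of truth for the schema.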

When to use it

  • You need a new BigQuery table schema and DDL generated quickly.
  • You want consistent partitioning, clustering, and cost-optimized table designs.
  • You need runnable examples in Python, the bq command-line tool, or Terraform for CI/CD deployment.
  • You want validation of schema, types, and access controls before provisioning.
  • You’re building templates for data engineering or analytics onboarding.

Best practices

  • Avoid unnecessary nested RECORD fields; flatter schemas are simpler to query and maintain.
  • Use partitioning (ingestion-time or a date/timestamp column) and clustering on columns commonly used in filters to reduce scan costs.
  • Prefer explicit column types; avoid storing structured data in overly permissive STRING columns.
  • Include table expiration or data retention policies to control storage costs.
  • Apply least-privilege IAM roles for service accounts that create or load table data.
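Taken together, these practices might look like the following DDL for a date-partitioned, clustered table with a retention policy. Project, table, and column names and the 90-day expiration are example values, not prescribed defaults:

```sql
-- Illustrative only: names and the 90-day expiration are example values.
CREATE TABLE `my_project.analytics.customer_events` (
  user_id    STRING    NOT NULL,
  event_name STRING    NOT NULL,
  payload    JSON,
  event_ts   TIMESTAMP NOT NULL
)
PARTITION BY DATE(event_ts)
CLUSTER BY user_id
OPTIONS (
  partition_expiration_days = 90,
  description = "Customer event stream, partitioned by event date"
);
```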

Example use cases

  • Generate DDL for a customer events table with date partitioning and clustering on user_id.
  • Produce Terraform and Python examples to create audit tables with retention policies and access controls.
  • Validate an existing schema for compatibility with streaming inserts and recommend optimizations.
  • Create CI job snippets that run schema validation and deploy BigQuery tables as part of a data platform pipeline.
  • Provide migration steps for converting legacy datasets to partitioned, cost-optimized tables.

FAQ

What inputs does the skill need to generate a table?

Provide table name, dataset, field names and types, desired partitioning/clustering, and any retention or labeling requirements.
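A minimal input could be expressed as JSON along these lines; the shape and field names here are illustrative, not a fixed schema the skill requires:

```json
{
  "dataset": "analytics",
  "table": "customer_events",
  "schema": [
    {"name": "user_id", "type": "STRING", "mode": "REQUIRED"},
    {"name": "event_ts", "type": "TIMESTAMP", "mode": "REQUIRED"}
  ],
  "partition_field": "event_ts",
  "clustering": ["user_id"],
  "partition_expiration_days": 90,
  "labels": {"team": "data-platform"}
}
```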

Can it produce Terraform or code examples?

Yes. It generates Terraform, Python (google-cloud-bigquery), and bq command-line snippets tailored to the requested deployment pattern.
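A Terraform sketch of such output might look like the following. Dataset, table, and field names are placeholders, and it assumes the Google provider and a matching `google_bigquery_dataset` already exist:

```hcl
# Illustrative sketch; dataset/table names are placeholders.
resource "google_bigquery_table" "customer_events" {
  dataset_id = "analytics"
  table_id   = "customer_events"

  time_partitioning {
    type          = "DAY"
    field         = "event_ts"
    expiration_ms = 7776000000 # 90 days
  }

  clustering = ["user_id"]

  schema = jsonencode([
    { name = "user_id",  type = "STRING",    mode = "REQUIRED" },
    { name = "event_ts", type = "TIMESTAMP", mode = "REQUIRED" },
  ])
}
```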

How does it handle permissions errors?

It flags common permission gaps and suggests required IAM roles and service account bindings to enable table creation.