
13_env_bash_mastery skill


This skill analyzes host hardware to generate optimized, multi-core scripts that maximize performance across CPU, memory, and GPU, tailored to your environment.

npx playbooks add skill sounder25/google-antigravity-skills-library --skill 13_env_bash_mastery

Review the files below or copy the command above to add this skill to your agents.

Files (5): SKILL.md (2.1 KB)
---
name: Environment-Specific Bash Mastery
description: Generate optimized scripts that utilize the specific hardware (cores, GPU, OS) of the execution environment.
version: 1.0.0
author: Antigravity Skills Library
created: 2026-01-16
leverage_score: 4/5
---

# SKILL-013: Environment-Specific Bash Mastery

## Overview

Executes "Hardware-Aware Execution" by profiling the host machine (CPU cores, RAM availability, OS Version, GPU presence) to generate execution scripts that maximize performance. No more single-threaded scripts on a 12-core beast.

## Trigger Phrases

- `optimize script execution`
- `get system profile`
- `run with max performance`
- `generate hardware aware script`

## Inputs

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `--output-dir` | string | No | `.env` | Directory to save profile |

## Outputs

### 1. SYSTEM_PROFILE.json

Detailed hardware specs:
```json
{
  "os": "Microsoft Windows 10.0.22631",
  "cpu": {
    "name": "AMD Ryzen 9 7900X 12-Core Processor",
    "cores": 12,
    "logical_processors": 24
  },
  "memory": {
    "total_gb": 64,
    "free_gb": 32
  },
  "flags": {
    "has_cuda": true,
    "has_avx2": true,
    "is_wsl": false
  }
}
```

### 2. OPTIMIZED_FLAGS.env

Environment variables ready to be sourced:
```bash
export MAKE_JOBS=24
export OMP_NUM_THREADS=24
export NODE_OPTIONS="--max-old-space-size=60000"
export PYTHON_MULTIPROCESSING=1
```
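A typical consumer sources this file before launching a build. The sketch below recreates a sample `OPTIMIZED_FLAGS.env` with the values shown above (so it is self-contained; on a real host the skill writes the file for you):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Recreate a sample OPTIMIZED_FLAGS.env with the values shown above,
# then source it so a build picks up the tuned parallelism.
mkdir -p .env
cat > .env/OPTIMIZED_FLAGS.env <<'EOF'
export MAKE_JOBS=24
export OMP_NUM_THREADS=24
EOF

source .env/OPTIMIZED_FLAGS.env
echo "building with -j${MAKE_JOBS}"
# A real build would now run: make -j "$MAKE_JOBS"
```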

## Preconditions

1. PowerShell access to WMI/CIM or system commands.

## Implementation

### Script: detect_env.ps1

1. **Queries WMI** (Win32_Processor, Win32_OperatingSystem) to get raw stats.
2. **Detects Capabilities:** Checks for `nvidia-smi` to infer CUDA.
3. **Calculates Optimal Settings:**
   - Compile Jobs (`-j`): Logical Cores.
   - Node Memory: 90% of Total RAM.
4. **Outputs JSON & ENV profiles.**
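The actual script is PowerShell, but the same four steps can be sketched in portable bash for Linux/macOS hosts (a fallback sketch using `nproc`, `/proc/meminfo`, and `nvidia-smi`, not the skill's own implementation):

```bash
#!/usr/bin/env bash
set -euo pipefail

# 1. Raw stats: logical core count and total memory.
CORES=$(nproc 2>/dev/null || getconf _NPROCESSORS_ONLN)
MEM_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo 2>/dev/null || echo 0)

# 2. Capabilities: infer CUDA from the presence of nvidia-smi.
if command -v nvidia-smi >/dev/null 2>&1; then HAS_CUDA=true; else HAS_CUDA=false; fi

# 3. Optimal settings: -j = logical cores; Node heap = ~90% of RAM (in MB).
MAKE_JOBS=$CORES
NODE_HEAP_MB=$(( MEM_KB * 9 / 10 / 1024 ))

# 4. Emit the env profile.
printf 'export MAKE_JOBS=%s\nexport OMP_NUM_THREADS=%s\n' "$MAKE_JOBS" "$CORES"
printf 'export NODE_OPTIONS="--max-old-space-size=%s"\n' "$NODE_HEAP_MB"
printf 'export HAS_CUDA=%s\n' "$HAS_CUDA"
```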

## Use Cases

1. **High Performance:** Writing a Python script that uses `multiprocessing` to crunch data, utilizing all 12 cores of "The Beast".
2. **System Hardening:** Generating backup scripts that know exactly which drive is the encrypted local backup based on volume labels.

Overview

This skill generates hardware-aware execution scripts and environment profiles that maximize performance on the host machine. It profiles CPU cores, memory, OS, and GPU availability, then emits a JSON system profile and an env file with tuned flags. The result is ready-to-source settings and optimized command-line options for build, Python, Node, and parallel workloads.

How this skill works

The tool queries local system interfaces (WMI/CIM on Windows, fallback to common CLI tools) to collect OS, CPU, memory, and GPU info. It detects capabilities like CUDA or AVX2 and computes optimal runtime parameters (e.g., MAKE_JOBS, OMP_NUM_THREADS, Node heap size). Outputs include SYSTEM_PROFILE.json and OPTIMIZED_FLAGS.env so scripts can automatically adapt to the machine.
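As a sketch of that adaptation, a launch script can read a field from SYSTEM_PROFILE.json to size a worker pool (here using python3's json module to avoid a jq dependency; the inlined file mirrors the example profile above):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Sample profile matching the SYSTEM_PROFILE.json example above.
mkdir -p .env
cat > .env/SYSTEM_PROFILE.json <<'EOF'
{ "cpu": { "name": "AMD Ryzen 9 7900X", "cores": 12, "logical_processors": 24 } }
EOF

# Pick a worker count from the detected logical processor count.
WORKERS=$(python3 -c "import json; print(json.load(open('.env/SYSTEM_PROFILE.json'))['cpu']['logical_processors'])")
echo "launching with $WORKERS workers"
```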

When to use it

  • Before running CPU- or memory-bound builds and tests to set parallel job counts
  • When launching data processing or multiprocessing Python jobs to avoid oversubscription
  • In CI agents to automatically adapt workloads to the runner size
  • When preparing Docker hosts or VM images to bake tuned defaults
  • To generate reproducible environment files for performance-sensitive deployments

Best practices

  • Run the detection step on the actual target host, not in a CI shim or a container that lacks host visibility
  • Source the generated OPTIMIZED_FLAGS.env at process start to apply tuned variables
  • Cap parallelism where other services run concurrently (reserve 1–2 cores on shared hosts)
  • Re-run profiling after major hardware or OS updates to refresh settings
  • Validate GPU detection by checking vendor tools (e.g., nvidia-smi) before enabling CUDA-specific flags
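The core-reservation advice can be made concrete in a few lines of bash (`RESERVED=2` is an illustrative choice for a shared host, not a value the skill mandates):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Reserve two logical cores for co-located services on a shared host,
# but never drop below one build job.
TOTAL=$(nproc)
RESERVED=2
export MAKE_JOBS=$(( TOTAL > RESERVED ? TOTAL - RESERVED : 1 ))
echo "MAKE_JOBS=$MAKE_JOBS (of $TOTAL logical cores)"
```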

Example use cases

  • Generate a SYSTEM_PROFILE.json and set MAKE_JOBS for compiling a large C++ project on a 12-core workstation
  • Produce environment variables that set Python multiprocessing counts and Node heap size for a memory-heavy ETL job
  • Tune build agents in CI to use the detected logical processor count rather than a hard-coded value
  • Create backup scripts that incorporate drive layout and volume labels discovered during profiling
  • Detect CUDA presence and adjust ML training launch scripts to use GPU vs CPU execution paths

FAQ

Which platforms does this detection support?

Primarily Windows via PowerShell WMI/CIM queries; it falls back to common CLI checks and can be extended for Linux/macOS environments.

Will it always use all cores?

By default it suggests the logical processor count, but recommends reserving cores on shared systems; any computed value can be overridden.

How does GPU detection work?

It checks for vendor tools like nvidia-smi and common GPU drivers to infer CUDA availability and sets flags accordingly.
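A minimal bash version of that check (the HAS_CUDA variable name follows the `has_cuda` flag in the profile example earlier on this page):

```bash
# Infer CUDA availability from the presence of a working nvidia-smi,
# mirroring how the detection step sets the has_cuda flag.
if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
  HAS_CUDA=true
else
  HAS_CUDA=false
fi
echo "has_cuda=$HAS_CUDA"
```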