This skill analyzes host hardware to generate optimized, multi-core scripts that maximize performance across CPU, memory, and GPU, tailored to your environment.
Install with:

```bash
npx playbooks add skill sounder25/google-antigravity-skills-library --skill 13_env_bash_mastery
```
---
name: Environment-Specific Bash Mastery
description: Generate optimized scripts that utilize the specific hardware (cores, GPU, OS) of the execution environment.
version: 1.0.0
author: Antigravity Skills Library
created: 2026-01-16
leverage_score: 4/5
---
# SKILL-013: Environment-Specific Bash Mastery
## Overview
Implements hardware-aware execution: profiles the host machine (CPU cores, available RAM, OS version, GPU presence) and generates execution scripts that maximize performance. No more single-threaded scripts on a 12-core beast.
## Trigger Phrases
- `optimize script execution`
- `get system profile`
- `run with max performance`
- `generate hardware aware script`
## Inputs
| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `--output-dir` | string | No | `.env` | Directory to save profile |
## Outputs
### 1. SYSTEM_PROFILE.json
Detailed hardware specs:
```json
{
"os": "Microsoft Windows 10.0.22631",
"cpu": {
"name": "AMD Ryzen 9 7900X 12-Core Processor",
"cores": 12,
"logical_processors": 24
},
"memory": {
"total_gb": 64,
"free_gb": 32
},
"flags": {
"has_cuda": true,
"has_avx2": true,
"is_wsl": false
}
}
```
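A downstream script can read the profile to size its own workload. The sketch below writes a trimmed sample profile and extracts `logical_processors` with `python3` for JSON parsing (swap in `jq` if you prefer); the consumer script itself is hypothetical, not part of the skill:

```bash
# Hypothetical consumer of SYSTEM_PROFILE.json: size a build by logical cores.
cat > SYSTEM_PROFILE.json <<'EOF'
{ "cpu": { "cores": 12, "logical_processors": 24 } }
EOF

# Extract the count with python3 (stdlib json; jq would also work).
jobs=$(python3 -c 'import json; print(json.load(open("SYSTEM_PROFILE.json"))["cpu"]["logical_processors"])')
echo "make -j${jobs}"
```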
### 2. OPTIMIZED_FLAGS.env
Environment variables ready to be sourced:
```bash
export MAKE_JOBS=24
export OMP_NUM_THREADS=24
export NODE_OPTIONS="--max-old-space-size=60000"
export PYTHON_MULTIPROCESSING=1
```
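A minimal usage sketch: source the generated flags before launching parallel work (`set -a` exports every variable defined while sourcing). The sample file contents here mirror the output above; the build commands are placeholders:

```bash
# Write a sample OPTIMIZED_FLAGS.env (normally generated by the skill).
mkdir -p .env
cat > .env/OPTIMIZED_FLAGS.env <<'EOF'
MAKE_JOBS=24
OMP_NUM_THREADS=24
EOF

set -a                         # auto-export everything defined while sourcing
. .env/OPTIMIZED_FLAGS.env
set +a

echo "building with ${MAKE_JOBS:-1} jobs"   # e.g. make -j"$MAKE_JOBS"
```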
## Preconditions
1. PowerShell access to WMI/CIM or system commands.
## Implementation
### Script: detect_env.ps1
1. **Queries WMI** (Win32_Processor, Win32_OperatingSystem) to get raw stats.
2. **Detects Capabilities:** Checks for `nvidia-smi` to infer CUDA.
3. **Calculates Optimal Settings:**
- Compile Jobs (`-j`): Logical Cores.
- Node Memory: 90% of Total RAM.
4. **Outputs JSON & ENV profiles.**
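The script above is Windows/PowerShell-specific. On Linux/macOS hosts, steps 1–3 could be approximated in bash as below; this is a sketch, not part of the shipped skill, and `detect_logical_cores`, `has_cuda`, and `node_heap_mb` are hypothetical helper names:

```bash
#!/usr/bin/env bash
# Sketch of a Unix analogue of detect_env.ps1 (hypothetical helpers).

detect_logical_cores() {            # step 1: raw stats (Linux, then macOS)
  nproc 2>/dev/null || sysctl -n hw.logicalcpu 2>/dev/null || echo 1
}

has_cuda() {                        # step 2: infer CUDA from nvidia-smi
  command -v nvidia-smi >/dev/null 2>&1 && echo true || echo false
}

node_heap_mb() {                    # step 3: Node heap = 90% of total RAM (MB)
  echo $(( $1 * 90 / 100 ))
}

# Step 4: emit the ENV profile.
cores=$(detect_logical_cores)
printf 'export MAKE_JOBS=%s\nexport OMP_NUM_THREADS=%s\n' "$cores" "$cores"
printf 'export NODE_OPTIONS="--max-old-space-size=%s"\n' "$(node_heap_mb 65536)"
```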
## Use Cases
1. **High Performance:** Writing a Python script that uses `multiprocessing` to crunch data, utilizing all 12 cores of "The Beast".
2. **System Hardening:** Generating backup scripts that know exactly which drive is the encrypted local backup based on volume labels.
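Use case 2 can be sketched as a label lookup over block-device output. The function below is hypothetical and, on a real Linux host, would be fed by something like `lsblk -nro LABEL,MOUNTPOINT`; here a captured sample keeps the example self-contained:

```bash
# Hypothetical: resolve the backup mount point by volume label.
find_backup_mount() {
  local label=$1
  awk -v l="$label" '$1 == l { print $2 }'   # match label column, print mount
}

# Sample "LABEL MOUNTPOINT" lines as lsblk -nro LABEL,MOUNTPOINT might print.
sample=$'data /mnt/data\nBACKUP_ENC /mnt/backup'
printf '%s\n' "$sample" | find_backup_mount BACKUP_ENC
```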
## FAQ

**Which platforms does this detection support?**
Primarily Windows via PowerShell WMI/CIM queries; it falls back to common CLI checks and can be extended for Linux/macOS environments.

**Will it always use all cores?**
By default it suggests using the logical processor count but recommends reserving cores on shared systems; you can override computed values.

**How does GPU detection work?**
It checks for vendor tools like `nvidia-smi` and common GPU drivers to infer CUDA availability and sets flags accordingly.