
optuna-study-creator skill

/skills/07-ml-training/optuna-study-creator

This skill automates Optuna study creation workflows by generating production-ready code, configurations, and best-practice guidance for ML training.

npx playbooks add skill jeremylongshore/claude-code-plugins-plus-skills --skill optuna-study-creator

Review the files below or copy the command above to add this skill to your agents.

Files (1)
SKILL.md
2.1 KB
---
name: "optuna-study-creator"
description: |
  Create and configure Optuna studies for hyperparameter tuning. Auto-activating skill for ML Training.
  Part of the ML Training skill category. Use when working with Optuna study creation functionality. Trigger with phrases like "optuna study creator", "optuna creator", "optuna".
allowed-tools: "Read, Write, Edit, Bash(python:*), Bash(pip:*)"
version: 1.0.0
license: MIT
author: "Jeremy Longshore <[email protected]>"
---

# Optuna Study Creator

## Overview

This skill provides automated assistance for Optuna study creation tasks within the ML Training domain.

## When to Use

This skill activates automatically when you:
- Mention "optuna study creator" in your request
- Ask about optuna study creator patterns or best practices
- Need help with ML training tasks such as data preparation, model training, hyperparameter tuning, or experiment tracking

## Instructions

1. Provide step-by-step guidance for Optuna study creation
2. Follow industry best practices and patterns
3. Generate production-ready code and configurations
4. Validate outputs against common standards

## Examples

**Example: Basic Usage**
Request: "Help me with optuna study creator"
Result: Provides step-by-step guidance and generates appropriate configurations


## Prerequisites

- Relevant development environment configured
- Access to necessary tools and services
- Basic understanding of ML training concepts


## Output

- Generated configurations and code
- Best practice recommendations
- Validation results


## Error Handling

| Error | Cause | Solution |
|-------|-------|----------|
| Configuration invalid | Missing required fields | Check documentation for required parameters |
| Tool not found | Dependency not installed | Install required tools per prerequisites |
| Permission denied | Insufficient access | Verify credentials and permissions |


## Resources

- Official documentation for related tools
- Best practices guides
- Community examples and tutorials

## Related Skills

Part of the **ML Training** skill category.
Tags: ml, training, pytorch, tensorflow, sklearn

Overview

This skill automates the creation of Optuna studies for ML training workflows. It provides step-by-step instructions, production-ready code snippets, and validation checks to set up, run, and manage Optuna hyperparameter studies. Use it to simplify experiment setup, enforce best practices, and generate reproducible tuning pipelines.
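
A minimal sketch of what study creation looks like in practice; the toy quadratic objective stands in for a real training loop:

```python
import optuna


def objective(trial: optuna.Trial) -> float:
    # Suggest a hyperparameter from the search space.
    x = trial.suggest_float("x", -10.0, 10.0)
    # Return the value to minimize (a toy quadratic stands in
    # for a real validation metric).
    return (x - 2.0) ** 2


# Create the study and run 50 trials.
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```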

How this skill works

The skill inspects requests that reference Optuna study creation and responds with tailored guidance: study configuration, sampler and pruner selection, objective definition, and experiment tracking integration. It generates runnable Python code, suggested configuration files, and validation checks for common misconfigurations. It also highlights platform-specific considerations like storage backends and parallel execution.
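
For instance, sampler and pruner selection might come out as the following sketch; the seed and pruner thresholds are illustrative, not prescriptions:

```python
import optuna
from optuna.pruners import MedianPruner
from optuna.samplers import TPESampler

# Seeded TPE sampler for reproducible suggestions; the median pruner
# stops trials that fall below the running median after a warm-up.
sampler = TPESampler(seed=42)
pruner = MedianPruner(n_startup_trials=5, n_warmup_steps=10)

study = optuna.create_study(
    direction="maximize",
    sampler=sampler,
    pruner=pruner,
)
```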

When to use it

  • You need to create or configure an Optuna study for hyperparameter tuning
  • You want production-ready code samples for Optuna study setup and objective functions
  • You need recommendations on samplers, pruners, or storage backends
  • You want to integrate Optuna with experiment tracking (e.g., MLflow; see the sketch after this list)
  • You need validation of study configuration or error troubleshooting
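
For the MLflow case, a hedged sketch using Optuna's MLflowCallback (shipped separately as the optuna-integration package in recent releases); the tracking URI, metric name, and placeholder return value are assumptions:

```python
import optuna
from optuna.integration import MLflowCallback

# Point tracking_uri at your MLflow server; localhost is a placeholder.
mlflc = MLflowCallback(
    tracking_uri="http://localhost:5000",
    metric_name="validation_accuracy",
)


def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    # ... train and evaluate a model with `lr` here ...
    return 0.9  # placeholder for the real validation metric


study = optuna.create_study(direction="maximize")
# The callback logs each trial's parameters and metric to MLflow.
study.optimize(objective, n_trials=20, callbacks=[mlflc])
```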

Best practices

  • Define a clear, reproducible objective function with fixed random seeds and controlled data splits (see the sketch after this list)
  • Start with a robust sampler (TPE) and add pruning for long-running trials
  • Persist study state to a durable storage backend (RDB) for parallel or long-lived studies
  • Log parameters and metrics to experiment tracking for traceability
  • Limit search space complexity initially, then expand based on results
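
A sketch combining several of these practices: fixed seeds, a seeded sampler, pruning, and durable storage. SQLite, the study name, and the file path are illustrative; swap in PostgreSQL or MySQL for parallel workers:

```python
import random

import numpy as np
import optuna

SEED = 42
random.seed(SEED)
np.random.seed(SEED)

# Persist study state so it survives restarts and can be resumed.
study = optuna.create_study(
    study_name="tuning-example",          # hypothetical name
    storage="sqlite:///optuna_study.db",  # durable local storage
    sampler=optuna.samplers.TPESampler(seed=SEED),
    pruner=optuna.pruners.MedianPruner(),
    direction="minimize",
    load_if_exists=True,
)
```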

Example use cases

  • Generate a Python script that creates an Optuna study with TPE sampler and MedianPruner
  • Create a study configuration that persists to a PostgreSQL backend for distributed trials
  • Produce an objective function scaffold for training a PyTorch model with configurable hyperparameters (see the sketch after this list)
  • Diagnose and fix a failing study configuration, such as errors about missing storage or permissions
  • Recommend sampler/pruner choices and search-space ranges for common model types
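
As an illustration of the PyTorch scaffold use case, a compact sketch with random stand-in data; the architecture, epoch budget, and search ranges are assumptions to replace with your own:

```python
import optuna
import torch
import torch.nn as nn

# Random stand-in data; replace with your own dataset and loaders.
X = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))


def objective(trial: optuna.Trial) -> float:
    # Configurable hyperparameters.
    hidden = trial.suggest_int("hidden_units", 16, 128)
    lr = trial.suggest_float("lr", 1e-4, 1e-1, log=True)

    model = nn.Sequential(
        nn.Linear(20, hidden),
        nn.ReLU(),
        nn.Linear(hidden, 2),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()

        # Report intermediate loss so the pruner can stop weak trials early.
        trial.report(loss.item(), epoch)
        if trial.should_prune():
            raise optuna.TrialPruned()

    return loss.item()


study = optuna.create_study(
    direction="minimize",
    pruner=optuna.pruners.MedianPruner(),
)
study.optimize(objective, n_trials=25)
```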

FAQ

What storage backend should I use for parallel studies?

Use a relational database (e.g., PostgreSQL or MySQL) as the RDB storage backend to safely share study state across processes and machines.
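
A sketch of attaching multiple workers to one shared study; the connection URL and study name are placeholders, and a PostgreSQL driver such as psycopg2 is assumed to be installed:

```python
import optuna

# Placeholder connection URL; requires a PostgreSQL driver
# (e.g., pip install psycopg2-binary).
storage = "postgresql://user:password@db-host:5432/optuna"

# load_if_exists lets every worker attach to the same study
# instead of failing when the name already exists.
study = optuna.create_study(
    study_name="distributed-tuning",  # hypothetical name
    storage=storage,
    load_if_exists=True,
)
```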

How do I reduce overall tuning time?

Combine an efficient sampler (TPE), aggressive pruning for unpromising trials, and start with a narrower search space. Use parallel workers with a durable RDB backend to scale.
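
A sketch of those levers together; the narrow range, trial budget, and storage URL are illustrative:

```python
import optuna


def objective(trial: optuna.Trial) -> float:
    # Keep the search space narrow at first, then widen it.
    x = trial.suggest_float("x", -5.0, 5.0)
    return x ** 2


study = optuna.create_study(
    study_name="fast-tuning",             # hypothetical name
    storage="sqlite:///optuna_study.db",  # prefer PostgreSQL/MySQL for many workers
    sampler=optuna.samplers.TPESampler(),
    pruner=optuna.pruners.MedianPruner(),
    load_if_exists=True,
)
# n_jobs runs trials in parallel threads within this process; for
# multi-machine scaling, launch this script on several workers
# against the same RDB storage instead.
study.optimize(objective, n_trials=100, n_jobs=4)
```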