
This skill helps automate ML workflows by providing guidance and templates for task runners, containerization, CI/CD, and experiment tracking across projects.

npx playbooks add skill openclaw/skills --skill mlops-automation-cn


SKILL.md
---
name: mlops-automation-cn
version: 1.0.0
description: Task automation, containerization, CI/CD, and experiment tracking
license: MIT
---

# MLOps Automation 🤖

Automate tasks, containers, CI/CD, and ML experiments.

## Features

### 1. Task Runner (just) ⚡

Copy justfile:

```bash
cp references/justfile ../your-project/
```

Tasks:
- `just check` - Run all checks
- `just test` - Run tests
- `just build` - Build package
- `just clean` - Remove artifacts
- `just train` - Run training
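
The justfile itself lives in `references/`; as a rough sketch, assuming a uv-managed project with Ruff, MyPy, and pytest (the actual template may differ), it might look like:

```just
# Hypothetical sketch -- the actual references/justfile may differ.

default: check

# Run linting, type checks, and tests
check: lint test

lint:
    uv run ruff check src/ tests/
    uv run mypy src/

test:
    uv run pytest --cov=src tests/

build:
    uv build

clean:
    rm -rf dist/ .pytest_cache/ .mypy_cache/ .ruff_cache/

train:
    uv run python -m src.train
```

Keeping every recipe a plain `uv run` invocation means the same commands work locally and in CI without extra environment setup.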

### 2. Docker 🐳

Multi-stage build:

```bash
cp references/Dockerfile ../your-project/
docker build -t my-model .
docker run my-model
```

Optimizations:
- Layer caching (`uv sync` runs before `COPY src/`, so dependency layers are rebuilt only when the lockfile changes)
- Minimal runtime image
- Non-root user
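
The template is in `references/Dockerfile`; a hedged sketch of the multi-stage pattern described above, assuming a uv-based project (image names, paths, and the `src.predict` entry point are illustrative, not the actual template):

```dockerfile
# Hypothetical sketch -- the actual references/Dockerfile may differ.

# Build stage: install dependencies before copying source for layer caching
FROM python:3.12-slim AS builder
COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /bin/
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN uv sync --frozen --no-dev --no-install-project
COPY src/ src/
RUN uv sync --frozen --no-dev

# Runtime stage: minimal image, non-root user
FROM python:3.12-slim
RUN useradd --create-home appuser
WORKDIR /app
COPY --from=builder /app /app
USER appuser
ENV PATH="/app/.venv/bin:$PATH"
CMD ["python", "-m", "src.predict"]  # hypothetical entry point
```

Because `uv sync` runs against only `pyproject.toml` and `uv.lock` first, editing source files invalidates just the final layers, not the dependency install.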

### 3. CI/CD (GitHub Actions) 🔄

Automated pipeline:

```bash
cp references/ci-workflow.yml ../your-project/.github/workflows/ci.yml
```

Runs on push/PR:
- Lint (Ruff + MyPy)
- Test (pytest + coverage)
- Build (package + Docker)
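
The workflow file ships in `references/ci-workflow.yml`; a sketch of the pipeline shape it describes, assuming uv-based tooling (job and step names are illustrative, not the actual template):

```yaml
# Hypothetical sketch -- the actual references/ci-workflow.yml may differ.
name: CI

on:
  push:
    branches: [main]
  pull_request:

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv run ruff check src/ tests/
      - run: uv run mypy src/

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv run pytest --cov=src tests/

  build:
    needs: [lint, test]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv build
      - run: docker build -t my-model .
```

Making `build` depend on `lint` and `test` ensures images are only produced from commits that pass all checks.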

## Quick Start

```bash
# Setup task runner
cp references/justfile ./

# Setup CI
mkdir -p .github/workflows
cp references/ci-workflow.yml .github/workflows/ci.yml

# Setup Docker
cp references/Dockerfile ./

# Test locally
just check
docker build -t test .
```

## MLflow Tracking

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

mlflow.autolog()  # auto-capture params, metrics, and model artifacts

X, y = load_iris(return_X_y=True)  # example data; use your own dataset
with mlflow.start_run():
    model = LogisticRegression(max_iter=200)
    mlflow.log_param("max_iter", 200)
    model.fit(X, y)
    acc = accuracy_score(y, model.predict(X))
    mlflow.log_metric("accuracy", acc)
```

## Author

Converted from [MLOps Coding Course](https://github.com/MLOps-Courses/mlops-coding-skills)

## Changelog

### v1.0.0 (2026-02-18)
- Initial OpenClaw conversion
- Added justfile template
- Added Dockerfile
- Added CI workflow

Overview

This skill automates routine MLOps tasks including task running, containerization, CI/CD pipelines, and experiment tracking. I provide templates and practical defaults so you can add a task runner, Docker build, and GitHub Actions workflow to any Python ML project quickly. The goal is reproducible builds, reliable pipelines, and simple experiment logging with MLflow.

How this skill works

I include a justfile for common project tasks (check, test, build, train, clean), a multi-stage Dockerfile optimized for layer caching and minimal runtime images, and a GitHub Actions CI workflow that lints, tests, and builds on push and PR. For experiment tracking, the skill shows MLflow autolog setup and basic logging examples so runs, params, and metrics are captured automatically. Copy the provided templates into your repo and run the commands to integrate the automation immediately.

When to use it

  • You need repeatable tasks for testing, building, and training in a Python ML repo.
  • You want a minimal, secure Docker image with caching and non-root runtime.
  • You want CI that enforces linting, type checks, tests, and artifact builds on PRs.
  • You need simple experiment tracking with MLflow for parameters and metrics.
  • You are onboarding automation into an existing project quickly using templates.

Best practices

  • Keep the justfile focused on reproducible commands and avoid environment-specific hacks.
  • Use multi-stage Docker builds to separate build dependencies from the runtime image.
  • Run linters (Ruff + MyPy) and tests (pytest + coverage) in CI before building images.
  • Enable MLflow autologging during training and log key hyperparameters and metrics.
  • Store CI secrets and registry credentials in the platform’s secure secrets store.

Example use cases

  • Add a justfile and Dockerfile to a research repo to make experiments portable and reproducible.
  • Set up GitHub Actions to block merges that fail linting, typing, or tests.
  • Build a small non-root runtime image for deploying a trained model to production.
  • Track experiment runs with MLflow to compare hyperparameter sweeps and checkpointed models.
  • Use the task runner to standardize developer workflows: test, build, and train with single commands.

FAQ

How do I start using the templates?

Copy the provided justfile, Dockerfile, and CI workflow into your project folders and run the listed commands (`just check`, `docker build`, etc.).

Does the Dockerfile support layer caching for faster builds?

Yes. The Dockerfile is structured to sync dependencies and use layer caching before copying source files to speed iterative builds.

How is experiment metadata captured?

MLflow autolog captures parameters, metrics, and artifacts during training; the example shows explicit logging of params and metrics within a run.