---
name: mlops-automation-cn
version: 1.0.0
description: Task automation, containerization, CI/CD, and experiment tracking
license: MIT
---
# MLOps Automation 🤖
Automate tasks, containers, CI/CD, and ML experiments.
## Features
### 1. Task Runner (just) ⚡
Copy justfile:
```bash
cp references/justfile ../your-project/
```
Tasks:
- `just check` - Run all checks
- `just test` - Run tests
- `just build` - Build package
- `just clean` - Remove artifacts
- `just train` - Run training
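The actual recipes ship in `references/justfile`; a minimal sketch of what such a justfile might contain (the `uv`/`ruff`/`pytest` tooling and the module paths are assumptions, not the template's exact contents):

```just
# Run all checks: lint, types, tests
check: lint test

# Hypothetical lint recipe backing `just check`
lint:
    uv run ruff check src/ tests/
    uv run mypy src/

test:
    uv run pytest --cov=src tests/

build:
    uv build

train:
    uv run python -m src.train

clean:
    rm -rf dist/ .pytest_cache/ .mypy_cache/ .coverage
```

Recipes can depend on other recipes (as `check` does above), so one command runs the full local gate before you push.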
### 2. Docker 🐳
Multi-stage build:
```bash
cp references/Dockerfile ../your-project/
docker build -t my-model .
docker run my-model
```
Optimizations:
- Layer caching: dependencies are installed (`uv sync`) before `src/` is copied, so code changes don't invalidate the dependency layer
- Minimal runtime image (build tooling stays in the builder stage)
- Non-root user
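The shipped template is `references/Dockerfile`; a rough sketch of a multi-stage build with these optimizations (base images, the `uv` workflow, and the entrypoint module are assumptions):

```dockerfile
# Build stage: sync dependencies before copying source so the
# dependency layer is cached across code-only changes
FROM python:3.12-slim AS builder
WORKDIR /app
COPY pyproject.toml uv.lock ./
RUN pip install uv && uv sync --frozen --no-dev
COPY src/ src/

# Runtime stage: minimal image, non-root user
FROM python:3.12-slim
RUN useradd --create-home appuser
USER appuser
WORKDIR /app
COPY --from=builder /app /app
ENV PATH="/app/.venv/bin:$PATH"
CMD ["python", "-m", "src.train"]  # hypothetical entrypoint
```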
### 3. CI/CD (GitHub Actions) 🔄
Automated pipeline:
```bash
cp references/ci-workflow.yml ../your-project/.github/workflows/ci.yml
```
Runs on push/PR:
- Lint (Ruff + MyPy)
- Test (pytest + coverage)
- Build (package + Docker)
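The full workflow lives in `references/ci-workflow.yml`; a condensed sketch of such a pipeline might look like this (action versions, the `setup-uv` action, and job layout are assumptions):

```yaml
name: CI
on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv run ruff check .
      - run: uv run mypy src/

  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: astral-sh/setup-uv@v5
      - run: uv run pytest --cov=src tests/

  build:
    runs-on: ubuntu-latest
    needs: [lint, test]   # only build once lint and test pass
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t my-model .
```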
## Quick Start
```bash
# Setup task runner
cp references/justfile ./
# Setup CI
mkdir -p .github/workflows
cp references/ci-workflow.yml .github/workflows/ci.yml
# Setup Docker
cp references/Dockerfile ./
# Test locally
just check
docker build -t test .
```
## MLflow Tracking
```python
import mlflow

mlflow.autolog()  # auto-capture params, metrics, and model artifacts

with mlflow.start_run():
    mlflow.log_param("lr", 0.001)       # explicit logging works alongside autolog
    model.fit(X, y)                     # model, X, y come from your training code
    mlflow.log_metric("accuracy", acc)  # acc computed by your evaluation step
```
## Author
Converted from [MLOps Coding Course](https://github.com/MLOps-Courses/mlops-coding-skills)
## Changelog
### v1.0.0 (2026-02-18)
- Initial OpenClaw conversion
- Added justfile template
- Added Dockerfile
- Added CI workflow
## FAQ

**How do I start using the templates?**
Copy the provided justfile, Dockerfile, and CI workflow into your project, then run the listed commands (`just check`, `docker build`, etc.).

**Does the Dockerfile support layer caching for faster builds?**
Yes. It syncs dependencies before copying source files, so dependency layers stay cached across iterative builds.

**How is experiment metadata captured?**
MLflow autolog records parameters, metrics, and artifacts during training; the example above also logs a param and a metric explicitly within a run.