Migrate with AI Coding Agent Skills
Valohai is built on standard, well-understood components: YAML configuration, JSON for metrics, plain file I/O for data, and argparse for parameters. No proprietary SDKs, no framework entanglement, no magic.
That design choice pays off in a big way when it comes to migration. Every step of a Valohai migration follows a clear, repeatable pattern:
1. Read your existing code
2. Add a small configuration or path change
3. Validate with `vh lint`
Because these steps are predictable and pattern-based, they're a perfect fit for AI coding agents. We've built a set of open-source Agent Skills that teach your AI assistant how to migrate ML projects to Valohai, step by step.
What Are Agent Skills?
Agent Skills are knowledge modules that plug into AI coding agents. They give your agent deep context about Valohai's conventions, file paths, YAML syntax, and best practices — so it handles Valohai-specific patterns correctly instead of guessing.
Each skill covers one part of the migration:
- `valohai-yaml-step` — Creates valohai.yaml step definitions from your existing scripts
- `valohai-migrate-parameters` — Converts hardcoded values to Valohai-managed parameters
- `valohai-migrate-metrics` — Adds experiment tracking via JSON output
- `valohai-migrate-data` — Migrates data loading to Valohai's input/output system
- `valohai-design-pipelines` — Designs multi-step pipelines from your workflow
- `valohai-project-run` — Sets up projects and runs executions via the CLI
Supported AI Coding Agents
The skills work with all major AI coding agents:
Claude Code — Anthropic's CLI agent
GitHub Copilot — GitHub's coding assistant
Cursor — AI-first code editor
Zencoder — AI coding agent
Windsurf — Codeium's AI editor
Gemini CLI — Google's CLI agent
Codex CLI — OpenAI's CLI agent
And 30+ other agents that follow the open Agent Skills specification.
Install the Skills
The installer auto-detects which agents you have installed and configures the skills for each of them.
To install for a specific agent:
Migrate Your Project
Once installed, tell your agent to use the Valohai skills so they get loaded. Here's what a typical migration session looks like.
Create Your Steps and Pipeline
The agent scans your scripts, identifies frameworks and dependencies, picks appropriate Docker images, generates step definitions with parameters, inputs, outputs, and metrics — and wires it all together into a pipeline. It runs vh lint to validate the result.
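For a sense of what the agent produces, here is a minimal sketch of a generated step definition. The script name, Docker image tag, parameter, and input name are illustrative assumptions, not actual output from the skills:

```yaml
- step:
    name: train-model
    image: pytorch/pytorch:latest  # illustrative image choice
    command:
      - python train.py {parameters}
    parameters:
      - name: learning-rate
        type: float
        default: 0.001
    inputs:
      - name: dataset
```

Running `vh lint` against the generated file catches syntax and schema problems before anything is executed.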
Run Your First Execution
The agent walks you through vh project create, links your directory, and fires off your first execution with vh execution run train-model --adhoc --watch.
Debug Failed Executions
When something breaks, the skills also cover debugging:
The agent pulls the logs, identifies the error, and suggests a fix — whether it's a missing dependency, a wrong file path, or a YAML misconfiguration.
Why This Works
Valohai migrations are mechanical, not creative. Each step follows a well-defined pattern:
- Parameters — Find hardcoded values, add argparse, declare in YAML
- Metrics — Find where metrics are computed, print as JSON
- Data — Find cloud SDK calls, replace with `/valohai/inputs/` paths
- Outputs — Find save calls, redirect to `/valohai/outputs/`
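The patterns above can be sketched in one small script. This is an illustrative example, not skill output: the flag names, the `dataset` input name, and the loss values are assumptions; the `VH_INPUTS_DIR`/`VH_OUTPUTS_DIR` environment variables are set by Valohai during executions, with fallbacks here so the script also runs locally.

```python
import argparse
import json
import os


def parse_args(argv=None):
    # Parameters: formerly hardcoded values become argparse flags
    # that Valohai can override per execution.
    parser = argparse.ArgumentParser()
    parser.add_argument("--learning-rate", type=float, default=0.001)
    parser.add_argument("--epochs", type=int, default=10)
    return parser.parse_args(argv)


def report_metrics(epoch, loss):
    # Metrics: print one JSON object per line to stdout;
    # Valohai collects these as experiment metadata.
    print(json.dumps({"epoch": epoch, "loss": loss}))


def resolve_paths():
    # Data and outputs: read from /valohai/inputs/<input-name>/,
    # write to /valohai/outputs/ instead of calling a cloud SDK.
    inputs = os.getenv("VH_INPUTS_DIR", "/valohai/inputs")
    outputs = os.getenv("VH_OUTPUTS_DIR", "/valohai/outputs")
    return os.path.join(inputs, "dataset"), outputs


if __name__ == "__main__":
    args = parse_args()
    dataset_dir, outputs_dir = resolve_paths()
    for epoch in range(args.epochs):
        report_metrics(epoch, loss=1.0 / (epoch + 1))  # placeholder loss
```

Because the script only touches argparse, JSON on stdout, and plain file paths, it remains runnable outside Valohai with no SDK dependency.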
These patterns are the same whether you're migrating a PyTorch training script, a TensorFlow preprocessing pipeline, or a scikit-learn evaluation job. The agent applies the same rules each time, consistently.
No SDK to learn. No framework-specific integration to configure. Your code stays portable — the configuration lives in valohai.yaml where it belongs.
Next Steps
- Migration overview — The full migration strategy
- Define your job types — Manual guide to writing valohai.yaml
- Why migrate to Valohai? — Business case and phased adoption