Introduction
Run ML anywhere, on any cloud or on-prem, without changing your code.
Valohai is a modular MLOps platform that orchestrates your ML workflows through configuration, without any invasive SDKs touching your code.
Run experiments on any infrastructure: AWS, Azure, GCP, Oracle, Snowflake, on-premises machines, Kubernetes, or Slurm. Start where it hurts most (data versioning, pipelines, compute efficiency) and expand from there.
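To make the configuration-driven approach concrete, here is a minimal sketch of a valohai.yaml step. The step name, image, script, and dataset URI are illustrative placeholders, not part of any real project:

```yaml
# valohai.yaml — declares a runnable step. Valohai injects parameters
# and downloads inputs at runtime, so train.py needs no Valohai SDK.
- step:
    name: train-model            # hypothetical step name
    image: python:3.11           # any Docker image you already use
    command:
      - pip install -r requirements.txt
      - python train.py {parameters}
    parameters:
      - name: epochs
        type: integer
        default: 10
    inputs:
      - name: dataset            # fetched and cached before the run starts
        default: s3://my-bucket/data/train.csv   # placeholder URI
```

Because the step lives in configuration rather than in your code, the same definition runs unchanged on any connected environment.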
Already Running ML Somewhere?
Valohai fits into your existing stack.
We don't require rip-and-replace. If you're using MLflow, SageMaker, Kubeflow, or custom scripts, keep them. Valohai adds orchestration, reproducibility, and compute efficiency on top of what you already have.
Migrate your ML jobs to Valohai — step-by-step guide
See how others migrated — YOLO, Mistral, MMDetection3D, and more
Start Building
New to Valohai?
Get your first execution running in 10 minutes:
Common Tasks
Example Projects
See Valohai in action with production-ready templates:
Computer Vision
YOLO Object Detection — train and deploy YOLOv8
MMDetection3D — 3D object detection pipelines
NLP & LLM
Mistral Fine-Tuning — fine-tune LLMs with distributed training
RAG Documentation Assistant — build a retrieval-augmented chatbot
Audio & Data Engineering
Learn the Platform
Core Concepts
Understand how Valohai works and why it's built this way:
Docker in Valohai — bring your own images
Data Versioning — automatic lineage and caching
Pipelines — chain jobs with dependency graphs (see the sketch after this list)
Reproducibility by Default — every run is traceable
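As a taste of how pipelines are declared, here is a hedged sketch of chaining two steps in valohai.yaml. The step and node names are hypothetical, and it assumes both steps are already defined in the same file:

```yaml
# A sketch of a two-node pipeline; node and step names are placeholders.
- pipeline:
    name: train-pipeline
    nodes:
      - name: preprocess
        type: execution
        step: preprocess-data    # assumes a step with this name exists
      - name: train
        type: execution
        step: train-model
    edges:
      # route the preprocess node's outputs into the train node's "dataset" input
      - [preprocess.output.*, train.input.dataset]
```

Each node runs as its own execution, and data flowing along an edge is versioned automatically, which is where the lineage and caching described above come from.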
How-To Guides
Task-focused instructions for specific workflows:
Get Help
Need support? support.valohai.com
Want training? 🎓 Valohai Academy
See what's new? 🆕 Changelog