Workload management

We encapsulate machine learning workloads in entities called executions. An execution is similar to “a job” or “an experiment” in other systems, but the emphasis is that an execution is a smaller piece of a much larger data science process.

Simply put, an execution is one or more commands run on a remote server. Check out the Executions documentation page to learn more about the high-level principles of executions.

Executions can be anything from data generation and preprocessing to model training and evaluation. The What is Valohai? page explains different use cases in more detail.

Each of your executions implements a step, e.g. “preprocessing” or “training”.

Steps are defined in the valohai.yaml configuration file. If your project doesn’t yet have a valohai.yaml, your executions don’t implement any step.
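As a rough sketch, a step definition in valohai.yaml might look like the following. The step name, Docker image, and script name here are placeholders, not part of any real project:

```yaml
# valohai.yaml (sketch): defines one step that an execution can implement
- step:
    name: preprocessing        # the step your executions will implement
    image: python:3.9          # Docker image the commands run inside
    command: python preprocess.py
```

Each execution created from this project would then run the step’s command on a remote server inside the given Docker image.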

This section explains how to run and manage data science workloads on Valohai.