The valohai.yaml file serves as a configuration blueprint for your machine learning experiments, allowing you to define all the necessary jobs, their parameters, and their dependencies in a structured manner. This file is typically stored in your project’s repository, making it easy to version and reproduce experiments.
If you don’t want to write YAML by hand, you can generate it with valohai-utils: define your Valohai steps in Python and then run vh yaml step myfile.py to generate (or update) the valohai.yaml file. Check the valohai-utils section for more information.
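For instance, a minimal sketch with valohai-utils might look like the following; the step name, image, and defaults are assumptions chosen to mirror the train-model step in the example further down:

```python
# train_model.py — a minimal valohai-utils sketch; the step name, image,
# and defaults are illustrative, mirroring the example YAML below.
import valohai

valohai.prepare(
    step="train-model",
    image="tensorflow/tensorflow:2.6.0",
    default_parameters={"epochs": 5, "learning_rate": 0.001},
    default_inputs={
        "dataset": "https://valohaidemo.blob.core.windows.net/mnist/preprocessed_mnist.npz",
    },
)
```

Running vh yaml step train_model.py then writes the corresponding step definition into valohai.yaml.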
Here’s a brief overview of what a valohai.yaml file typically contains:
- Name: A user-friendly label for a step, pipeline, or deployment. You trigger a specific step by referring to it by name.
- Environment: Specifies the default execution environment for jobs. It can refer to a set of cloud-based or on-premises machines, or a combination of both.
- Image: The Docker image that serves as the foundational environment for your step. This image contains essential software, libraries, and packages needed to run your code. For example, it might include Python 3.9, PyTorch, and various Python packages. The Docker image shouldn’t contain your own code or data.
- Command: The command or commands to execute within the chosen environment. Typically this runs your machine learning training script, but you can define one or multiple commands (e.g., python train.py, mkdir test, apt-get install, pip install).
- Inputs: Describes the necessary inputs for your experiment, such as datasets, models, or other files. Valohai downloads and caches these inputs, making them accessible in your code as if they were local files (see the sketch after this list).
- Parameters: Enables you to set hyperparameters and other configurable settings for your experiment. These parameters can be easily adjusted between runs, allowing you to modify job configurations or perform hyperparameter tuning (again, see the sketch after this list).
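To make the Inputs and Parameters bullets concrete, here is a minimal sketch of what the receiving side of a step might look like. It assumes the step’s command passes {parameters}, which Valohai expands into command-line flags such as --epochs=5; the file name preprocessed_mnist.npz is an assumption matching the example below.

```python
# A minimal sketch of a step's script, assuming the step command
# passes {parameters} (expanded to --epochs=5 --learning_rate=0.001).
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--epochs", type=int, default=5)
parser.add_argument("--learning_rate", type=float, default=0.001)
args = parser.parse_args()

# Valohai downloads each declared input into /valohai/inputs/<input-name>/,
# so it can be read like any local file.
dataset_path = "/valohai/inputs/dataset/preprocessed_mnist.npz"
print(f"Training for {args.epochs} epochs on {dataset_path}")
```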
Example valohai.yaml
```yaml
---

- step:
    name: preprocess-dataset
    image: python:3.9
    command:
      - pip install numpy valohai-utils
      - python ./preprocess_dataset.py
    inputs:
      - name: dataset
        default: https://valohaidemo.blob.core.windows.net/mnist/mnist.npz

- step:
    name: train-model
    image: tensorflow/tensorflow:2.6.0
    command:
      - pip install valohai-utils
      - python ./train_model.py {parameters}
    parameters:
      - name: epochs
        default: 5
        type: integer
      - name: learning_rate
        default: 0.001
        type: float
    inputs:
      - name: dataset
        default: https://valohaidemo.blob.core.windows.net/mnist/preprocessed_mnist.npz

- step:
    name: batch-inference
    image: tensorflow/tensorflow:2.6.0
    command:
      - pip install pillow valohai-utils
      - python ./batch_inference.py
    inputs:
      - name: model
      - name: images
        default:
          - https://valohaidemo.blob.core.windows.net/mnist/four-inverted.png
          - https://valohaidemo.blob.core.windows.net/mnist/five-inverted.png
          - https://valohaidemo.blob.core.windows.net/mnist/five-normal.jpg

- pipeline:
    name: training-pipeline
    nodes:
      - name: preprocess
        type: execution
        step: preprocess-dataset
      - name: train
        type: execution
        step: train-model
        override:
          inputs:
            - name: dataset
      - name: evaluate
        type: execution
        step: batch-inference
    edges:
      - [preprocess.output.preprocessed_mnist.npz, train.input.dataset]
      - [train.output.model*, evaluate.input.model]
```
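Once this file is committed, you can launch a step or the pipeline by the names defined above. A sketch using the valohai-cli, assuming the CLI is installed and the project is linked with vh project link (--adhoc sends your local files instead of a committed version):

```shell
vh execution run preprocess-dataset --adhoc
vh pipeline run training-pipeline
```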