Currently, your valohai.yaml file defines two steps: one for training and another for inference. We have run them separately so far, and can continue to do so.
However, we can also connect them in a pipeline, where the training job runs first, followed by the inference job. The entire pipeline is defined in the valohai.yaml configuration file.
- Nodes represent individual jobs in the pipeline, such as executions.
- Edges define how data or parameters flow between these nodes.
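As a sketch, an edge that passes an output file from one node into an input of another follows the pattern below (the node, output, and input names here are placeholders, not values from this guide):

```yaml
# Hypothetical edge: routes the file <output-name> produced by
# <source-node> into the input <input-name> of <target-node>.
edges:
  - [source-node.output.<output-name>, target-node.input.<input-name>]
```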
Define a pipeline in valohai.yaml
Add a pipeline definition to your valohai.yaml file:
```yaml
- pipeline:
    name: train-and-predict
    nodes:
      - name: train
        step: yolo
        type: execution
      - name: inference
        step: inference
        type: execution
        override:
          inputs:
            - name: model
            - name: images
              default: https://ultralytics.com/images/bus.jpg
    edges:
      - [train.output.best.onnx, inference.input.model]
```
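The pipeline above assumes your valohai.yaml already contains the two steps it references: `yolo` for training and `inference` for prediction. As a rough sketch of what those step definitions might look like (the commands and Docker image below are illustrative assumptions, not values from this guide):

```yaml
# Illustrative step definitions; your actual image, commands,
# and inputs will differ.
- step:
    name: yolo
    image: ultralytics/ultralytics:latest  # assumed image
    command: python train.py               # assumed command
- step:
    name: inference
    image: ultralytics/ultralytics:latest  # assumed image
    command: python predict.py             # assumed command
    inputs:
      - name: model
      - name: images
```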
Run from the command-line
You can run the pipeline from the command line:
```shell
vh pipeline run train-and-predict --adhoc
```
Visit the user interface to access your pipeline. You can click the boxes in the graph to view the details of each node (number 1 in the picture below). Once the training job finishes, the inference job starts, with the model passed between them. You can find the logs and outputs of the pipeline in the upper right corner (number 2 in the picture below).