Tasks inside a pipeline come in handy when you need to run a hyperparameter optimization or parameter sweep as one of the pipeline's steps.
All outputs from the Task node's executions are passed to the next node.
How are outputs handled?
Note that currently, if the outputs from your Task's executions share the same file name, only one of them is picked (effectively at random) and passed to the next node. To avoid this, make sure each execution in the Task writes its outputs under unique file names.
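As a minimal sketch of this (assuming a learning_rate parameter and the /valohai/outputs output directory; adapt the names to your own step), each execution in the Task can embed its parameter value in the output file name:

# train.py - minimal sketch: write each execution's output under a unique name
import argparse
import json
from pathlib import Path

parser = argparse.ArgumentParser()
parser.add_argument("--learning_rate", type=float, default=0.001)
args = parser.parse_args()

# ... train the model here ...
model_state = {"learning_rate": args.learning_rate}

# Embedding the hyperparameter value in the file name gives every execution
# in the Task a distinct output (e.g. model-lr-0.001.json, model-lr-0.01.json).
output_dir = Path("/valohai/outputs")  # assumed platform output directory
output_dir.mkdir(parents=True, exist_ok=True)
(output_dir / f"model-lr-{args.learning_rate}.json").write_text(json.dumps(model_state))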
A pipeline can consist of executions, tasks, or deployments. Below you'll find an example of a pipeline where the train node is defined as type: task.
- pipeline:
    name: Training Pipeline
    nodes:
      - name: preprocess
        type: execution
        step: Preprocess dataset (MNIST)
      - name: train
        type: task
        step: Train model (MNIST)
        on-error: stop-all
        override:
          inputs:
            - name: training-set-images
            - name: training-set-labels
            - name: test-set-images
            - name: test-set-labels
      - name: evaluate
        type: execution
        step: Batch inference (MNIST)
    edges:
      - [preprocess.output.*train-images*, train.input.training-set-images]
      - [preprocess.output.*train-labels*, train.input.training-set-labels]
      - [preprocess.output.*test-images*, train.input.test-set-images]
      - [preprocess.output.*test-labels*, train.input.test-set-labels]
      - [train.output.model*, evaluate.input.model]
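For reference, here is a rough sketch of what the Train model (MNIST) step's code could look like on the script side, assuming the usual Valohai conventions of reading each named input from /valohai/inputs/<input-name>/ and writing outputs to /valohai/outputs/ (the file names and uniqueness suffix below are illustrative, not part of the example above):

# A sketch of the "Train model (MNIST)" step, not the actual example code.
import uuid
from pathlib import Path

def input_file(name: str) -> Path:
    # Each input declared under override.inputs is mounted as its own directory.
    return next(Path("/valohai/inputs", name).iterdir())

train_images = input_file("training-set-images")
train_labels = input_file("training-set-labels")
test_images = input_file("test-set-images")
test_labels = input_file("test-set-labels")

# ... train and evaluate the model here ...

# A file name starting with "model" is what the edge
# [train.output.model*, evaluate.input.model] matches; the random suffix keeps
# each Task execution's output unique, as recommended above.
output_dir = Path("/valohai/outputs")
output_dir.mkdir(parents=True, exist_ok=True)
(output_dir / f"model-{uuid.uuid4().hex[:8]}.h5").write_bytes(b"")  # placeholder payload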
On-Error behavior
By default, the whole pipeline will stop if a single execution in the Task errors. You can change this default behavior by setting the on-error property for the node.
The options are:
- stop-all: This is the default behavior. If one execution in the Task node fails, the whole node is marked as errored and the pipeline stops.
- continue: Continue executing the Task node even if an execution inside the Task errors. The expectation is that at least one of the executions in the Task completes successfully.
- stop-next: Stops only the nodes that follow the errored node.
Example
The example below shows a pipeline with two parallel task nodes.
- pipeline:
    name: Training Pipeline
    nodes:
      - name: preprocess
        type: execution
        step: preprocess-dataset
      - name: train
        type: task
        on-error: stop-next
        step: train-model
        override:
          inputs:
            - name: dataset
      - name: evaluate
        type: execution
        step: batch-inference
      - name: train2
        type: task
        on-error: continue
        step: train-model
        override:
          inputs:
            - name: dataset
      - name: evaluate2
        type: execution
        step: batch-inference
    edges:
      - [preprocess.output.preprocessed_mnist.npz, train.input.dataset]
      - [preprocess.output.preprocessed_mnist.npz, train2.input.dataset]
      - [train.output.model*, evaluate.input.model]
      - [train2.output.model*, evaluate2.input.model]
- train is defined with on-error: stop-next
- train2 is defined with on-error: continue
Each of the task nodes runs two executions, and in each of them one execution will fail.
Based on our on-error rules:
- When an execution in train fails, it will fail the whole node and the pipeline won't continue to the next nodes on that path.
- Even if an execution in train2 fails, the node will be marked as successful and we'll continue to execute the next step on that path: the evaluate2 node.
Launch using the user interface
You can easily convert any existing execution node in a pipeline to a task node in the user interface.
- Open your project’s pipelines tab
- Create a new pipeline
- Select the right blueprint from the drop-down menu
- Click on a node that has parameters
- Click on Convert to task (below the graph)
- Scroll down to the Parameters section and configure your Task
- Create a pipeline