# Run Hyperparameter Sweeps
Once you've validated your training code in a notebook, you can launch hyperparameter sweeps to find optimal model configurations. This workflow uses Run Remote to create an execution, which you then convert into a Task for parallel tuning.
## Prerequisites

Before launching a sweep:

- Your notebook runs top to bottom without manual intervention (for example, `jupyter nbconvert --to notebook --execute your-notebook.ipynb` completes cleanly)
- Training logic reads its values from parameters rather than hardcoded constants
- Model performance is logged as metadata
## Prepare Your Notebook

Define your step configuration with `valohai.prepare()`:
```python
import valohai

valohai.prepare(
    step='model-training',
    image='python:3.9',
    default_inputs={
        'dataset': 's3://mybucket/data/train.csv'
    },
    default_parameters={
        'learning_rate': 0.001,
        'batch_size': 32,
        'epochs': 10
    }
)

# Your training code
learning_rate = valohai.parameters('learning_rate').value
batch_size = valohai.parameters('batch_size').value
epochs = valohai.parameters('epochs').value

# Train model and log metrics
for epoch in range(epochs):
    # ... training loop ...
    with valohai.metadata.logger() as logger:
        logger.log('epoch', epoch)
        logger.log('accuracy', accuracy)
        logger.log('loss', loss)
```

The parameters you define here become tunable in your Task.
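If you want the snippet above to run end to end, the elided training loop needs to produce the `accuracy` and `loss` values being logged. Here is one hypothetical stand-in (a toy logistic-regression step on synthetic data, not part of the Valohai API) purely to show where those values might come from:

```python
import numpy as np

# Toy data and weights: illustrative stand-ins for your own model.
rng = np.random.default_rng(42)
X = rng.normal(size=(256, 4))                # synthetic features
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # synthetic binary labels
weights = np.zeros(4)

def train_one_epoch(weights, X, y, learning_rate):
    """One gradient step of logistic regression; returns updated
    weights plus the accuracy/loss values to log as metadata."""
    preds = 1 / (1 + np.exp(-X @ weights))    # sigmoid outputs
    grad = X.T @ (preds - y) / len(y)         # logistic-loss gradient
    weights = weights - learning_rate * grad  # gradient descent step
    loss = float(np.mean(-(y * np.log(preds + 1e-9)
                           + (1 - y) * np.log(1 - preds + 1e-9))))
    accuracy = float(np.mean((preds > 0.5) == y))
    return weights, accuracy, loss

weights, accuracy, loss = train_one_epoch(weights, X, y, learning_rate=0.001)
```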
## Run Remote
Click the Run Remote button in Jupyter. Valohai executes your notebook end-to-end and creates a standard execution.
Wait for the execution to complete successfully. If it fails, debug in your notebook and try again.
## Convert to Task

Once your execution succeeds:

1. Navigate to the completed execution in the Valohai UI.
2. Click the Task icon (or the Create Task button).
3. Select your Task Type (the sketch after this list illustrates how the first two strategies differ):
   - Grid Search — test all parameter combinations
   - Random Search — sample the parameter space randomly
   - Bayesian Optimization — use the results of earlier executions to choose the next parameter values to try
4. Configure the parameter ranges you want to test.
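To make the first two Task Types concrete, here's a minimal illustration of how they cover the same parameter space. Valohai generates the combinations for you, so this is only a mental model; the parameter values are invented placeholders:

```python
import itertools
import random

learning_rates = [0.0001, 0.001, 0.01]
batch_sizes = [16, 32, 64]

# Grid Search: every combination, so 3 x 3 = 9 executions.
grid = list(itertools.product(learning_rates, batch_sizes))

# Random Search: a fixed budget of random samples, here 4 executions.
random.seed(0)
randomized = [(random.choice(learning_rates), random.choice(batch_sizes))
              for _ in range(4)]

print(f"grid search: {len(grid)} executions")
print(f"random search: {len(randomized)} executions")
```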
Valohai launches multiple executions in parallel, each testing different parameter values. Compare results in the Task view to find the best configuration.
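Conceptually, that comparison reduces to a max over the logged metric. A sketch with hypothetical results (the numbers are invented placeholders standing in for the metadata each execution logged):

```python
# Hypothetical per-execution results, mirroring what the Task view
# shows: the tested parameters plus the final logged metric.
results = [
    {'learning_rate': 0.0001, 'batch_size': 16, 'accuracy': 0.89},
    {'learning_rate': 0.001,  'batch_size': 32, 'accuracy': 0.93},
    {'learning_rate': 0.01,   'batch_size': 64, 'accuracy': 0.87},
]

best = max(results, key=lambda r: r['accuracy'])
print(best)  # the winning parameter combination
```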
## When to Use This Workflow

Good for:

- Quick hyperparameter exploration on proven notebook code
- Prototyping tuning strategies before moving to YAML

Better alternatives:

- For production tuning workflows, define Tasks in `valohai.yaml`
- For complex multi-step pipelines with tuning, use pipeline-based tasks
- For recurring tuning jobs, use scheduled executions
This notebook → Task workflow is convenient for one-off experiments, but production tuning should use proper YAML definitions for versioning and reproducibility.
## Next Steps

- Tasks & Parallel Execution — deep dive into hyperparameter optimization
- Grid Search — exhaustive parameter testing
- Bayesian Optimization — smart parameter exploration
- Convert to Production Script — move tuning logic to YAML