Announcements
Stay up to date with the latest Valohai features and improvements. Explore new capabilities, enhancements, and updates designed to streamline your ML development workflow.
Interactive Terminal: Debug Your Running Executions in Real-Time
Valohai now lets you run terminal commands inside a running execution, giving you direct access to your execution environment without interrupting your workflow. When something goes wrong during a long-running training job, you can jump into the running execution to inspect processes, check GPU utilization, examine file outputs, or validate your Docker environment in real time. No need to wait for failures, restart from scratch, or guess what went wrong from logs alone.
Use cases
Debug data loading issues before they derail your entire training run
Monitor GPU utilization and memory usage in real time
Verify Docker configurations and dependencies are correctly installed
Test commands and file paths before incorporating them into your execution steps
Inspect model checkpoints as they're being saved during training
How it works
Set the VH_INTERACTIVE environment variable to true or 1 in your execution. This adds an input field to the terminal in the Log tab, where you can send commands directly to your running execution. Your code needs to be set up to handle incoming commands.
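As a minimal sketch of what that handling might look like, assuming each command typed into the Log tab arrives as a line on the process's standard input (check the linked docs for the exact delivery mechanism; the shell-execution approach here is illustrative):

import subprocess
import sys
import threading

def handle_commands():
    # Assumption: each command sent from the Log tab arrives as one
    # line on stdin; verify the delivery mechanism in the Valohai docs.
    for line in sys.stdin:
        cmd = line.strip()
        if not cmd:
            continue
        # Run the command in a shell and echo its output back into the logs.
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        print(result.stdout or result.stderr, flush=True)

# Requires VH_INTERACTIVE=true (or 1) on the execution.
# Listen for commands in the background while training continues.
threading.Thread(target=handle_commands, daemon=True).start()

# ... your training loop runs here ...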
Learn more
Embed Rich Content Directly in Your Executions
Valohai executions now support embedding external content directly in the execution view. While you could always link to dashboards and visualizations from tools like Weights & Biases, you can now embed them inline. This means that your team can see training curves, monitoring dashboards, and experiment results without leaving Valohai or opening multiple tabs. This keeps your ML experiment context consolidated in one place, improving reproducibility and making it easier for your team to understand what was tried and what the results were.
Use cases
Embed Weights & Biases dashboards showing training curves and model performance metrics
Display TensorBoard visualizations directly in your execution view
Include production monitoring dashboards for deployed models
Show experiment tracking tools like MLflow or Neptune alongside execution logs
Add custom visualization dashboards built with Plotly, Streamlit, or similar tools
How it works
Add a Service Button to your execution by printing a command to stderr (not stdout):
import sys

print('::show-service-button::{"url":"https://dashboard.example.com/","name":"Dashboard","style":"embed"}', file=sys.stderr)

Learn more
Compare Model Predictions Side-by-Side with Image Comparison
Valohai now includes image comparison capabilities that let you visually compare model predictions across different executions. Whether you're evaluating object detection bounding boxes or segmentation masks, you can overlay predictions from different model versions, adjust colors and opacity, and quickly identify performance improvements or regressions. Save metadata with your images to automatically group related outputs across executions, or manually stack specific images for detailed comparison. Everything happens directly in Valohai, with no files to download and no tools to switch between.
Use cases
Compare object detection performance between model versions to identify reduced false positives
Evaluate segmentation quality improvements across training iterations
Review predictions on validation sets to spot edge cases or failure modes
Compare multiple model architectures on the same test images
Navigate through large prediction sets efficiently using grouped comparisons
How it works
Save metadata with your output images to define groups for automatic organization. Select the executions you want to compare in the Executions view and click the Compare button. You can then overlay images, adjust visualization settings, and navigate through grouped predictions to evaluate model performance.
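As a minimal sketch of saving grouping metadata alongside an output image, assuming Valohai's sidecar convention of a <filename>.metadata.json file in the outputs directory (the "group" key name here is illustrative; see the docs for the attribute the comparison view groups on):

import json
from pathlib import Path

outputs = Path("/valohai/outputs")  # Valohai's default outputs directory
image_path = outputs / "sample_017_prediction.png"
# ... write your prediction image to image_path here ...

# A sidecar file named <filename>.metadata.json attaches metadata
# to the output; the "group" key below is a hypothetical grouping key.
with open(f"{image_path}.metadata.json", "w") as f:
    json.dump({"group": "sample_017"}, f)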
Learn more