
Announcements

Output Preview Grid for Compare Executions

Valohai's Compare Executions view now features an image grid tab that lets you visually compare image outputs side by side across executions. With synchronized hover states and filtering capabilities, you can quickly spot differences in generated images across different runs.

Use cases

  • Compare generated images across different model checkpoints or training runs

  • Review visual outputs from hyperparameter sweeps to identify trends

  • Quality-check computer vision model outputs across multiple test sets

  • Spot differences in generated images from experiment variations

  • Validate consistency of image outputs when testing model architecture changes

How it works

Navigate to the Compare Executions view and select the new Preview Grid tab. The grid displays image outputs from your selected executions in a synchronized layout, with each column representing one execution. As you hover over an image in one column, the corresponding images in other columns are highlighted, making it easy to compare the same output across different runs.

Use the filter field to narrow down which outputs are displayed in the grid, focusing on specific file name patterns. The grid supports image files that can be previewed directly in the browser.

Learn more

Image Preview Grid

Execution Selector for Comparing Executions

The Compare Executions view now includes a dedicated execution selector sidebar that makes it easier to choose which executions to compare. With built-in filtering support, you can quickly find and switch between executions in your comparison workflows.

Use cases

  • Quickly swap out executions in your comparison without returning to the main executions list

  • Filter executions by status, date range, or other criteria while building your comparison

  • Compare different combinations of experiment runs to identify the best performers

  • Build comparison workflows that involve multiple iterations of selection and analysis

  • Maintain comparison context while exploring different execution combinations

How it works

When you enter the Compare Executions view, the execution selector sidebar appears alongside your comparison workspace. Use the filtering controls to narrow down the list of available executions, then select the ones you want to compare by checking their boxes. You can add or remove executions from your comparison without losing the current view state.

The selector maintains your filter settings as you work, making it easy to iterate through different comparison scenarios within the same workflow.

Learn more

Compare Executions

Grouped Metadata Plot

A new grouped chart view helps you compare distributions across experiment groups at a glance. This visualization combines box plots with a statistics table, supporting grouping by categorical metadata keys to reveal patterns and outliers in your experiment results.

Use cases

  • Compare model performance distributions across different hyperparameter configurations

  • Analyze training metrics grouped by dataset version or data split

  • Identify outliers and variance patterns when testing multiple model architectures

  • Visualize resource utilization statistics grouped by instance type or execution environment

  • Generate statistical summaries for A/B testing or multi-variant experiments

How it works

The grouped metadata plot automatically generates box plots for your numeric metadata, with each box representing the distribution of values within a group defined by categorical metadata keys. The accompanying statistics table shows key metrics like median, quartiles, and outliers for each group.

You can filter the visualization by metadata values and choose to group by either metadata keys or specific metadata values, bringing pivot-table-like interactivity to your experiment analysis. Access the grouped plot from any execution or datum browser view where metadata is available.
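The statistics shown in the accompanying table can be reproduced outside the UI as well. The sketch below, using only the Python standard library, computes per-group medians and quartiles from hypothetical metadata rows (the "optimizer" grouping key and accuracy values are illustrative, not from Valohai):

```python
from collections import defaultdict
import statistics

# Hypothetical metadata rows: one (group key, metric value) pair per
# execution, e.g. an "optimizer" tag and a final accuracy value.
rows = [
    ("adam", 0.91), ("adam", 0.93), ("adam", 0.89),
    ("sgd", 0.84), ("sgd", 0.88), ("sgd", 0.86),
]

groups = defaultdict(list)
for key, value in rows:
    groups[key].append(value)

summary = {}
for key, values in groups.items():
    # statistics.quantiles with n=4 returns the three quartile cut points
    q1, q2, q3 = statistics.quantiles(values, n=4)
    summary[key] = {"median": q2, "q1": q1, "q3": q3, "n": len(values)}
```

This mirrors what the grouped plot does automatically: bucket numeric values by a categorical key, then summarize each bucket's distribution.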

Learn more

Grouped Metadata Plot

Transient Environment Variables

Valohai now supports transient environment variables that can be injected into job payloads without persisting them in your project configuration. This enables you to pass short-lived, secret-capable variables through the API while maintaining security best practices and keeping sensitive data out of your version-controlled configuration.

Use cases

  • Inject API keys or access tokens for external services that rotate frequently without updating project settings

  • Pass experiment-specific credentials that should not be shared across all executions

  • Manage temporary authentication tokens for CI/CD pipelines that trigger Valohai jobs

  • Provide user-specific or session-specific variables when programmatically creating executions

  • Handle sensitive configuration that shouldn't be stored in project YAML or visible in the UI

How it works

You can define transient environment variables via the Valohai API when creating executions or through the new editor in user and organization settings. These variables are injected directly into the job payload at runtime and are not persisted in your project configuration. They work alongside regular environment variables but are only available for the specific execution context where they're provided.

To use transient variables via the API, include them in your execution creation payload. To manage them through the UI, navigate to your user or organization settings where you'll find the new transient environment variables editor.
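As a rough sketch of the API route, the payload below adds a transient, secret variable to an execution-creation request. The field names ("environment_variables", "secret"), the endpoint, and all identifiers are assumptions for illustration; consult the Valohai API reference for the actual schema:

```python
import json

# Hypothetical execution-creation payload carrying a transient variable.
# Project id, step name, and field names are illustrative placeholders.
payload = {
    "project": "00000000-0000-0000-0000-000000000000",
    "commit": "main",
    "step": "train",
    "environment_variables": {
        "EXTERNAL_API_TOKEN": {"value": "s3cr3t-rotating-token", "secret": True},
    },
}

body = json.dumps(payload)
# The request itself would look roughly like:
# requests.post("https://app.valohai.com/api/v0/executions/", data=body,
#               headers={"Authorization": "Token <api-token>",
#                        "Content-Type": "application/json"})
```

Because the variable travels only in the request payload, it never appears in the project's version-controlled configuration.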

Learn more

Transient Environment Variables

Interactive Terminal: Debug Your Running Executions in Real-Time

Valohai now supports running terminal commands while an execution is running, giving you direct access to your execution environment without interrupting your workflow. When something goes wrong during a long-running training job, you can jump into the running execution to inspect processes, check GPU utilization, examine file outputs, or validate your Docker environment in real-time. No need to wait for failures, restart from scratch, or guess what went wrong from logs alone.

Use cases

  • Debug data loading issues before they derail your entire training run

  • Monitor GPU utilization and memory usage in real-time

  • Verify Docker configurations and dependencies are correctly installed

  • Test commands and file paths before incorporating them into your execution steps

  • Inspect model checkpoints as they're being saved during training

How it works

Set the VH_INTERACTIVE environment variable to true or 1 in your execution. This adds an input field to the terminal in the Log tab, where you can send commands directly to your running execution. Your code needs to be set up to handle incoming commands.
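One way to set the variable is in your valohai.yaml step definition. A minimal sketch, in which the step name, image, and command are placeholders (check the valohai.yaml reference for the exact environment-variable syntax):

```yaml
- step:
    name: train-model        # hypothetical step name
    image: python:3.11       # hypothetical image
    command: python train.py
    environment-variables:
      - name: VH_INTERACTIVE
        default: "1"
```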

Learn more

Interactive Terminal documentation

Embed Rich Content Directly in Your Executions

Valohai executions now support embedding external content directly in the execution view. While you could always link to dashboards and visualizations from tools like Weights & Biases, you can now embed them inline. This means that your team can see training curves, monitoring dashboards, and experiment results without leaving Valohai or opening multiple tabs. This keeps your ML experiment context consolidated in one place, improving reproducibility and making it easier for your team to understand what was tried and what the results were.

Use cases

  • Embed Weights & Biases dashboards showing training curves and model performance metrics

  • Display TensorBoard visualizations directly in your execution view

  • Include production monitoring dashboards for deployed models

  • Show experiment tracking tools like MLflow or Neptune alongside execution logs

  • Add custom visualization dashboards built with Plotly, Streamlit, or similar tools

How it works

Add a Service Button to your execution by printing a command to stderr (not stdout):

import sys

print('::show-service-button::{"url":"https://dashboard.example.com/","name":"Dashboard","style":"embed"}', file=sys.stderr)

Learn more

Service Buttons documentation

Compare Model Predictions Side-by-Side with Image Comparison

Valohai now includes image comparison capabilities that let you visually compare model predictions across different executions. Whether you're evaluating object detection bounding boxes or segmentation masks, you can overlay predictions from different model versions, adjust colors and opacity, and quickly identify performance improvements or regressions. Save metadata with your images to automatically group related outputs across executions, or manually stack specific images for detailed comparison. All of this happens directly in Valohai, without downloading files or switching tools.

Use cases

  • Compare object detection performance between model versions to identify reduced false positives

  • Evaluate segmentation quality improvements across training iterations

  • Review predictions on validation sets to spot edge cases or failure modes

  • Compare multiple model architectures on the same test images

  • Navigate through large prediction sets efficiently using grouped comparisons

How it works

Save metadata with your output images to define groups for automatic organization. Select the executions you want to compare in the Executions view and click the Compare button. You can then overlay images, adjust visualization settings, and navigate through grouped predictions to evaluate model performance.
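A common way to attach metadata to an output in Valohai is a JSON sidecar file saved next to the output itself. The sketch below writes an image plus a sidecar; the grouping key ("sample_id") and file names are illustrative assumptions, and the VH_OUTPUTS_DIR fallback lets the snippet run outside a Valohai job:

```python
import json
import os
import tempfile
from pathlib import Path

# On Valohai, VH_OUTPUTS_DIR points at the outputs directory; the temp-dir
# fallback is only so this sketch also runs locally.
outputs = Path(os.environ.get("VH_OUTPUTS_DIR", tempfile.mkdtemp()))
outputs.mkdir(parents=True, exist_ok=True)

# Write the prediction image (placeholder bytes here).
image_path = outputs / "prediction_0042.png"
image_path.write_bytes(b"...png bytes from your model...")

# Sidecar metadata used to group this image with its counterparts
# from other executions; "sample_id" is a hypothetical grouping key.
sidecar = image_path.with_name(image_path.name + ".metadata.json")
sidecar.write_text(json.dumps({"sample_id": "0042", "model": "v2"}))
```

With the same grouping key saved across runs, the comparison view can line up each sample's predictions from different executions automatically.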

Learn more

Image Comparison documentation
