Time Series
Time series graphs show how your metrics change over time—watch loss decrease, accuracy improve, and learning rates adjust as training progresses.
This is the default visualization in Valohai. Any numeric metric you log automatically becomes plottable.
Quick Start
1. Log Metrics with a Step Counter
Print metrics with a progression counter (epoch, step, iteration):
import json

for epoch in range(num_epochs):
    train_loss = train_epoch(model, train_loader)
    val_loss = validate(model, val_loader)
    print(json.dumps({
        "epoch": epoch,
        "train_loss": train_loss,
        "val_loss": val_loss
    }))
2. Open Metadata Tab
Open your execution
Click the Metadata tab
Time series graphs appear automatically
3. Select Horizontal Axis
Default: Metrics plot against _time (when they were logged)
Better: Use your step counter for clearer progression
Click the Horizontal Axis dropdown
Select epoch (or step, iteration, etc.)
The graph updates immediately
Customizing Your Graph
Add or Remove Metrics
To add metrics to the graph:
Look for the Vertical Axes section on the right
Click Add/remove
Select which metrics to visualize
Each metric appears as a line on the graph
To remove metrics:
Find the metric in the Vertical Axes list
Click the × or remove button
Metric disappears from the graph
Smooth Noisy Data
Training metrics often have noise. Use smoothing to see trends more clearly.
To apply smoothing:
Find the metric in the Vertical Axes section
Drag the Smoothing slider
Move right to increase smoothing
"No smoothing" appears when the slider is at zero
How it works: Smoothing applies a moving average to your data. The original data points remain, but the line becomes smoother.
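For intuition, here is a minimal sketch of a simple windowed moving average; the exact smoothing formula behind the slider may differ (many tools use an exponential variant), so treat this purely as an illustration.
# Illustration only: a simple windowed moving average.
# The graph's actual smoothing formula may differ.
def moving_average(values, window=5):
    smoothed = []
    for i in range(len(values)):
        start = max(0, i - window + 1)
        chunk = values[start:i + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

noisy_loss = [0.9, 0.7, 0.8, 0.5, 0.6, 0.4, 0.45, 0.3]
print(moving_average(noisy_loss, window=3))
A larger window averages over more points, which flattens spikes but also delays visible trend changes.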
Use Multiple Vertical Axes
Plot metrics with different scales on the same graph:
Example: Loss (0-1 range) and learning rate (0.0001-0.01 range)
To use left and right axes:
Each metric has a Vertical Axis Side control
Click < for left axis (default)
Click > for right axis
Text shows which axis is active: "Using left vertical axis" or "Using right vertical axis"
This prevents one metric from dwarfing another due to scale differences.
Chart Options
At the bottom of the controls, you'll find:
Logarithmic Vertical Scale: Use a log scale for the vertical axis. Useful when metrics span multiple orders of magnitude (e.g., learning rate decay from 0.1 to 0.0001).
Show Legend: Toggle the legend that identifies each line. Uncheck it to hide the legend for a cleaner view.
One Color per Execution: When comparing multiple executions, use the same color for all metrics from one execution. This makes it easier to track which execution each line belongs to.
Multiple Graphs in Tabs
Create separate tabs for different metric groups.
To create a new tab:
Click the + button next to visualization tabs
Type a name (e.g., "Loss Curves", "Accuracy", "Learning Rate")
Configure metrics for this tab independently
Use cases:
Loss tab: train_loss and val_loss together
Accuracy tab: train_accuracy and val_accuracy
Learning rate tab: Track LR schedule separately
Resource usage tab: GPU utilization, memory usage
Each tab can have different:
Horizontal axis selection
Smoothing settings
Vertical axis configurations
Logarithmic scale settings
Interactive Features
All graphs use Plotly, which provides rich interactivity:
Zoom
Click and drag on the graph to zoom into a specific region.
Use cases:
Examine early training behavior
Investigate sudden spikes or drops
Focus on final epochs for fine-grained convergence analysis
Pan
Hold Shift and drag to pan around the graph while zoomed in.
Reset View
Double-click anywhere on the graph to reset zoom and pan to the original view.
Hover for Values
Hover over any point to see:
Exact value
Horizontal axis value (epoch/step)
Metric name
Save as Image
Click the camera icon in the graph toolbar (top right) to download the graph as a PNG image.
Perfect for:
Including in reports
Sharing with teammates
Presentations
Common Patterns
Monitor Training vs. Validation
Plot train and validation metrics together to detect overfitting:
print(json.dumps({
    "epoch": epoch,
    "train_loss": train_loss,
    "train_accuracy": train_acc,
    "val_loss": val_loss,
    "val_accuracy": val_acc
}))
What to look for:
Healthy training: Train and val losses both decrease
Overfitting: Train loss decreases but val loss increases
Underfitting: Both losses remain high
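Optionally, you can make the train/validation gap easier to spot by logging it as its own metric; this is not required by Valohai, and the metric name loss_gap below is just an example.
print(json.dumps({
    "epoch": epoch,
    "train_loss": train_loss,
    "val_loss": val_loss,
    "loss_gap": val_loss - train_loss  # grows steadily when overfitting sets in
}))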
Track Learning Rate Schedule
Log learning rate alongside loss to understand training dynamics:
print(json.dumps({
    "epoch": epoch,
    "learning_rate": optimizer.param_groups[0]['lr'],
    "train_loss": train_loss
}))
Visualization tip: Put learning rate on the right vertical axis (different scale than loss).
Multiple Metrics on Same Scale
Compare related metrics with similar scales:
print(json.dumps({
    "epoch": epoch,
    "precision": precision,
    "recall": recall,
    "f1_score": f1
}))
All three metrics range from 0 to 1, so they plot well together on one axis.
Best Practices
Always Include a Step Counter
Graphs need a progression indicator:
# Good: Clear progression
print(json.dumps({
    "epoch": epoch,
    "loss": loss
}))

# Avoid: No progression
print(json.dumps({
    "loss": loss  # Will plot against _time, hard to interpret
}))
Use Consistent Naming
Keep metric names identical across experiments for easy comparison:
# Good: Consistent
"train_loss"
"val_loss"
"test_loss"
# Avoid: Inconsistent
"training_loss"
"validation_loss"
"testLoss"Group Related Metrics
Log related metrics in the same JSON object so they share the same timestamp:
# Good: Same timestamp
print(json.dumps({
    "epoch": epoch,
    "train_loss": train_loss,
    "val_loss": val_loss
}))

# ⚠️ Avoid: Separate prints (slightly different timestamps)
print(json.dumps({"epoch": epoch, "train_loss": train_loss}))
print(json.dumps({"epoch": epoch, "val_loss": val_loss}))
Don't Over-Smooth
Smoothing hides detail. Use the minimum smoothing needed to see trends:
# For noisy batch-level metrics: More smoothing
# For epoch-level metrics: Less smoothing
# For final results: No smoothing
Use Descriptive Names
Choose metric names that are self-explanatory:
# Good: Clear meaning
"train_accuracy"
"val_f1_score"
"learning_rate"
# Avoid: Cryptic abbreviations
"tr_acc"
"v_f1"
"lr"Troubleshooting
Metric Not Appearing in Horizontal Axis Dropdown
Cause: Metric has non-numeric values or missing values
Solution: Only consistent numeric metrics can be used as the horizontal axis. Use a simple counter:
# Works as horizontal axis
{"epoch": [1, 2, 3, 4, ...]}
{"step": [100, 200, 300, ...]}
# Won't work as horizontal axis
{"epoch": ["epoch_1", "epoch_2", ...]} # String
{"loss": [0.5, None, 0.3, ...]} # Has NoneGraph Shows Unexpected Jumps
Cause: Using _time as horizontal axis, but logging happened at irregular intervals
Solution: Switch to a step counter like epoch for consistent spacing:
Click Horizontal Axis dropdown
Select epoch or step (assuming you have recorded this)
Lines Overlap and Are Hard to Distinguish
Cause: Too many metrics on one graph
Solutions:
Create multiple tabs for different metric groups
Use different vertical axes (left/right) for different scales
Remove less important metrics temporarily
Can't See Early Training Behavior
Cause: Graph auto-scales to show all data
Solution:
Zoom into the region of interest by clicking and dragging
Or create a separate tab with logarithmic scale
Download Data for External Analysis
Export your metrics for custom visualizations:
Click Download raw data (top right of Metadata tab)
Choose CSV or JSON
Use in Python, R, Excel, or other tools
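As a quick example, the sketch below loads an exported CSV with pandas and plots two loss curves; the file name and the column names (epoch, train_loss, val_loss) are assumptions based on the metrics logged earlier, so adjust them to match your export.
import pandas as pd
import matplotlib.pyplot as plt

# Assumed file and column names; adjust to match your exported data.
df = pd.read_csv("execution-metadata.csv")
df.plot(x="epoch", y=["train_loss", "val_loss"])
plt.xlabel("epoch")
plt.ylabel("loss")
plt.savefig("loss_curves.png")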
Next Steps
Compare multiple executions to see how different settings affect training
Create confusion matrices for classification analysis
Compare output images from different runs
Back to Visualize Metrics overview