Triggers
Triggers are Valohai's native automation feature. They launch specific, pre-configured workloads when given conditions are met, no custom code required.
Triggers can launch executions or pipelines, fetch repository updates, and more.
Why Use Triggers?
Secure by default: Built-in authentication with flexible options (HMAC, JWT, static tokens)
No infrastructure needed: No servers to maintain, no polling loops, no message queues
Rate limiting included: Protect against accidental runaway costs
Visual configuration: Set up complex automation through the Valohai UI
Audit trail: Every trigger launch is logged for debugging and compliance
Three Types of Triggers
Scheduled Triggers
Run workloads on a recurring calendar schedule or cron expression.
Use cases:
Retrain models nightly with fresh data
Generate weekly performance reports
Run batch predictions every hour
Archive old datasets on the first of each month
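In standard five-field cron syntax, schedules for the use cases above might look like this (the expressions are illustrative, not taken from any specific Valohai configuration):

```python
# Illustrative five-field cron expressions (minute, hour, day-of-month,
# month, day-of-week) matching the use cases above.
NIGHTLY_RETRAIN = "0 2 * * *"    # every night at 02:00
WEEKLY_REPORT = "0 8 * * 1"      # Mondays at 08:00
HOURLY_BATCH = "0 * * * *"       # at the top of every hour
MONTHLY_ARCHIVE = "0 0 1 * *"    # midnight on the first of each month

for expr in (NIGHTLY_RETRAIN, WEEKLY_REPORT, HOURLY_BATCH, MONTHLY_ARCHIVE):
    assert len(expr.split()) == 5, expr  # standard cron has five fields
```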
Learn about Scheduled Triggers →
Notification Triggers
Launch workloads automatically when Valohai events occur.
Use cases:
Process new dataset versions as soon as they're uploaded
Deploy models immediately after approval
Trigger validation when training completes
Supported events:
Dataset version created
A model's latest approved version changes
Execution/pipeline status changes
And more...
Learn about Notification Triggers →
Webhook Triggers
Receive HTTP requests from external services to launch workloads on demand.
Use cases:
Trigger from Slack commands (/train-model)
Auto-fetch and run when code is pushed to GitHub
Process images when labeling is complete (V7, Label Studio)
Respond to custom application events
Integrate with CI/CD pipelines
Supports industry-standard authentication:
Hash-based Message Authentication Code (HMAC)
JSON Web Tokens (JWT)
Static secret tokens
Timestamp validation for replay attack protection
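As a rough sketch of what HMAC verification does on the receiving end: the receiver recomputes a keyed hash of the raw request body and compares it to the signature the sender supplied. The secret and payload below are made up for illustration:

```python
import hashlib
import hmac

def verify_hmac(secret: bytes, body: bytes, signature: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare it,
    in constant time, to the hex digest the sender supplied."""
    expected = hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

secret = b"shared-secret"                       # hypothetical shared secret
body = b'{"event": "dataset.version.created"}'  # hypothetical payload
good_sig = hmac.new(secret, body, hashlib.sha256).hexdigest()

assert verify_hmac(secret, body, good_sig)             # legitimate request
assert not verify_hmac(secret, b"tampered", good_sig)  # modified payload
```

Because any change to the body changes the digest, a valid signature proves the request came from a party holding the shared secret and was not altered in transit.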
Learn about Webhook Triggers →
How Triggers Work
Every trigger has two components:
1. Conditions (When to Launch)
Conditions validate whether the trigger should fire. All conditions must pass for the trigger to launch.
Schedule conditions:
Cron expressions for flexible scheduling
Simple recurring options (daily, weekdays, monthly)
Webhook conditions:
Authentication (verify the request is legitimate)
Timestamp validation (prevent replay attacks)
Rate limiting (prevent excessive launches)
Payload filtering (only trigger for specific events)
Notification conditions:
Payload filtering (only trigger for specific datasets/models)
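The two time-based webhook checks above can be sketched as follows. This is a minimal illustration of the concepts, not Valohai's implementation; the five-minute skew window and per-minute budget are assumed numbers:

```python
import time
from collections import deque

MAX_SKEW_SECONDS = 300        # illustrative: reject requests older than 5 min
MAX_LAUNCHES_PER_MINUTE = 10  # illustrative rate-limit budget

def timestamp_is_fresh(sent_at: float, now: float) -> bool:
    """Replay protection: a captured request becomes useless once its
    signed timestamp falls outside the allowed window."""
    return abs(now - sent_at) <= MAX_SKEW_SECONDS

class RateLimiter:
    """Sliding-window limiter: allow at most N launches per 60 seconds."""
    def __init__(self, limit: int = MAX_LAUNCHES_PER_MINUTE):
        self.limit = limit
        self.launches: deque = deque()

    def allow(self, now: float) -> bool:
        # Drop launch timestamps that have aged out of the window.
        while self.launches and now - self.launches[0] > 60:
            self.launches.popleft()
        if len(self.launches) < self.limit:
            self.launches.append(now)
            return True
        return False

assert timestamp_is_fresh(1_000.0, now=1_100.0)      # 100 s old: fresh
assert not timestamp_is_fresh(1_000.0, now=2_000.0)  # 1000 s old: replay

limiter = RateLimiter(limit=2)
assert limiter.allow(0.0) and limiter.allow(1.0)
assert not limiter.allow(2.0)  # third launch within the window is blocked
assert limiter.allow(61.5)     # window has slid; launches are allowed again
```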
2. Actions (What to Launch)
Actions define what happens when all conditions pass.
Available actions:
Run an execution
Run a pipeline
Copy a pipeline
Fetch repository updates
Return custom webhook responses
You can chain multiple actions in a single trigger.
Common Patterns
Pattern: Scheduled Training Pipeline
Goal: Retrain your model every night at 2 AM UTC
Setup:
Create trigger with type "Scheduled"
Add condition: Cron schedule 0 2 * * *
Add action: Run pipeline "nightly-training"
Save trigger
Result: Pipeline launches automatically every night. No manual intervention needed.
Pattern: Auto-Process New Data
Goal: When a dataset version is created, automatically run preprocessing
Setup:
Create trigger with type "Notification"
Add action: Run pipeline "data-preprocessing"
Configure payload input to receive dataset version URI
In notifications settings, route "dataset version created" events to this trigger
Save trigger
Result: Upload data → pipeline launches automatically → processed data ready for training.
Pattern: Slack-Triggered Inference
Goal: Team members can trigger inference from Slack with /predict <image-url>
Setup:
Create Slack app with slash command
Create trigger with type "Webhook"
Add authentication condition (HMAC with Slack signing secret)
Add action: Run pipeline "inference"
Configure pipeline to receive image URL from Slack payload
Point Slack slash command URL to trigger webhook URL
Result: Type /predict https://... in Slack → inference runs → results posted back to Slack.
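Slack's request-signing scheme is documented: the signature header carries `v0=` followed by the HMAC-SHA256 of `v0:<timestamp>:<raw body>`, keyed with your app's signing secret. A verification sketch (the secret and payload values are made up):

```python
import hashlib
import hmac

def verify_slack_signature(signing_secret: str, timestamp: str,
                           body: str, signature: str) -> bool:
    """Check Slack's X-Slack-Signature header against a recomputed digest."""
    basestring = f"v0:{timestamp}:{body}"
    digest = hmac.new(signing_secret.encode(), basestring.encode(),
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(f"v0={digest}", signature)

secret = "made-up-signing-secret"  # hypothetical Slack signing secret
ts = "1531420618"
body = "command=%2Fpredict&text=https%3A%2F%2Fexample.com%2Fcat.png"
sig = "v0=" + hmac.new(secret.encode(), f"v0:{ts}:{body}".encode(),
                       hashlib.sha256).hexdigest()

assert verify_slack_signature(secret, ts, body, sig)
assert not verify_slack_signature(secret, ts, body + "&x=1", sig)
```

Pairing this with timestamp validation (Slack sends the timestamp in its own header) also defeats replayed requests.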
Trigger Management
Create triggers: Project Settings → Triggers → Create Trigger
Monitor trigger launches: Project Settings → Triggers → ... menu → View Logs
Debug failed triggers: Check trigger logs for condition failures and action errors
Disable temporarily: Edit trigger and uncheck "Enabled"
Triggers that repeatedly fail will auto-disable to prevent spam. Re-enable them after fixing the issue.
Triggers vs. REST API
Not sure whether to use triggers or the REST API? Here's a quick comparison:
| | Triggers | REST API |
| --- | --- | --- |
| Setup complexity | Visual UI configuration | Write custom code |
| Authentication | Built-in options | You implement it |
| Rate limiting | Included | You implement it |
| Event source | Valohai + webhooks | Anywhere |
| Flexibility | Pre-configured actions | Full programmatic control |
| Use when... | Standard automation needs | Custom integrations |
You can combine both! Use triggers for common patterns, and the REST API when you need custom logic that triggers don't support.
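For example, a script or CI job can itself act as the event source by POSTing to a webhook trigger's URL. The URL and authorization header below are placeholders, not Valohai's actual values; use whatever your trigger's configuration shows:

```python
import json
import urllib.request

# Placeholder values -- substitute the webhook URL and auth scheme from
# your trigger's configuration.
WEBHOOK_URL = "https://example.com/hooks/my-trigger"
STATIC_TOKEN = "my-static-token"

payload = json.dumps({"event": "ci.build.passed"}).encode()
request = urllib.request.Request(
    WEBHOOK_URL,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Token {STATIC_TOKEN}",  # hypothetical header
    },
    method="POST",
)
# urllib.request.urlopen(request)  # uncomment to actually send the request

assert request.get_method() == "POST"
assert request.get_header("Authorization").startswith("Token ")
```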
Next Steps
New to triggers? Start with Scheduled Triggers for the simplest automation
Need event-driven workflows? Explore Notification Triggers
Building integrations? Check out Webhook Triggers