Pipeline Conditions and Actions
Create intelligent pipelines that adapt to runtime conditions. Stop execution when models underperform, require human approval before deployment, or branch logic based on metrics.
Why use conditions?
Without conditions, pipelines run every step regardless of results. With conditions, you can:
Prevent bad deployments: Stop if model accuracy drops below a threshold
Save compute costs: Skip expensive steps when unnecessary
Add safety checks: Require human review before critical operations
Create dynamic workflows: Different paths based on data characteristics
Action structure
Every action has three parts:
actions:
  - when: node-complete             # Trigger event
    if: metadata.accuracy >= 0.95   # Condition (optional)
    then: stop-pipeline             # Action to take

When: Trigger events
node-starting: Before a node begins execution
node-complete: After successful completion
node-error: When a node fails
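As a minimal sketch (assuming node-error can be paired with stop-pipeline just like the other triggers), an action that halts the rest of the pipeline as soon as a node fails could look like this:

actions:
  - when: node-error
    then: stop-pipeline   # Halt remaining nodes if this one fails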
If: Conditions
Conditions compare values using operators:
Comparison: >, >=, <, <=, ==, !=
Sources: metadata.key, parameter.name
Values: Numbers, strings, booleans
Then: Actions
stop-pipeline: Halt entire pipeline execution
require-approval: Pause until a human approves
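Conditions can also read step parameters rather than metadata. As a sketch (the learning_rate parameter name is hypothetical), this pauses for human approval before running a node that was given an unusually high learning rate:

actions:
  - when: node-starting
    if: parameter.learning_rate > 0.01
    then: require-approval   # Hypothetical parameter; review unusual configurations first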
Common patterns
Quality gates
Stop pipeline if model doesn't meet standards:
- pipeline:
    name: model-training-with-gates
    nodes:
      - name: train
        type: execution
        step: train-model
        actions:
          - when: node-complete
            if: metadata.val_accuracy < 0.90
            then: stop-pipeline
      - name: deploy
        type: execution
        step: deploy-model

The model only deploys if validation accuracy is at least 90%.
Human-in-the-loop approval
Require manual review before critical operations:
- pipeline:
    name: production-deployment
    nodes:
      - name: staging-tests
        type: execution
        step: run-integration-tests
      - name: production-deploy
        type: execution
        step: deploy-to-production
        actions:
          - when: node-starting
            then: require-approval

Conditional processing
Different actions based on data characteristics:
- pipeline:
    name: adaptive-processing
    nodes:
      - name: analyze-data
        type: execution
        step: data-analysis
        actions:
          - when: node-complete
            if: metadata.sample_count < 1000
            then: stop-pipeline   # Too few samples
      - name: train-complex-model
        type: execution
        step: train-deep-model

Multi-condition example
Combine multiple conditions for complex logic:
- pipeline:
    name: comprehensive-ml-pipeline
    nodes:
      - name: preprocess
        type: execution
        step: prepare-data
        actions:
          - when: node-complete
            if: metadata.missing_data_pct > 0.3
            then: stop-pipeline      # Too much missing data
      - name: train
        type: execution
        step: train-model
        actions:
          - when: node-complete
            if: metadata.f1_score < 0.85
            then: stop-pipeline      # Performance too low
      - name: validate
        type: execution
        step: validate-model
        actions:
          - when: node-complete
            if: metadata.bias_detected == true
            then: require-approval   # Human review for bias
      - name: deploy
        type: execution
        step: deploy-model
        actions:
          - when: node-starting
            then: require-approval   # Always approve production deploys

Working with metadata
Generate metadata in your code so conditions can read it:
import json

# Training script
accuracy = model.evaluate(X_test, y_test)
print(json.dumps({
    "val_accuracy": accuracy,
    "model_size_mb": model_size / 1024 / 1024,
    "training_time_minutes": training_time / 60
}))

Use in conditions:
actions:
  - when: node-complete
    if: metadata.model_size_mb > 100
    then: require-approval   # Review large models

Handling approvals
When a pipeline requires approval:
Email notification sent to project members
Pipeline pauses at the approval point
Review interface shows:
Node outputs and logs
Metrics that triggered approval
Approve/Reject buttons
Decision logged with timestamp and user
Best practices
1. Log decision context
# Help reviewers understand the approval request
if requires_manual_review:
    print("=== APPROVAL REQUIRED ===")
    print(f"Accuracy: {accuracy:.3f} (threshold: 0.95)")
    print(f"False positive rate: {fp_rate:.3f}")
    print(f"Dataset: {dataset_version}")

2. Use descriptive metadata keys
# Unclear
print(json.dumps({"val": 0.87}))
# Self-documenting
print(json.dumps({"validation_auc_score": 0.87}))Common issues
Condition never triggers
Check that your metadata output is valid JSON:
# Wrong - not JSON
print(f"Accuracy: {acc}")
# Correct - valid JSON
print(json.dumps({"accuracy": acc}))Approval emails not sent
Ensure project members have notifications enabled in their profiles and that the project's settings are configured to send notifications.
Pipeline stops unexpectedly
Add logging before metadata output:
metrics = {"accuracy": acc}
print(f"DEBUG: Outputting metrics: {metrics}")
print(json.dumps(metrics))