Model Artifacts & Versioning

Advanced model versioning patterns, automated deployment workflows, and integration with production systems.


Overview

This guide covers:

  • Versioning strategies for different ML workflows

  • Automated deployment on approval

  • Model artifact management

  • Production deployment patterns

  • Governance through Model Hub


Versioning Strategies

Development vs. Production Versions

Use tags and states to separate development from production:

# Development/experiment version
metadata = {
    "model.pkl": {
        "valohai.model-versions": [
            {
                "model_uri": "model://churn-model/",
                "model_version_tags": ["experiment", "feature-test", "dev"],
                "model_release_note": "Testing new feature engineering",
            },
        ],
        "experiment_id": "exp-042",
        "status": "experimental",
    },
}

# Production candidate version
metadata = {
    "model.pkl": {
        "valohai.model-versions": [
            {
                "model_uri": "model://churn-model/",
                "model_version_tags": ["production-candidate", "validated"],
                "model_release_note": "Ready for staging deployment - passed all quality gates",
            },
        ],
        "validation_passed": True,
        "quality_score": 0.95,
    },
}
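
To attach this metadata to the outputs themselves, one approach is a sidecar file per output. A minimal sketch, assuming Valohai's <filename>.metadata.json sidecar convention and the default /valohai/outputs directory:

import json

# Write each output's metadata to a sidecar file so the platform picks
# it up when the execution finishes (assumed convention:
# /valohai/outputs/<filename>.metadata.json).
for filename, file_metadata in metadata.items():
    with open(f"/valohai/outputs/{filename}.metadata.json", "w") as f:
        json.dump(file_metadata, f)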

Workflow:

  1. Create development versions with the experiment tag (these stay in the Pending state)

  2. Retag the best experiment as production-candidate

  3. Validate, then approve the version

  4. Deploy to production


Semantic Versioning Pattern

Organize versions with semantic meaning:
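
For example, encode a semantic version (MAJOR.MINOR.PATCH) in the version tags and release note. A sketch using the same metadata structure as above; the v2.1.0 tag scheme is a convention, not a platform requirement:

metadata = {
    "model.pkl": {
        "valohai.model-versions": [
            {
                "model_uri": "model://churn-model/",
                # MAJOR for breaking input/output changes, MINOR for
                # retrains with new features, PATCH for retrains on
                # fresh data only.
                "model_version_tags": ["v2.1.0", "minor-release"],
                "model_release_note": "v2.1.0 - added tenure features",
            },
        ],
    },
}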


Environment-Specific Versions

Maintain separate version tracks for different environments:
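
For example, tag each version with its target environment so staging and production tracks can be filtered and promoted independently. A sketch; the environment value and tag names are illustrative:

# Parameterize the target environment per execution.
environment = "staging"  # or "production"

metadata = {
    "model.pkl": {
        "valohai.model-versions": [
            {
                "model_uri": "model://churn-model/",
                "model_version_tags": [environment, "churn-v2"],
                "model_release_note": f"Candidate for {environment}",
            },
        ],
    },
}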


Model Artifact Management

Multi-File Model Packages

Package models with all required artifacts:
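
A sketch, assuming that outputs tagged with the same model URI in one execution are collected into a single model version (the preprocessor and config files are hypothetical artifacts that should travel with the weights):

# Attach every artifact the model needs to the same model version.
shared_version = {
    "model_uri": "model://churn-model/",
    "model_version_tags": ["production-candidate"],
}

metadata = {
    "model.pkl": {"valohai.model-versions": [shared_version]},
    "preprocessor.pkl": {"valohai.model-versions": [shared_version]},
    "config.json": {"valohai.model-versions": [shared_version]},
}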

Deployment: when the model version is used as an execution input, all of its files download together:
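
A sketch of what the consuming step sees, assuming an input named model and Valohai's default input directory layout:

import os

# Every file in the model version lands in the same input directory.
model_dir = "/valohai/inputs/model"
print(sorted(os.listdir(model_dir)))
# e.g. ['config.json', 'model.pkl', 'preprocessor.pkl']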


Framework-Specific Artifacts

TensorFlow/Keras:
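
A minimal save sketch; the tiny Sequential model is a placeholder for your trained network:

import tensorflow as tf

# Placeholder model; substitute your trained network.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# The native .keras format bundles architecture, weights, and optimizer
# state into a single versionable file.
model.save("/valohai/outputs/model.keras")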

PyTorch:
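
A minimal save sketch; storing the state_dict keeps the artifact independent of the training script's class definitions (the tiny network is a placeholder):

import torch
import torch.nn as nn

# Placeholder model; substitute your trained network.
model = nn.Sequential(nn.Linear(10, 1), nn.Sigmoid())

# Save weights only; the loading side rebuilds the architecture and
# calls load_state_dict().
torch.save(model.state_dict(), "/valohai/outputs/model.pt")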


ONNX Export for Deployment

Export to ONNX for cross-framework deployment:
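
A sketch using torch.onnx.export; the placeholder network, tensor names, and shapes are assumptions to adapt:

import torch
import torch.nn as nn

# Placeholder model; substitute your trained network.
model = nn.Sequential(nn.Linear(10, 1), nn.Sigmoid())
model.eval()

# The dummy input fixes the graph's input shape; dynamic_axes keeps the
# batch dimension flexible for serving.
dummy = torch.randn(1, 10)
torch.onnx.export(
    model,
    dummy,
    "/valohai/outputs/model.onnx",
    input_names=["features"],
    output_names=["churn_probability"],
    dynamic_axes={"features": {0: "batch"}},
)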


Production Deployment Patterns

Batch Inference

Use approved models for scheduled batch predictions:
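
A sketch of the inference script (saved as, say, batch_inference.py); the input names, file names, and scikit-learn-style model are assumptions:

import pickle

import pandas as pd

# Load the approved model and the batch of customers to score; paths
# follow Valohai's default input layout.
with open("/valohai/inputs/model/model.pkl", "rb") as f:
    model = pickle.load(f)

customers = pd.read_csv("/valohai/inputs/customers/customers.csv")

# Score and write predictions as an output for downstream consumers.
features = customers.drop(columns=["customer_id"])
customers["churn_probability"] = model.predict_proba(features)[:, 1]
customers[["customer_id", "churn_probability"]].to_csv(
    "/valohai/outputs/predictions.csv", index=False
)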

valohai.yaml:
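
A sketch of the step definition; the image, input names, and URIs are placeholders, and the model URI should point at the approved version in your Model Hub:

- step:
    name: batch-inference
    image: python:3.11
    command:
      - pip install pandas scikit-learn
      - python batch_inference.py
    inputs:
      - name: model
        default: model://churn-model/production
      - name: customers
        default: s3://example-bucket/daily/customers.csv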

Schedule: Run daily at 2 AM to generate predictions for the customer success team.


Real-Time Serving (External)

Export model for external serving platform:
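
A sketch that bundles an exported ONNX model with a small manifest for the external platform (for example, a Triton or KServe deployment you manage); all paths and manifest fields are illustrative:

import json
import shutil

# Copy the exported model into outputs so it can be pulled by the
# external serving pipeline.
shutil.copy("/valohai/inputs/model/model.onnx", "/valohai/outputs/model.onnx")

# A small manifest tells the serving side what it is running.
manifest = {
    "model_name": "churn-model",
    "version_tag": "production",
    "input_name": "features",
    "output_name": "churn_probability",
}
with open("/valohai/outputs/serving_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)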

