Image Inference Example

Classify images in batches using a pre-trained MNIST model. This example processes ten images with TensorFlow 2.4.1.

Batch inference runs as a standard Valohai execution, which means you can schedule it, trigger it via API, or chain it in pipelines.

What you'll need

Two files are available in public storage:

  • Model: s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/model.h5

  • Images: s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/images.zip

Valohai will fetch these automatically when you run the job.
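Inside the execution, each input is downloaded into its own directory under /valohai/inputs/, named after the input. Assuming the inputs are named model and images (matching the step definition sketched below), the files land at:

  /valohai/inputs/model/model.h5
  /valohai/inputs/images/images.zip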

Inference code

This script loads a model, processes images from a zip file, and outputs predictions as JSON.

import json
import os

import numpy as np
from PIL import Image
import tensorflow as tf

# Load the model from Valohai inputs
model = tf.keras.models.load_model("/valohai/inputs/model/model.h5")


def load_image(image_path):
    """Load and preprocess an image for MNIST."""
    image_name = os.path.basename(image_path)
    image = Image.open(image_path)
    image.load()

    # Resize to 28x28, convert to grayscale, and add a batch dimension
    image = image.resize((28, 28)).convert("L")
    image_data = np.array(image).reshape(1, 28, 28)
    image_data = image_data / 255.0  # Normalize pixel values to [0, 1]

    return (image_name, image_data)


def run_inference(image):
    """Run prediction and print as Valohai metadata."""
    image_name, image_data = image
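    # predict returns an array of 10 class scores; argmax picks the most likely digit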
    prediction = np.argmax(model.predict(image_data))

    result = {
        "image": image_name,
        "inferred_digit": str(prediction),
    }

    # Print as Valohai metadata for tracking
    print(json.dumps(result))

    return result


# Process all images
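# The step's commands (see valohai.yaml below) unzip images.zip into /tmp/images/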
results = []
for filename in os.listdir("/tmp/images/"):
    filepath = os.path.join("/tmp/images/", filename)
    results.append(run_inference(load_image(filepath)))

# Save aggregated results to Valohai outputs
with open("/valohai/outputs/results.json", "w") as f:
    json.dump(results, f)

Key parts of the script:

  • The model loads from the Valohai input path /valohai/inputs/model/model.h5

  • run_inference prints each prediction as JSON, which Valohai tracks as execution metadata

  • Aggregated results are written to the Valohai output path /valohai/outputs/results.json

Define the step

Add this to your valohai.yaml:

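A sketch of what the step could look like (the step name, script filename, and Docker image tag are assumptions; adjust them to your project):

- step:
    name: batch-inference
    image: tensorflow/tensorflow:2.4.1
    command:
      - apt-get update && apt-get install -y unzip
      - pip install pillow
      - unzip /valohai/inputs/images/images.zip -d /tmp/images/
      - python batch_inference.py
    inputs:
      - name: model
        default: s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/model.h5
        filename: model.h5
      - name: images
        default: s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/images.zip
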
About the commands:

  • System dependencies (unzip) are installed first

  • Python packages (pillow) are installed next

  • Your script runs last

About inputs:

  • The filename property forces a specific name for the downloaded file (useful when scripts expect exact filenames)

  • You can point default to any cloud storage URL

Run the inference

Execute from your terminal:

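For example, with the Valohai CLI (the step name matches the sketch above; --adhoc runs your local code instead of a committed version):

vh execution run batch-inference --adhoc --watch
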
The --watch flag streams logs to your terminal as the job runs.

Check your results

Find outputs in the Outputs tab:

  • results.json contains all predictions (see the example below)

  • Execution metadata shows per-image predictions
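
Each entry in results.json has the shape returned by run_inference; with two images it might look like this (the filenames here are purely illustrative):

[
  {"image": "digit-3.png", "inferred_digit": "3"},
  {"image": "digit-7.png", "inferred_digit": "7"}
]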

Next: Learn how to schedule recurring inference or process your own images.
