Image Inference Example
Classify images in batches using a pre-trained MNIST model. This example processes ten images with TensorFlow 2.4.1.
Batch inference runs as a standard Valohai execution, which means you can schedule it, trigger it via API, or chain it in pipelines.
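For example, triggering it over the REST API looks roughly like this. This is a sketch: the endpoint and Token header follow Valohai's public API, but the exact payload fields, and where you obtain the project and commit IDs, should be checked against the API reference.

```python
# Hypothetical sketch: create a batch-inference execution via the Valohai API.
# VH_API_TOKEN, PROJECT_ID, and COMMIT_ID are placeholders you supply yourself.
import os
import requests

response = requests.post(
    'https://app.valohai.com/api/v0/executions/',
    headers={'Authorization': f"Token {os.environ['VH_API_TOKEN']}"},
    json={
        'project': 'PROJECT_ID',    # placeholder: your project's ID
        'commit': 'COMMIT_ID',      # placeholder: the commit to run
        'step': 'batch-inference',  # the step name defined in valohai.yaml
    },
)
response.raise_for_status()
print(response.json())  # assumption: the response describes the new execution
```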
What you'll need
Two files are available in public storage:
Model:
s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/model.h5
Images:
s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/images.zip
Valohai will fetch these automatically when you run the job.
Inference code
This script loads a model, processes images from a zip file, and outputs predictions as JSON.
import json
import os

import numpy as np
from PIL import Image
import tensorflow as tf

# Load the model from Valohai inputs
model = tf.keras.models.load_model('/valohai/inputs/model/model.h5')


def load_image(image_path):
    """Load and preprocess an image for MNIST."""
    image_name = os.path.basename(image_path)
    image = Image.open(image_path)
    image.load()
    # Resize to 28x28 and convert to grayscale
    image = image.resize((28, 28)).convert('L')
    image_data = np.array(image).reshape(1, 28, 28)
    image_data = image_data / 255.0  # Normalize pixel values to [0, 1]
    return (image_name, image_data)


def run_inference(image):
    """Run prediction and print it as Valohai metadata."""
    image_name, image_data = image
    prediction = np.argmax(model.predict(image_data))
    # Each JSON object printed to stdout is tracked as Valohai metadata
    print(json.dumps({
        "image": image_name,
        "inferred_digit": str(prediction),
    }))
    return {
        'image': image_name,
        'inferred_digit': str(prediction),
    }


# Process all extracted images
results = []
for filename in os.listdir('/tmp/images/'):
    filepath = os.path.join('/tmp/images/', filename)
    if not os.path.isfile(filepath):  # skip any directories inside the zip
        continue
    results.append(run_inference(load_image(filepath)))

# Save aggregated results to Valohai outputs
with open('/valohai/outputs/results.json', 'w') as f:
    json.dump(results, f)

Highlighted lines:
- The model loads from the Valohai input path /valohai/inputs/model/model.h5
- Each print(json.dumps(...)) call emits per-image metadata that Valohai tracks
- Aggregated results are saved to the Valohai output path /valohai/outputs/results.json
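To dry-run the script outside Valohai first, you can recreate the same directory layout locally. This is a sketch that assumes you have downloaded model.h5 and images.zip into the working directory and can write to /valohai (adapt the paths otherwise):

```python
# Mimic the Valohai input/output layout so batch_inference.py runs unmodified.
import os
import shutil
import zipfile

os.makedirs('/valohai/inputs/model', exist_ok=True)  # the model is read from here
os.makedirs('/valohai/outputs', exist_ok=True)       # results.json is written here
shutil.copy('model.h5', '/valohai/inputs/model/model.h5')

# The step's unzip command, replicated with the standard library
with zipfile.ZipFile('images.zip') as zf:
    zf.extractall('/tmp/images/')

# Now run: python batch_inference.py
```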
Define the step
Add this to your valohai.yaml:
- step:
    name: batch-inference
    image: tensorflow/tensorflow:2.4.1
    command:
      - apt-get update
      - apt-get install unzip -y
      - unzip /valohai/inputs/images/images.zip -d /tmp/images/
      - pip install pillow
      - python batch_inference.py
    inputs:
      - name: model
        default: s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/model.h5
        filename: model.h5
      - name: images
        default: s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/images.zip

About the commands:
- System dependencies (unzip) install first
- Python packages (pillow) install next
- Your script runs last
About inputs:
- filename forces a specific name (useful when scripts expect exact filenames)
- You can point default to any cloud storage URL
Run the inference
Execute from your terminal:
vh execution run batch-inference --adhoc --watch

The --watch flag streams logs to your terminal as the job runs.
Check your results
Find outputs in the Outputs tab:
- results.json contains all predictions
- Execution metadata shows per-image predictions
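Once you download results.json, a few lines of Python are enough to inspect it; the structure below matches what the script writes (a list of image/digit pairs):

```python
# Print each image's predicted digit from the downloaded results.json
import json

with open('results.json') as f:
    results = json.load(f)

for row in results:
    print(f"{row['image']}: predicted digit {row['inferred_digit']}")
```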
Next: Learn how to schedule recurring inference or process your own images.