In this how-to guide, you will work through an example of batch inference with images. We will use TensorFlow 2.4.1 to run inference on ten images with a pre-trained MNIST model.
Data and model
You'll need two files for this tutorial:

- MNIST model: a pre-trained model created with TensorFlow 2.4.1. You can access it here: s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/model.h5
- Images: a set of images to classify, packaged in a zip file. You can access it here: s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/images.zip
You don’t need to download these files separately; they are readily available for your job at the provided locations.
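Before wiring things up in Valohai, it can help to see the preprocessing the inference script will apply to each image. The sketch below stands a throwaway PIL image in for one of the zipped digit images (it is not one of the real files) and shows the shape the model expects:

```python
import numpy as np
from PIL import Image

# A dummy image standing in for one of the files in images.zip.
dummy = Image.new('RGB', (100, 120), color=(200, 200, 200))

# Same preprocessing the inference script applies: 28x28, grayscale ('L'),
# scaled to [0, 1], with a leading batch dimension for model.predict().
image = dummy.resize((28, 28)).convert('L')
image_data = np.array(image).reshape(1, 28, 28) / 255.0

print(image_data.shape)  # (1, 28, 28)
```

The leading `1` is the batch dimension; `model.predict()` expects a batch of inputs even when classifying a single image.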
Inference code
import json
import os

import numpy as np
from PIL import Image
import tensorflow as tf

# Load the model from Valohai inputs
model = tf.keras.models.load_model('/valohai/inputs/model/model.h5')


def load_image(image_path):
    """Load an image and preprocess it into the shape the model expects."""
    image_name = os.path.basename(image_path)
    image = Image.open(image_path)
    image.load()
    # The MNIST model expects 28x28 grayscale input, scaled to [0, 1],
    # with a leading batch dimension.
    image = image.resize((28, 28)).convert('L')
    image_data = np.array(image).reshape(1, 28, 28)
    image_data = image_data / 255.0
    return image_name, image_data


def run_inference(image):
    image_name, image_data = image
    prediction = np.argmax(model.predict(image_data))
    # Print the result as JSON so it also appears in the execution logs.
    print(json.dumps({
        'image': image_name,
        'inferred_digit': str(prediction),
    }))
    return {
        'image': image_name,
        'inferred_digit': str(prediction),
    }


# The valohai.yaml step unzips images.zip into /tmp/images/ before this runs.
results = []
for filename in os.listdir('/tmp/images/'):
    filepath = os.path.join('/tmp/images/', filename)
    results.append(run_inference(load_image(filepath)))

# Anything written to /valohai/outputs/ is saved as an execution output.
with open('/valohai/outputs/results.json', 'w') as f:
    json.dump(results, f)
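The script writes `results.json` as a list with one record per image. A quick sketch of reading it back, with illustrative file names and digits (the actual values depend on your images), writing to /tmp here since /valohai/outputs only exists inside an execution:

```python
import json

# Illustrative records matching the shape run_inference() returns.
sample_results = [
    {'image': 'digit-0.png', 'inferred_digit': '7'},
    {'image': 'digit-1.png', 'inferred_digit': '2'},
]

with open('/tmp/results.json', 'w') as f:
    json.dump(sample_results, f)

with open('/tmp/results.json') as f:
    loaded = json.load(f)

print(loaded[0]['inferred_digit'])  # '7'
```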
Define a valohai.yaml
Batch inference is run as a standard Valohai step, so it launches like any other execution.
- step:
    name: batch-inference
    image: tensorflow/tensorflow:2.4.1
    command:
      - apt-get update
      - apt-get install unzip -y
      - unzip /valohai/inputs/images/images.zip -d /tmp/images/
      - pip install pillow
      - python batch_inference.py
    inputs:
      - name: model
        default: s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/model.h5
        filename: model.h5
      - name: images
        default: s3://valohai-public-files/tutorials/batch-inference/image-batch-inference/images.zip
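The step above installs `unzip` through apt to unpack the images. As an aside, Python's standard-library `zipfile` module can do the same job without the apt-get dependency; here is a sketch demonstrated on a throwaway archive rather than the real images.zip:

```python
import os
import zipfile

# Build a tiny demo archive standing in for images.zip.
with zipfile.ZipFile('/tmp/demo.zip', 'w') as zf:
    zf.writestr('digit.png', b'fake image bytes')

# Equivalent of: unzip /tmp/demo.zip -d /tmp/images_demo/
with zipfile.ZipFile('/tmp/demo.zip') as zf:
    zf.extractall('/tmp/images_demo/')

print(os.listdir('/tmp/images_demo/'))  # ['digit.png']
```

Either approach works; the tutorial's yaml keeps the familiar `unzip` command.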
Run from the command-line
Now you can execute your inference job:
vh execution run batch-inference --adhoc --open-browser
If everything went according to plan, you can now preview the results in the execution's Outputs tab.