Valohai uses Docker images to define your runtime environment. This means the platform can run any code, from C to Python, as long as it runs inside a Docker container.
Don’t store data or ML code in the Docker image
The Docker image should contain only the environment needed to run your code: the libraries, packages, and tools your code relies on. Don’t include your data or code in the image itself. Your code should come from Git, and your data from your data stores.
You can use any Docker image available online. Once you have an initial version working, it makes sense to package your dependencies by building your own images.
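When you do build your own image, keep it to the environment only. The following Dockerfile is a minimal sketch under that assumption; the base image and the pinned package versions are illustrative placeholders, not recommendations:

```dockerfile
# Pin the base image to a specific tag for reproducibility.
FROM python:3.9-slim

# Install only the environment: the libraries and tools your code relies on.
# These packages and versions are examples; substitute your own.
RUN pip install --no-cache-dir pandas==1.3.5 scikit-learn==1.0.2

# Deliberately absent: no COPY of source code or datasets.
# Code comes from Git and data from your data stores at run time.
```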
Here are the most common Docker images currently used on the platform:
tensorflow/tensorflow:<VERSION>-gpu # e.g. 2.6.1-gpu, for GPU support
tensorflow/tensorflow:<VERSION> # e.g. 2.6.1, for CPU only
pytorchlightning/pytorch_lightning:latest-py<VERSION>-torch<VERSION> # e.g. latest-py3.6-torch1.6
pytorch/pytorch:<VERSION> # e.g. 1.3-cuda10.1-cudnn7-runtime
python:<VERSION> # e.g. 3.8.0
r-base:<VERSION> # e.g. 3.6.1
julia:<VERSION> # e.g. 1.3.0
valohai/prophet # Valohai hosted image with Prophet
valohai/xgboost:1.4.2 # Valohai hosted image with XGBoost
valohai/sklearn:1.0 # Valohai hosted image with scikit-learn
valohai/sklearn:0.24.2 # Valohai hosted image with scikit-learn
Which image to use depends on your specific use case, but it usually makes sense to:
- start with as minimal an image as possible
- use a specific image tag (the : part) so everything stays reproducible, as in the sketch below
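To illustrate the second point: on Valohai, the image for each execution is declared per step in your valohai.yaml, so the pinning happens there. A rough sketch of a step definition, with a placeholder step name and command:

```yaml
- step:
    name: train-model
    image: tensorflow/tensorflow:2.6.1-gpu  # pinned tag instead of :latest
    command:
      - python train.py
```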