The Compute and Data Layer of Valohai can be deployed to your GCP project. This enables you to:
- Use your own virtual machine instances to run machine learning jobs.
- Use your own Google Cloud Storage bucket to store training artifacts such as trained models, preprocessed datasets, and visualizations (see the sketch after this list).
- Access databases and data warehouses directly from the workers, which run inside your network.
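For example, once your bucket is configured as the data store, anything an execution writes to Valohai's outputs directory is uploaded there when the job finishes. A minimal sketch; the script and file names are placeholders:

```bash
# Runs inside a Valohai execution on one of your own VM instances.
# Files written to /valohai/outputs are uploaded as artifacts to your
# configured Google Cloud Storage bucket when the execution finishes.
python train.py                          # placeholder training script
cp model.pkl /valohai/outputs/model.pkl  # stored as a training artifact
```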
Valohai doesn’t have direct access to the virtual machine instances that execute the machine learning jobs. Instead, it communicates with a static virtual machine in your GCP project that’s responsible for storing the job queue, job states, and short-term logs.
Deploying resources
You can deploy Valohai to a fresh GCP project using the provided Terraform template.
You can find the Terraform scripts in our public GitHub repository.
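The deployment follows the standard Terraform workflow. A rough sketch; the repository URL and variable name below are placeholders, so check the repository's README for the actual values:

```bash
# Clone the Terraform scripts (placeholder URL -- use the repository linked above).
git clone https://github.com/valohai/terraform-scripts.git
cd terraform-scripts

# Authenticate so Terraform can create resources in your GCP project.
gcloud auth application-default login

terraform init                               # download the required providers
terraform plan -var="project_id=my-project"  # preview the resources to be created
terraform apply -var="project_id=my-project" # create them
```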
Note
Make sure you have enough quota for both vCPUs and GPUs in your GCP project. You can read more about quotas in GCP's documentation.
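You can check your current per-region quotas with the gcloud CLI; for instance (the region is just an example):

```bash
# The "quotas" key in the output lists each metric (e.g. CPUS,
# NVIDIA_T4_GPUS) with its limit and current usage.
gcloud compute regions describe us-central1 --format=json
```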