Here are some commonly asked questions about real-time deployments:
Why is the endpoints file path defined in valohai.yaml?
In your valohai.yaml configuration, you define the file path for endpoints. This file path serves a crucial purpose when creating deployment versions. Here’s why it matters:
- Commit, Endpoint, and Files: When you create a deployment version, you select three key components: the commit, the endpoint, and the files.
- Predictable File Location: The “path” defined in valohai.yaml serves as a reference point for Valohai to save the selected files. Regardless of the original filenames, your code can always expect these files to be located at the specified path.
- Code Flexibility: This approach enhances code flexibility. For instance, if one version uses a file named “model_123x98.pb,” and another version uses “newmodel.pb,” you won’t need to modify your code. It can consistently access the file at the path defined in valohai.yaml.
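To make this concrete, here is a sketch of an endpoint definition in valohai.yaml; the endpoint name, image, and file names are illustrative, not prescribed:

```yaml
- endpoint:
    name: predict-digit
    image: tensorflow/tensorflow:2.6.0
    wsgi: predict_wsgi:predict_wsgi
    files:
      - name: model
        description: Model file produced by training
        path: model.pb
```

Whichever file you attach when creating a deployment version (say, model_123x98.pb in one version and newmodel.pb in the next), your code can always open it at model.pb.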
How do I launch a script from a subfolder in FastAPI deployments?
If you’re using FastAPI, make sure the server-command of your endpoint in valohai.yaml refers to the script with a dotted module path:
server-command: myfolder.predict:app instead of myfolder/predict
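For example, an endpoint definition along these lines would serve an app object defined in myfolder/predict.py; this is a sketch assuming uvicorn is the server, with illustrative name, image, and port values:

```yaml
- endpoint:
    name: predict
    image: python:3.9
    port: 8000
    server-command: uvicorn myfolder.predict:app --host 0.0.0.0 --port 8000
```

Note that uvicorn resolves myfolder.predict:app as a Python import path, which is why a filesystem path like myfolder/predict does not work.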
Why is my deployment stuck in “Pending”?
A deployment is in the state “Deployment Pending” until it has been successfully deployed on your cluster and is ready to accept requests.
- View the logs of your deployment endpoint: The logs will show whether the deployment is pending because of a syntax error or, for example, a missing Python package.
- View the Cluster Status tab: The cluster status will show if there are issues with running the deployment. A common issue is loading a large model into memory when the deployment doesn’t have enough memory reserved, causing it to fail with an out-of-memory (OOM) exception.