The Valohai Ecosystem database connectors allow you to easily run database queries on AWS Redshift, GCP BigQuery and Snowflake.
You can define the query in the Valohai user interface and it will be saved with other details related to the execution. The query results will be saved as execution outputs and will be automatically uploaded to your data store to be used in other jobs you run in Valohai. Like any other execution output, the result file will get a datum URL, which allows you to connect a datum alias to it (e.g. “datum://latest-model”).
You can authenticate to the database using environment variables in Valohai. For Redshift and BigQuery it is also possible to use machine identity (IAM roles) to authenticate the database connection.
Below, you’ll find instructions on how to connect to your database on GCP BigQuery.
Requirements
- BigQuery workspace on your GCP account with imported data.
- GCP Service Account with BigQuery Data Viewer and BigQuery User roles.
- A way to authenticate with the database, either:
  - a keyfile for the service account, or
  - the service account attached to your Valohai environments (machine identity).
Add environment variables
You will need to define the following environment variables for the execution:
- GCP_PROJECT, your GCP project ID.
- GCP_IAM, 1 by default; set to 0 to log in with the keyfile instead of the service account attached to the worker.

If logging in with the service account keyfile (i.e. GCP_IAM is set to 0), also define:
- GCP_KEYFILE_CONTENTS_JSON, the contents of the service account keyfile. Remember to save this as a secret.
It is possible to add the environment variables separately to each execution while creating it in the user interface, but we recommend saving them either under the project Settings or in an organization-level environment variable group (ask your organization admin for help with this).
If needed, you can edit the environment variables in the user interface before starting the execution.
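To illustrate how these variables work together, here is a minimal sketch of the authentication decision they control. The variable names are the ones described above; the logic itself is an illustration of the documented behavior, not the connector's actual implementation, and the project ID and service account email are hypothetical.

```python
import json


def resolve_bigquery_auth(env: dict) -> dict:
    """Decide how to authenticate to BigQuery from the environment
    variables described above (illustrative sketch only)."""
    project = env["GCP_PROJECT"]  # required in both modes
    use_iam = env.get("GCP_IAM", "1") != "0"  # machine identity is the default

    if use_iam:
        # Machine identity: the service account attached to the worker
        # is used automatically; no keyfile is needed.
        return {"project": project, "method": "machine-identity"}

    # Keyfile login: the keyfile contents come from the secret
    # GCP_KEYFILE_CONTENTS_JSON.
    keyfile = json.loads(env["GCP_KEYFILE_CONTENTS_JSON"])
    return {
        "project": project,
        "method": "keyfile",
        "service_account": keyfile.get("client_email"),
    }


if __name__ == "__main__":
    example_env = {
        "GCP_PROJECT": "my-project",  # hypothetical project ID
        "GCP_IAM": "0",
        "GCP_KEYFILE_CONTENTS_JSON": '{"client_email": "sa@my-project.iam.gserviceaccount.com"}',
    }
    print(resolve_bigquery_auth(example_env)["method"])  # prints "keyfile"
```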
Create a Valohai execution
To create an execution with the BigQuery connector, follow the instructions below.
- Open your project
- Click on the Create Execution button under the Executions tab.
- Expand the step library by clicking on the plus sign next to valohai-ecosystem in the left side menu.
- Choose the bigquery-query step under Connectors.
- (Optional) Change the settings, such as the environment or Docker image, under the runtime section if needed.
- Write your SQL query into the field under the Parameters section.
- By default, the query results will be saved as results.csv, but you can also define another path for the output.
- (Optional) Give the output file a datum alias, e.g. bigquery-query, to easily refer to it in your other executions with the input datum://bigquery-query.
- If you did not save the environment variables under the project Settings or in an organization-level environment variable group, add them under the Environment Variables section.
- You can edit and/or add new environment variables if needed.
- Click on Create Execution.
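Once the execution finishes, a downstream execution can consume the result through the datum alias. As a sketch, assuming the downstream step defines an input named bigquery that resolves datum://bigquery-query (the input name is hypothetical), Valohai mounts inputs under /valohai/inputs/&lt;input-name&gt;/, so the file could be read like this:

```python
import csv
from pathlib import Path


def load_query_results(path):
    """Read the connector's CSV output into a list of row dicts."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))


if __name__ == "__main__":
    # "bigquery" is a hypothetical input name resolved from
    # datum://bigquery-query in the downstream step's configuration.
    results = load_query_results(Path("/valohai/inputs/bigquery/results.csv"))
    print(f"{len(results)} rows")
```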