inputs are the data files that are available during step execution.

An input in inputs has four potential properties:





name

The input name; this is shown in the user interface and names the directory where the input files are placed during execution, e.g. /valohai/inputs/my-input-name.



optional

Marks the input as optional, so a URL definition is not required before the step is executed.



filename

Sets a custom name for the downloaded file.



keep-directories

Specifies how much of the directory structure is retained when a wildcard downloads multiple files:

  • none: (default) all files are downloaded directly to /valohai/inputs/myinput

  • full: keeps the full path from the storage root. For example, a file matching s3://special-bucket/foo/bar/**.jpg could end up as /valohai/inputs/myinput/foo/bar/dataset1/a.jpg

  • suffix: keeps the suffix after the “wildcard root”. For example, with s3://special-bucket/foo/bar/*, the special-bucket/foo/bar/ prefix is removed but any relative path after it is kept, so you might end up with /valohai/inputs/myinput/dataset1/a.jpg
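Put together, an input definition in valohai.yaml might look like the following sketch (the step name, Docker image, and bucket paths are illustrative placeholders, not real resources):

```yaml
- step:
    name: train-model
    image: python:3.9
    command: python train.py
    inputs:
      # a single file, renamed on download to
      # /valohai/inputs/model-weights/weights.pth
      - name: model-weights
        default: s3://my-bucket/weights/resnet50.pth
        optional: true
        filename: weights.pth
      # multiple files via wildcard; paths relative to the
      # wildcard root are kept under /valohai/inputs/training-images/
      - name: training-images
        default: s3://my-bucket/dataset/images/**.jpg
        keep-directories: suffix
```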


Currently valid sources for inputs are HTTP(S) URLs and various cloud-provider-specific data stores such as AWS S3 (s3://...), Azure Blob Storage (azure://...), and Google Cloud Storage (gs://...).

See also

Read more about custom data stores on the Data stores documentation page.

For HTTP(S) endpoints, basic access authentication is supported; for the cloud provider stores, access credentials must be configured under project settings.

During step execution, inputs are available under /valohai/inputs/<input name>/<input file>. To see this in action, try running ls -la /valohai/inputs/ as the main command of an execution that has inputs.
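Such an inspection step could be sketched in valohai.yaml roughly as follows (the image and input URL are placeholders):

```yaml
- step:
    name: inspect-inputs
    image: busybox
    command: ls -la /valohai/inputs/
    inputs:
      # the downloaded file appears under /valohai/inputs/my-input/
      - name: my-input
        default: https://example.com/data.csv
```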


You can also download files during the execution with e.g. Python libraries or command-line tools, but this makes your executions slower because it circumvents our input file caching system.

When you specify the actual input, or a default for one, you have three options:

Option #1: Custom Store URL

You can connect private data stores to Valohai projects.

If you connect a store that contains files Valohai doesn’t know about, such as files you have uploaded there yourself, you can use the following syntax to refer to them:

  • Azure Blob Storage: azure://{account_name}/{container_name}/{blob_name}

  • Google Storage: gs://{bucket}/{key}

  • Amazon S3: s3://{bucket}/{key}

  • OpenStack Swift: swift://{project}/{container}/{key}

This syntax also supports wildcards for downloading multiple files:

  • s3://my-bucket/dataset/images/*.jpg for all .jpg (JPEG) files

  • s3://my-bucket/dataset/image-sets/**.jpg for all .jpg (JPEG) files, recursing into subdirectories

You can also interpolate execution parameters into input URIs:

  • s3://my-bucket/dataset/images/{parameter:user-id}/*.jpeg would replace {parameter:user-id} with the value of the parameter user-id during an execution.
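Pairing such an input with the parameter it interpolates might look roughly like this sketch (the step, parameter default, and bucket names are made up):

```yaml
- step:
    name: process-user
    image: python:3.9
    command: python process.py
    parameters:
      # the value of user-id is substituted into the input URI below
      - name: user-id
        type: string
        default: example-user
    inputs:
      - name: images
        default: s3://my-bucket/dataset/images/{parameter:user-id}/*.jpeg
```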


If you are using your own data store, the exact location of each file is shown in the Data browser.

Where to find the file path in your data store.

Option #2: Datum URI

You can use the datum://<identifier> syntax to refer to specific files the Valohai platform already knows about.

Files have a datum identifier if they were uploaded to Valohai either:

  1. by another execution, or

  2. by using the Valohai web interface uploader under the “Data” tab of the project
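In valohai.yaml, a datum input could be sketched like this (the identifier is a placeholder to replace with one copied from your project’s “Data” tab; the step name and image are also made up):

```yaml
- step:
    name: evaluate
    image: python:3.9
    command: python evaluate.py
    inputs:
      # replace <identifier> with the datum URL from the Data tab
      - name: pretrained-model
        default: datum://<identifier>
```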


Find the datum URL through the “datum://” button under the “Data” tab of your project.

Where to find datum URL with identifier.

Option #3: Public HTTP(S) URL

If your data is available through an HTTP(S) address, use the URL as-is.
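For instance, an HTTP(S) input could be sketched as follows (the URL is a placeholder, not a real dataset):

```yaml
inputs:
  # downloaded to /valohai/inputs/raw-data/train.csv
  - name: raw-data
    default: https://example.com/datasets/train.csv
```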