Mount on-premises network file systems to access existing data infrastructure directly from Valohai executions.
## When to Use On-Premises NFS

On-premises NFS mounting serves a different purpose than cloud network storage.
### Data Already Exists on Network Shares

Use when:

- Large datasets are already on corporate NFS servers
- Legacy systems produce data on network shares
- Multiple departments share data on existing file servers
- Migrating terabytes of data is impractical
Example workflow:

1. Medical imaging data lives on the hospital NFS.
2. Mount the volume to your execution.
3. Process the data while meeting compliance requirements.
4. Save results to `/valohai/outputs/` to start tracking them as datums.
5. Everything is versioned and tracked for audit.
### Data Compliance Requirements

Use when:

- Healthcare data must stay in the hospital network (HIPAA)
- Financial data has regulatory restrictions (PCI DSS, GDPR)
- Government data cannot leave a controlled environment
- Corporate policy prohibits cloud data storage
### Hybrid Cloud Strategy

Use when:

- Transitioning gradually to the cloud
- You need access to both on-prem and cloud data
- You want to keep sensitive data on-prem while using cloud compute
- Cost optimization matters (avoid cloud storage costs for large, static datasets)
## Critical Trade-Off: Speed vs. Versioning

⚠️ **Important:** Valohai does NOT version or track files on mounted network storage.

What this means:

- Files read from mounts: not versioned
- Files written to mounts: not versioned
- Files saved to `/valohai/outputs/`: versioned ✅
## On-Prem NFS vs. Valohai Inputs
| Feature | On-Prem NFS Mount | Valohai Inputs |
|---|---|---|
| Versioning | ❌ No tracking | ✅ Full versioning |
| Reproducibility | ❌ Data can change | ✅ Immutable references |
| Data location | ✅ Stays on-premises | ❌ Must be in cloud storage |
| Setup complexity | ⚠️ Network + VPN config | ✅ Simple |
| Speed | ⚠️ Depends on network | ✅ Fast (cloud-native) |
| Best for | Existing on-prem data, compliance | All other cases |
| Compliance | ✅ Data never leaves premises | ❌ Data moves to cloud |
## Recommended Pattern: Mount → Process → Save

Always save processed results to `/valohai/outputs/` for versioning; a complete example follows the configuration sections below.
## Prerequisites

Before mounting on-premises NFS in Valohai:

- **Network connectivity** — Valohai execution environments must reach your on-prem NFS server
- **VPN or Direct Connect** — secure connection between the cloud and on-premises network
- **NFS server accessible** — NFS service running and reachable from Valohai worker IPs
- **Firewall rules** — allow NFS traffic from Valohai workers
- **Mount permissions** — NFS export configured to allow access from Valohai workers
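Before debugging VPN routes or export permissions, it can help to confirm basic reachability from a worker. A minimal sketch, assuming NFSv4 over TCP port 2049; the hostname below is a placeholder for your own server:

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failures, refusals, and timeouts
        return False

# NFSv4 typically listens on TCP 2049; "nfs-server.internal.example" is a placeholder.
print(is_reachable("nfs-server.internal.example", 2049))
```

If this prints `False`, check firewall rules and VPN routing before touching the mount configuration.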
## Mount On-Premises NFS in an Execution

### Basic Mount Configuration

Define the mount in your `valohai.yaml`. For a networked NFS server:
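A sketch of a step-level mount block using the parameters documented below; the step name, image, server name, and paths are placeholders to adapt to your environment:

```yaml
- step:
    name: preprocess
    image: python:3.12
    command: python preprocess.py
    mounts:
      - destination: /mnt/company-data
        source: nfs-server.internal:/exports/datasets
        type: nfs
        readonly: true
```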
Parameters:

- `destination` — mount point inside the container (e.g., `/mnt/company-data`)
- `source` — NFS path (format: `<server>:<export-path>`) or a local mount path
- `type` — `nfs` when specifying a remote server
- `readonly` — `true` (recommended) or `false`
### Mount a Specific NFS Directory

You can also mount only a specific subdirectory from your NFS server:
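For example, mounting a single export subdirectory (the server and path are placeholders):

```yaml
mounts:
  - destination: /mnt/medical-scans
    source: hospital-nas.internal:/medical_scans/2024-Q1
    type: nfs
    readonly: true
```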
### Use Environment Variables in Mount Configuration

Mount configurations support environment variable interpolation, letting you use secrets and runtime variables in your mount paths and credentials.

Example with a CIFS mount using secrets:
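A sketch of what this can look like. The `options` block and the `$VARIABLE` interpolation style shown here are assumptions — check the mount documentation for your Valohai version — and `CIFS_USERNAME`/`CIFS_PASSWORD` are placeholder names for environment variables you would define as secrets:

```yaml
mounts:
  - destination: /mnt/shared-data
    source: //fileserver.internal/research
    type: cifs
    readonly: true
    options:
      username: $CIFS_USERNAME
      password: $CIFS_PASSWORD
```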
**Security tip:** Always mark mount credentials as secrets in the environment variable settings. This prevents them from appearing in the UI while still making them available to executions.
## Decision Tree: Should I Use NFS Mounts?

```
Is your data already on on-premises network storage?
├─ Yes → Does it need to stay on-prem (compliance)?
│   ├─ Yes → Use on-prem NFS mount
│   │        ⚠️ But save processed outputs to /valohai/outputs/
│   │
│   └─ No → Can you move it to cloud storage?
│       ├─ Yes → Use Valohai inputs (versioned, cached)
│       └─ No (too large) → Use on-prem NFS mount
│
└─ No → Use Valohai inputs (datum:// or dataset://)
         ✅ Versioned, reproducible, cached
```
The Mount → Process → Save pattern in code:

```python
import os
import pandas as pd

# 1. Read from the on-prem network mount (NOT versioned)
onprem_path = "/mnt/company-data/raw_datasets/"
files = os.listdir(onprem_path)
print(f"Found {len(files)} files on on-prem storage")

# 2. Process the data
processed_data = []
for filename in files:
    filepath = os.path.join(onprem_path, filename)
    data = preprocess_data(filepath)  # your preprocessing logic
    processed_data.append(data)

# 3. Save processed results to Valohai outputs (VERSIONED ✅)
output_path = "/valohai/outputs/preprocessed_dataset.csv"
df = pd.DataFrame(processed_data)
df.to_csv(output_path, index=False)
print(f"Saved versioned dataset to {output_path}")
```
Why this matters:

- ❌ **Bad:** Mount → Process → Write back to mount (nothing versioned, can't reproduce, no audit trail)
- ✅ **Good:** Mount → Process → Save to `/valohai/outputs/` (processed data versioned, reproducible, compliant)
Prefer read-only mounts for sensitive data:

```yaml
# ❌ Avoid: writable mounts for sensitive data
mounts:
  - destination: /mnt/protected-data
    readonly: false  # risk of data corruption
```
Write results to `/valohai/outputs/`, not back to the mount:

```python
# ❌ Bad: process on-prem data, write back to on-prem
data = process("/mnt/onprem-data/")
data.save("/mnt/onprem-output/")  # NOT versioned, not auditable

# ✅ Good: process on-prem data, save to Valohai outputs
data = process("/mnt/onprem-data/")
data.save("/valohai/outputs/results.csv")  # VERSIONED, auditable
```
Mounted data can change underneath you, which breaks reproducibility:

```python
# Today: process on-prem data
data = load("/mnt/onprem-data/")
model = train(data)

# Next month: the on-prem data has been updated.
# The original model can no longer be reproduced,
# and there is no audit trail of what data was used.
```
To keep an audit trail, snapshot the mounted data into outputs and record its provenance:

```python
import json
from datetime import datetime

# Load from the on-prem mount (current state, not versioned)
data = load("/mnt/onprem-data/")

# Save a snapshot to versioned outputs
data.to_csv("/valohai/outputs/training_snapshot.csv")

# Document the source
metadata = {
    "training_snapshot.csv": {
        "valohai.dataset-versions": [
            {"uri": "dataset://medical-scans/2024-Q1"},
        ],
        "valohai.tags": ["on-premises", "hospital-data"],
        "source": "hospital-nas.internal:/medical_scans",
        "access_date": datetime.now().isoformat(),
        "file_count": len(data),
    },
}

# Save sidecar metadata
with open("/valohai/outputs/valohai.metadata.jsonl", "w") as f:
    for filename, file_meta in metadata.items():
        json.dump({"file": filename, "metadata": file_meta}, f)
        f.write("\n")

# Future executions can train on the versioned snapshot
# with a full audit trail.
```