Kubernetes Shell Access

For Valohai executions running on Kubernetes, you can open debug shells using kubectl exec instead of SSH. This method uses your existing Kubernetes credentials and doesn't require SSH key setup.

When to use this:

  • Your Valohai environment runs on Kubernetes

  • You need quick shell access without SSH configuration

  • You're a platform engineer with cluster access

Differences from SSH debugging:

  • No SSH keys or firewall rules required

  • Uses Kubernetes RBAC for authentication

  • Direct kubectl access to pods

  • Cannot attach IDE debuggers (use SSH methods for that)

Prerequisites

You need cluster-level access to use kubectl:

1. Kubernetes Cluster Access

Configure kubectl to reach your cluster's control plane. This typically means a properly configured kubeconfig file that authenticates with either:

  • A client certificate

  • A service account access token

Verify access:

kubectl get pods -n valohai-workers
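
If that command fails, first confirm which cluster kubectl is pointing at. These checks are a quick sketch and assume the cluster is already present in your kubeconfig:

# Show the context kubectl is currently using
kubectl config current-context

# List all configured contexts and their clusters
kubectl config get-contexts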

2. Required RBAC Permissions

Your Kubernetes user needs permissions in the valohai-workers namespace:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: valohai-workers
  name: debug-executions
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get"]
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]

What this enables:

  • get / list pods - List and view execution pods

  • create pods/exec - Open shell sessions
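
A Role grants nothing on its own; it must be bound to your user or group with a RoleBinding. A minimal sketch, where the binding name and user are illustrative placeholders:

# Bind the debug-executions Role to a single user in the valohai-workers namespace
kubectl create rolebinding debug-executions-binding \
  --role=debug-executions \
  --user=<YOUR-K8S-USER> \
  --namespace valohai-workers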

💡 Contact your Kubernetes administrator if you don't have these permissions.

Open Shell to Execution

1. Start a Valohai Execution

Launch any execution using your Kubernetes environment. No special flags needed - SSH is not required for this method.
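
Any launch method works. For example, with the Valohai CLI (a sketch, assuming a step named train in your valohai.yaml):

# Run the 'train' step as an ad-hoc execution on the Kubernetes environment
vh exec run train --adhoc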

2. Get Pod Name from Logs

Find the unique pod name in the execution logs. The logs will show something like:

Pod name: valohai-exec-a1b2c3d4e5f6
Namespace: valohai-workers
Container: workload

Copy the pod name - you'll use it in the next step.
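
If the logs are not at hand, you can also look the pod up directly from the cluster. This is a sketch and assumes your role allows listing pods in the namespace:

# List execution pods, newest last
kubectl get pods --namespace valohai-workers --sort-by=.metadata.creationTimestamp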

3. Execute Shell Command

kubectl exec -it <POD-NAME> -c workload --namespace valohai-workers -- bash

Example:

kubectl exec -it valohai-exec-a1b2c3d4e5f6 -c workload --namespace valohai-workers -- bash

Command breakdown:

  • -it - Interactive terminal

  • <POD-NAME> - Your execution's unique pod name

  • -c workload - Container name (always workload for Valohai)

  • --namespace valohai-workers - Namespace (always valohai-workers)

  • bash - Shell to open (can use sh if bash unavailable)
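
You can also run a single command without keeping a shell open, for example to check GPU status in one shot (any command available inside the container works):

# Run a one-off command in the execution container and print its output
kubectl exec <POD-NAME> -c workload --namespace valohai-workers -- nvidia-smi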

What You Can Do

Once connected, you have shell access inside the execution container:

Inspect execution state:

# View running processes
ps aux

# Check Python packages
pip list

# Examine logs
cat /valohai/logs/execution.log

# View mounted data
ls /valohai/inputs

Debug code:

# Navigate to your repository
cd /valohai/repository

# Run Python interactively
python

# Test imports
python -c "import your_module"

Monitor resources:

# Memory usage
free -h

# Disk usage
df -h

# GPU status (if applicable)
nvidia-smi

Limitations

What you cannot do with kubectl exec:

  • Attach IDE debuggers (use VS Code or PyCharm with SSH)

  • Access multiple pods simultaneously from one command

  • Open tunnels to services (use SSH tunneling for this)

Comparison to SSH:

Feature              | kubectl exec                     | SSH
Authentication       | Kubernetes RBAC                  | SSH keys
Setup complexity     | Low (if you have cluster access) | Medium (firewall + keys)
IDE debugger support | ❌ No                            | ✅ Yes
Port forwarding      | ❌ No                            | ✅ Yes
Use case             | Quick inspection                 | Interactive debugging

Common Issues

Permission denied?

  • Verify you have get pods and create pods/exec permissions (see the check below)

  • Check you're using the correct namespace: valohai-workers
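
You can test both permissions directly with kubectl auth can-i:

# Both commands should print "yes" if the debug role is bound to your user
kubectl auth can-i get pods --namespace valohai-workers
kubectl auth can-i create pods --subresource=exec --namespace valohai-workers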

Pod not found?

  • Ensure execution is still running (not completed/failed)

  • Verify pod name copied correctly from logs

  • Check you're connected to the correct cluster
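
To confirm the pod still exists and is running, query it by name using the pod name from the execution logs:

# Shows the pod's status (Running, Completed, Error, ...) or a NotFound error
kubectl get pod <POD-NAME> --namespace valohai-workers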

Container 'workload' not found?

  • This is rare - contact Valohai support if you see this

  • Container name should always be workload

Execution exits too quickly?

  • Add sleep 1h to your execution command to keep it alive:

  command:
    - python train.py
    - sleep 1h
