Some organizations require that worker instances do not have public IPs. In such cases a NAT (Network Address Translation) gateway is set up at the edge of the network (e.g. the VPC) to provide outgoing Internet access for the machines. To access the instances from outside the network, for example to debug Valohai jobs, the user must either connect from within the same network (e.g. via a bastion machine) or provide some other way to establish the connection.
This article covers setting up a reverse proxy server with the frp package at the edge of the VPC. The Valohai agent software can be configured to download and use an frp client to set up tunnels through the proxy server.
Jump host setup
Start by setting up a small instance (for example an AWS t2.micro) in your VPC. Note that this machine needs to have a public IP. Set up the firewall rules to allow the following access (an example of creating these rules on AWS follows the list):
- The frps port, for example 7000, from the security group of your Valohai workers (probably called something like valohai-sg-workers). Make sure that this port is not publicly accessible, as there is no access control otherwise.
- A port range exposed to the internet (or the IP range(s) of your users), for example 10000-50000. These ports will be used by the users to connect to the Valohai jobs.
- Temporary SSH access (port 22) from your own IP, or access via e.g. AWS Session Manager.
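If your workers run on AWS and the firewall rules are implemented as security group rules, they could be created with the AWS CLI roughly as sketched below. The security group IDs (sg-JUMPHOST, sg-WORKERS) and the 0.0.0.0/0 CIDR are placeholders for illustration; adjust them to your environment.
# Allow the frps port only from the Valohai worker security group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-JUMPHOST \
  --protocol tcp --port 7000 \
  --source-group sg-WORKERS
# Expose the tunnel port range to the internet (or restrict the CIDR to your users' IP range).
aws ec2 authorize-security-group-ingress \
  --group-id sg-JUMPHOST \
  --protocol tcp --port 10000-50000 \
  --cidr 0.0.0.0/0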
Connect to the jump host and install frps on it.
# Create a directory for the frp binaries and download frps.
sudo mkdir -p /opt/bin
cd /opt/bin
sudo wget https://dist.valohai.com/frp/frp_0.61.0_linux_amd64/frps.gz
# Unpack the binary and make it executable.
sudo gunzip frps.gz
sudo chmod a+x frps
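You can optionally check that the downloaded binary runs. Recent frp releases print their version with the -v flag; if your build does not support it, running the binary with --help should also confirm that it executes.
# Print the frps version to confirm the binary works.
/opt/bin/frps -v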
Create a service file for frps. You can use sudo systemctl edit --force --full frps.service to open an editor for a new service unit file, where you should add the following content.
[Unit]
Description=frp server
After=network.target
[Service]
ExecStart=/opt/bin/frps --log-level=trace
Restart=on-failure
[Install]
WantedBy=multi-user.target
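When started without a configuration file, frps listens on its default bind port, 7000. If you picked a different frps port, or simply want the settings to be explicit, recent frp releases can read a TOML configuration file instead; a minimal sketch, assuming /etc/frp/frps.toml as the location:
# /etc/frp/frps.toml
# Port that the frp clients on the Valohai workers connect to.
bindPort = 7000
# Same verbose logging as the --log-level=trace flag above.
log.level = "trace"
In that case, change the ExecStart line above to /opt/bin/frps -c /etc/frp/frps.toml.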
Run and enable the frps service.
# Reload the service files.
sudo systemctl daemon-reload
# Enable the service to start when the machine is booted, and start it right away.
sudo systemctl enable --now frps
# Check the status for the service.
sudo systemctl status frps
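If the service does not come up cleanly, you can inspect its logs and verify that it is listening on the expected port (7000 in this example).
# Follow the frps logs.
sudo journalctl -u frps -f
# Check that frps is listening on the expected port.
sudo ss -tlnp | grep 7000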
Send the following information to your Valohai contact to update the setup for the worker instances:
- Jump host public and private IPs
- The frps port you set for the workers (e.g. 7000)
- The port range with public access (e.g. 10000-50000)
If you are managing the workers yourself, add the following to the extra-config-json in your prep template and rerun the setup script. Note that this is in addition to any other values you might already have in your extra-config-json.
"PEON_PORT_FORWARDING_CONFIG": "type=frp,server=<jump-host-private-IP>:<frps-port>,server_public=<jump-host-public-IP>,port_range=<port-range>"
Optionally, if you have a static machine in the same network where you have manually installed the Valohai worker, you can set the following in the /etc/peon.config file.
PORT_FORWARDING_CONFIG=type=frp,server=<jump-host-private-IP>:<frps-port>,server_public=<jump-host-public-IP>,port_range=<port-range>
Accessing the Valohai jobs
After the jump host has been set up and the Valohai workers updated, you can follow the instructions for your IDE of choice to set up the debugger connection. Note that even though you define the debugger port in the CLI command or in the UI, the port you will connect to is different in this setup. The actual port is indicated in the Valohai execution logs.
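Before attaching the debugger, you can optionally check from your own machine that the forwarded port is reachable through the jump host. A minimal sketch, assuming netcat is installed and using the port printed in the execution logs:
# Verify that the tunneled port on the jump host accepts TCP connections.
nc -vz <jump-host-public-IP> <port-from-execution-logs>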