A self-hoster’s guide to port forwarding and SSH tunnels

Self-hosting with NAT, port forwarding, and dynamic DNS is kinda fragile. I’ve been using a very cheap cloud-hosted nginx VPS to forward traffic to my self-hosted servers, and it works nicely.

But tonight I set up an SSH tunnel that punches out from my server, skipping the NAT, forwarding, and DNS stuff entirely. It’ll dial home from anywhere there’s network, so I could even take my server to the park and it should work over 5G.

I just think that’s neat.

I’ve tried to explain a bit of my thinking, along with a loose guide for setting this up yourself. These instructions are for someone who’s vaguely familiar with nginx and SSH.

  1. How it usually works
  2. A more resilient port forwarding over SSH
  3. How to set up an nginx proxy to forward to your self-hosted server
  4. How to forward ports to your self-hosted server over SSH
  5. How to set up a persistent SSH tunnel/port forward with systemd
  6. My observations using SSH tunneling

How it usually works

A typical port forwarding setup opens ports on each device along the way. When all the right ports are open, traffic flows all the way through from the internet to my self-hosted server.

A traditional port forwarding scenario requires dyndns to update the dynamic IP, as well as forwarding of ports through each device until traffic reaches the self-hosted server.

In my example, I have an nginx server on a cheap VPS in the cloud that handles forwarding. That VPS looks up my home IP address using a dynamic DNS service, then forwards traffic on port 80 to that IP. In turn, my router is configured to forward traffic from port 80 on to the self-hosted server on my network.

It works well, but that’s a lot of configuration:

  1. Firstly, I need direct access to the ’net from my ISP, whereas these days most ISPs put you behind carrier-grade NAT by default.
  2. If my IP changes, there’s an outage while we wait for the DNS record to update.
  3. If my router gets factory reset or replaced with a new one, I need to configure port forwarding again.
  4. Similarly, the router is in charge of assigning IPs on my LAN, so I need to ensure my self-hosted server has a static IP.

A more resilient port forwarding over SSH

We can cut out all the router and dynamic DNS config by reversing the flow of traffic. Instead of opening ports to allow traffic into my network, I can configure my self-hosted server to connect out to the nginx server and open a port over SSH.

You could also use a VPN, but I chose SSH because it works with zero config.

A self-hosted server creates an SSH tunnel to the remote server and routes traffic that way, without DynDNS or router configuration.

In this diagram, the self-hosted server makes a connection to the nginx server in the cloud via SSH. That SSH connection creates a tunnel that opens port 8080 on the nginx server and forwards its traffic to port 80 on the self-hosted server. Nginx is then configured to forward traffic to http://localhost:8080, rather than to port 80 on my router.

So the router doesn’t require any configuration, the cloud-hosted VPS only needs to be configured once, and the dynamic DNS service isn’t needed at all, because the self-hosted server opens the tunnel itself from wherever it happens to be.

The huge benefit of this zero-config approach is that I can move my self-hosted server to another network entirely, and it will dial back into the nginx server and continue to work as normal.


How to set up an nginx proxy to forward to your self-hosted server

Putting an nginx server in front of your self-hosted stuff is a good idea because it reduces your exposure to scary internet risks slightly, and can also be used as a caching layer to cut down on bandwidth use.

In these examples, I’m forwarding traffic to localhost:8080 and localhost:8443, and will set up an SSH tunnel to carry that traffic later.

There are two ways to set up forwarding:

As a regular nginx caching proxy:

This is a good option when you want to utilise caching. However, you’ll need to set up your Let’s Encrypt certificates on the server yourself (see the certbot sketch after the config below).

server {
  server_name myserver.au;

  location / {
    # forward everything to the SSH tunnel on localhost:8080
    proxy_pass http://localhost:8080/;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Port $server_port;
  }
}
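
For the certificates, certbot’s nginx plugin is the path of least resistance. A minimal sketch, assuming a Debian/Ubuntu VPS and that myserver.au already resolves to it:

# install certbot and its nginx plugin
sudo apt install certbot python3-certbot-nginx

# request a certificate and let certbot wire it into the server block above
sudo certbot --nginx -d myserver.au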

As a socket forwarding proxy

This option doesn’t proxy HTTP traffic; it just forwards connections at the TCP level.

# note: the stream block lives at the top level of nginx.conf, alongside (not inside) the http block
stream {
  server {
    listen 110.1.1.58:443;
    proxy_pass localhost:8443;
  }
  server {
    listen 110.1.1.58:80;
    proxy_pass localhost:8080;
  }
}

This method is easier for something like Coolify that deals with virtual hosts and SSL for you, but the downside is that there’s no caching, we can’t add an X-Forwarded-For header, and it eats up an entire IP address, because you can’t mix a socket forward with a regular proxy_pass on the same address and port.


How to forward ports to your self-hosted server over SSH

First, generate SSH keys on your self-hosted server, and allow logins from your self-hosted server to your nginx server. DigitalOcean has a guide to setting up SSH keys.
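
If you haven’t done this before, it boils down to two commands on the self-hosted server. A minimal sketch, assuming the nginx server accepts key-based root logins as in the examples below:

# generate a key pair on the self-hosted server (accept the defaults)
ssh-keygen -t ed25519

# install the public key into the nginx server's authorized_keys
ssh-copy-id root@myNginxServer.au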

You can verify this is working by running ssh root@myNginxServer.au on your self-hosted server and seeing it log in automatically without a password.

Then test your port forwarding with the following command:

ssh root@myNginxServer.au -R 8080:127.0.0.1:80 -R 8443:127.0.0.1:443

The -R argument opens port 8080 on the remote server and forwards all of its traffic to port 80 on the local machine; the 127.0.0.1:80 part is the destination on the self-hosted server. I’ve included two forwards in this command, for both HTTP and HTTPS. By default the remote end only listens on localhost, so only the nginx server itself can connect to these ports, but you could open them to the whole world by binding to 0.0.0.0 (e.g. -R 0.0.0.0:8080:127.0.0.1:80).
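
One caveat if you do bind to 0.0.0.0: OpenSSH won’t let a remote forward listen on anything other than localhost unless GatewayPorts is enabled on the server. A rough sketch of turning it on, run on the nginx server (assumes the stock sshd_config location; the service may be called sshd on some distros):

# allow ssh clients to choose the bind address for remote forwards
echo 'GatewayPorts clientspecified' | sudo tee -a /etc/ssh/sshd_config

# restart sshd to pick up the change
sudo systemctl restart ssh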


How to set up a persistent SSH tunnel/port forward with systemd

Next, create a systemd service on the self-hosted server to maintain the tunnel.

I borrowed these instructions from Jay Ta’ala’s notes and customised them to suit:

sudo vim /etc/systemd/system/ssh-tunnel-persistent.service

And paste:

[Unit]
Description=Expose local ports 80/443 on remote port 8080/8443
After=network.target
 
[Service]
Restart=on-failure
RestartSec=5
ExecStart=/usr/bin/ssh -NTC -o ServerAliveInterval=60 -o ExitOnForwardFailure=yes -R 8080:127.0.0.1:80 -R 8443:127.0.0.1:443 root@myNginxServer.au
 
[Install]
WantedBy=multi-user.target

You can then start the systemd service/ssh tunnel with:

# reload changes from disk after you edited them
sudo systemctl daemon-reload

# enable the service on system boot
sudo systemctl enable ssh-tunnel-persistent.service 

# start the tunnel
sudo systemctl start ssh-tunnel-persistent.service
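
To check the tunnel actually came up, you can ask systemd for the service status on the self-hosted server, then confirm the forwarded ports are listening on the nginx server. The ports and hostname here are just the ones from the examples above:

# on the self-hosted server: is the tunnel service running?
sudo systemctl status ssh-tunnel-persistent.service

# on the nginx server: are the forwarded ports listening on localhost?
ss -tlnp | grep -E ':8080|:8443'

# on the nginx server: does a request make it through the tunnel?
curl -I http://localhost:8080/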

My observations using SSH tunneling

If all is working, those steps should now be forwarding traffic to your self-hosted server.

Initially this was difficult to set up because of the vagueness in the docs about whether to use -L or -R, but once it was running it has been fine.

The systemd service works well for maintaining the connection and restarting it when it drops. I can reboot my nginx proxy and see the tunnel re-establish shortly afterward. My high-level understanding is that once the keepalives sent every ServerAliveInterval=60 seconds go unanswered, the ssh command realises the connection has dropped and exits, and systemd then restarts the service ad infinitum.

You can adjust the ssh command to suit. There’s probably not much point enabling compression (the -C flag), because the traffic is likely to already be compressed. But you could tweak the timeouts to your preference, as sketched below.
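
For example, here’s a hypothetical variant of the ExecStart line that drops compression and gives up on a dead connection sooner; the exact numbers are just a starting point:

/usr/bin/ssh -NT \
  -o ServerAliveInterval=30 -o ServerAliveCountMax=2 \
  -o ExitOnForwardFailure=yes \
  -R 8080:127.0.0.1:80 -R 8443:127.0.0.1:443 \
  root@myNginxServer.au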

The easiest way to validate email in React

Email validation is one of those things conventional frontend wisdom says to steer clear of, because:

  1. email works in endlessly complex ways, and
  2. even if an email address looks valid, it might not exist on the remote server.

This is backend territory.

However.

We can hijack the browser’s built-in email validation in <input type="email"> fields to give the user a better experience.


CSS validation

At its simplest, <input type="email"> fields expose the :valid and :invalid pseudo-classes.

We can change the style of our inputs when the email is invalid:

input:invalid {
  border-color: red;
}

Nice. Zero lines of Javascript. This is by far the best way to do form validation in the year ${year}.


Plain Javascript email validation

The Constraint Validation API exposes all of this same built-in browser validation to Javascript.

To check if your email is valid in plain JS:

const emailField = document.querySelector('.myEmailInput');
const isValid = emailField.validity ? !emailField.validity.typeMismatch : true;

This works in all the latest browsers and falls back to true in those that don’t support the API.


React email validation hook

Lol just kidding, you don’t need a hook. Check this out:

const [emailIsValid, setEmailIsValid] = useState(false);

return <input
  type="email"
  onChange={e => setEmailIsValid(e.target.validity ? !e.target.validity.typeMismatch : true)}
  />

We can use the same native DOM API to check the validity of the field as the user types.

This is by far the fastest way to validate email in React, requires almost no code, and kicks all the responsibility back to the browser to deal with the hard problems.


More form validation

Email validation is hard, and all browsers have slightly different validation functions.

But for all the complexity of email validation, it’s best to trust the browser vendor implementation because it’s the one that’s been tested on the 4+ billion internet users out there.

For further reading, MDN has great docs for validating forms in native CSS and Javascript code.

You should also inspect the e.target.validity object for clues to what else you can validate using native browser functions.

It’s a brave new world, friends.

Isolating vlog speech using Krisp AI

On a steam train ride with my mum, she starts telling a story about the trains from when she was young. So, thinking quickly, I whip out my phone, press record, and get her to hold it so I can actually capture her voice over the background noise.

It comes out distorted to ever loving shit.

A shot of DaVinci Resolve 17, video editor, with a video and audio track

So this sucks. I have to go back to the original onboard camera mic but it’s SO loud with all the engine noise, cabin chatter, and clanking in the background. Even tweaking all the knobs, you can barely hear mum at all.

Are there any AI tools to isolate voice? I remembered I’ve been using Krisp at work to cut down on the construction noise from next door. Maybe if I run the audio through that…

So I set the sound output from my video editor to go through Krisp, plug in my recorder, and play it through. It’s tinny, it’s dropped some quieter bits, but it’s totally legible! Holy cow.

Now I’ve got an audio track of mum’s voice isolated from the carriage noise. I can mix it back together with the original to boost the voice portion and quieten down the rest. This is kinda a game changer for shitty vlog audio.

This is a pretty convoluted workflow, so it’s really only useful for emergencies like this. But I’m really happy that it managed to recover a happy little memory. And I hope one day Krisp (or someone else, I don’t mind) releases either a standalone audio tool or a plugin for DaVinci Resolve.

As an aside, the Google Recorder app is officially off my christmas list. Any recommendations for a better one?

Why is it called pound sterling?

I was curious why the pound sterling was called such.

Turns out it’s from Latin, lībra pondō (“a pound by weight”), as in a pound of silver. Has the same roots as the lira.

The “£” symbol is a blackletter “L” for “libra”, similar to the imperial weight “lb”. 🤯

A Google search converting 100 lbs to AUD. 100 pounds sterling is approx. 180.15 Australian dollars at time of publication.

Occasionally when I’m converting currencies I’ll “convert 100 lbs to aud” for a laugh. Google understands I mean “gbp” and that’s always amused me. Now it feels more connected.

I wasn’t sure if this is one of those things Europeans just know that was never taught in Australia, but it does seem pretty esoteric. Shawn hadn’t heard it before either, so maybe not.

Source: a couple of fascinating articles: