The One and the Many

httpd firewall

I've run an httpd on my home server for a long time. I use it to share small files with friends and run some simple sites (including this site for a while). It's nice having quick and easy access to an httpd. I can copy a file to make it available to someone in the same way as I copy a file from one directory to another.

However, for a while now I've been feeling a little uncomfortable with it being a gateway into my home network. It may be paranoia, but I worry that if (when) a vulnerability is found in the software running on it, my files would be exposed.

After mulling over various alternatives I decided I really did want an httpd at home. The main reason is that I don't want a lot of the data to be off site. From a privacy perspective I don't feel good about it sitting on any third party's server. It would be convenient, sure, but not worth it to me.

The question then was how to run an httpd at home while preventing access to it from most of the internet. I eventually settled on simply firewalling it off: by default I now block every IP except for a whitelisted few that I think should have access.
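To make the idea concrete, here is a minimal sketch of that kind of default-deny setup in Python. The chain name, addresses, and structure are placeholders I made up for illustration; my actual rules differ in the details.

```python
#!/usr/bin/env python3
# Illustrative sketch of a default-deny whitelist for ports 80/443.
# The chain name and addresses are placeholders, not my real config.
# Needs root to actually run.
import subprocess

CHAIN = "httpd-whitelist"
WHITELIST = ["203.0.113.5", "198.51.100.7"]  # example addresses only

def iptables(*args, check=True):
    return subprocess.run(["iptables", *args], check=check)

# Create the chain (ignore the error if it already exists) and empty it.
iptables("-N", CHAIN, check=False)
iptables("-F", CHAIN)

# Send inbound HTTP/HTTPS traffic through the chain. "-C" checks whether
# the jump rule is already present so it isn't added twice.
jump = ["-p", "tcp", "-m", "multiport", "--dports", "80,443", "-j", CHAIN]
if subprocess.run(["iptables", "-C", "INPUT", *jump]).returncode != 0:
    iptables("-A", "INPUT", *jump)

# Accept the whitelisted addresses, then drop everyone else.
for ip in WHITELIST:
    iptables("-A", CHAIN, "-s", ip, "-j", "ACCEPT")
iptables("-A", CHAIN, "-j", "DROP")
```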

Maintaining a whitelist by hand would be tedious though, especially once you take mobile devices and dynamic IPs into account. I really didn't want to waste time maintaining the list, so I came up with a solution that makes it largely automatic.

I run a small IRC network, and almost everyone I want to have access to my httpd is on it. I created a small bot that watches client connection notices on this network and records the IP of each client to a file (if it's not already there). The bot has operator status so it can see these notices; normal users cannot.
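A rough sketch of what such a bot can look like is below. This is not my actual code: the server, credentials, and file path are placeholders, and the regex assumes a ratbox-style "Client connecting" server notice with the client's IP in square brackets, which varies between ircds and configurations.

```python
#!/usr/bin/env python3
# Sketch of an IP-recording IRC bot. Placeholder server, credentials,
# and paths; notice format assumed to be ratbox-style.
import re
import socket

HOST, PORT = "irc.example.com", 6667                     # placeholder server
NICK, OPER_NAME, OPER_PASS = "ipbot", "ipbot", "secret"   # placeholders
IP_FILE = "/var/lib/ipbot/ips.txt"                        # placeholder path

CONNECT_RE = re.compile(r"Client connecting: \S+ \(\S+\) \[([0-9a-fA-F.:]+)\]")

def record(ip):
    """Append the IP to the file if it is not already present."""
    try:
        with open(IP_FILE) as f:
            known = {line.strip() for line in f}
    except FileNotFoundError:
        known = set()
    if ip not in known:
        with open(IP_FILE, "a") as f:
            f.write(ip + "\n")

sock = socket.create_connection((HOST, PORT))

def send(line):
    sock.sendall((line + "\r\n").encode())

send("NICK " + NICK)
send("USER {0} 0 * :{0}".format(NICK))

buf = b""
while True:
    data = sock.recv(4096)
    if not data:
        break
    buf += data
    while b"\r\n" in buf:
        line, buf = buf.split(b"\r\n", 1)
        text = line.decode(errors="replace")
        if text.startswith("PING"):
            send("PONG" + text[4:])
        elif " 001 " in text:
            # Registered; oper up so the bot receives connection notices.
            send("OPER {} {}".format(OPER_NAME, OPER_PASS))
        else:
            m = CONNECT_RE.search(text)
            if m:
                record(m.group(1))
```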

I needed to modify the ircd I use, ircd-ratbox, to make these notices work for my purpose. Vanilla ratbox can notify operators about client connections, but only about clients connecting to the server the operator is on. On an IRC network with multiple servers, operators do not hear about connections to remote servers, so without modification my bot would only see some of the connections to the network and would never grant access to the clients it missed.

To change this, I added a hook to ratbox that fires when it introduces a client, and wrote a module that listens for it. When a client connects locally, the module sends a message to each connected server. When a server receives this message from another server, it reports a notice locally saying a client connected, and passes the message on to its own connected servers (other than the one it received the message from). As a result, every server knows when a client connects anywhere on the network.
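The real change is a C module inside ircd-ratbox, so the following is only a toy illustration of the propagation rule it implements: a connect is announced to every directly linked server, and each server re-forwards it to all of its links except the one it arrived on, so (since an IRC network is a tree) the notice reaches every server exactly once.

```python
# Toy model of the connect-notice propagation rule. Names are invented
# for the example; this is not ratbox code.

class Server:
    def __init__(self, name):
        self.name = name
        self.links = []  # directly connected servers

    def link(self, other):
        self.links.append(other)
        other.links.append(self)

    def local_client_connected(self, client):
        # A client connected here: tell every directly linked server.
        print(f"{self.name}: local connect {client}")
        for peer in self.links:
            peer.remote_client_connected(client, origin=self.name, came_from=self)

    def remote_client_connected(self, client, origin, came_from):
        # Report the notice locally, then forward to every link except
        # the server we heard it from.
        print(f"{self.name}: client {client} connected on {origin}")
        for peer in self.links:
            if peer is not came_from:
                peer.remote_client_connected(client, origin, came_from=self)

# Example: hub <-> leaf1, hub <-> leaf2. A connect on leaf1 is seen by all.
hub, leaf1, leaf2 = Server("hub"), Server("leaf1"), Server("leaf2")
hub.link(leaf1)
hub.link(leaf2)
leaf1.local_client_connected("alice[192.0.2.10]")
```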

Once the bot could see all client connections, I wrote a daemon that watches the IP file and updates the iptables rules when it changes, using inotify(7) to detect the changes. On each change it ensures every IP in the file is permitted by an iptables rule and revokes access for any IP no longer listed. Essentially it keeps the iptables rules (for inbound ports 80 and 443) in sync with the IPs in the file maintained by my IRC bot.
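A simplified sketch of that sync loop is below. My real daemon reacts via inotify(7); to keep this sketch dependency-free it just polls the file's modification time, and it reuses the placeholder chain name and file path from the sketches above. It rebuilds the chain from scratch on each change, which is simpler than computing a diff.

```python
#!/usr/bin/env python3
# Sketch of the iptables sync daemon (polling instead of inotify to stay
# dependency-free). Chain name and file path are placeholders.
import os
import subprocess
import time

CHAIN = "httpd-whitelist"
IP_FILE = "/var/lib/ipbot/ips.txt"

def load_ips():
    try:
        with open(IP_FILE) as f:
            return [line.strip() for line in f if line.strip()]
    except FileNotFoundError:
        return []

def sync(ips):
    # Rebuild the chain: accept each listed IP, then drop everyone else.
    # The INPUT jump (set up earlier) already restricts this to 80/443.
    subprocess.run(["iptables", "-F", CHAIN], check=True)
    for ip in ips:
        subprocess.run(["iptables", "-A", CHAIN, "-s", ip, "-j", "ACCEPT"],
                       check=True)
    subprocess.run(["iptables", "-A", CHAIN, "-j", "DROP"], check=True)

last_mtime = None
while True:
    try:
        mtime = os.stat(IP_FILE).st_mtime
    except FileNotFoundError:
        mtime = None
    if mtime != last_mtime:
        sync(load_ips())
        last_mtime = mtime
    time.sleep(5)
```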

This means anyone connecting to IRC can immediately access my httpd. It is security through obscurity, because there is no restriction on who can connect to the IRC network, but in practice there are zero random connections, so only people I know end up permitted. Exactly what I want. And if I need access from a remote location, I can pop onto IRC and gain it.

A flaw with this system is that it is currently hard to know when I can remove an IP from the list. I record when I first see an IP, but not when it was last seen, so over time the list will accumulate IPs that people I know are no longer using (both because of travel and because of dynamic addressing). I may end up flushing the list periodically, or start tracking when each IP was last seen.
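If I do go the last-seen route, it might look something like this: store a timestamp per IP, refresh it on every connect, and drop entries that go stale. This is only one possible approach, not something I've implemented.

```python
# Possible last-seen tracking: refresh a timestamp per IP on each connect
# and expire entries not seen within a cutoff. Purely illustrative.
import time

EXPIRY = 90 * 24 * 3600  # e.g. forget IPs not seen for 90 days

def touch(entries, ip, now=None):
    """Record that ip was just seen."""
    entries[ip] = now if now is not None else time.time()

def expire(entries, now=None):
    """Return only the entries seen within the last EXPIRY seconds."""
    now = now if now is not None else time.time()
    return {ip: ts for ip, ts in entries.items() if now - ts < EXPIRY}
```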

For now this is an improvement. Now I need to think about how I can protect access to the IRC servers in a similar way!

Update:

One other problem with this setup: I have a Let's Encrypt certificate for the host I intend to keep using for my (now firewalled) sites. After firewalling off my httpd, I thought that there would be problems renewing this certificate. Why? Because when I set up the certificate, I had to make a URL available on the host to prove I control it. As part of the validation process, the Let's Encrypt service accesses this URL.

The problem, I thought, was that this URL is no longer accessible to the Let's Encrypt service (they don't publish a list of the IPs they make requests from, so I couldn't add them to my whitelist).

I did some thinking today about how I could work around this (possibly using their DNS validation method, or temporarily allowing all hosts while running a renewal). But I discovered that Let's Encrypt only needs this URL challenge when setting up your certificate initially. For renewals (and other actions) you don't have to prove your control of the host again. You already proved it. This means I don't need to have the URL accessible any more and there is actually no problem at all!

Further update:

I was incorrect. The authorization step is not a one-off as I thought: the authorization you receive from proving you control a domain has an expiry time. At one point this was 300 days, but they are in the process of decreasing it, apparently for security reasons. To keep renewals working with my setup, I have been using this DNS hook, which means I prove control of the domain through Cloudflare (which hosts my DNS).
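The hook I use is an existing project, not my own code, but to give a flavour of how DNS validation can work, here is a rough sketch assuming certbot's --manual-auth-hook interface (which exposes CERTBOT_DOMAIN and CERTBOT_VALIDATION) and a Cloudflare API token; the zone ID and token are placeholders. It would be invoked with something like `certbot certonly --manual --preferred-challenges dns --manual-auth-hook ./cf-hook.py -d example.com`.

```python
#!/usr/bin/env python3
# Sketch of a DNS-01 auth hook for certbot using the Cloudflare API.
# Not the hook I actually use; token and zone ID are placeholders.
import json
import os
import time
import urllib.request

CF_TOKEN = os.environ["CF_API_TOKEN"]  # Cloudflare API token (placeholder)
CF_ZONE = os.environ["CF_ZONE_ID"]     # zone ID for the domain (placeholder)

# certbot sets these for the domain being validated.
domain = os.environ["CERTBOT_DOMAIN"]
validation = os.environ["CERTBOT_VALIDATION"]

# Create the _acme-challenge TXT record Let's Encrypt will look for.
record = {
    "type": "TXT",
    "name": "_acme-challenge." + domain,
    "content": validation,
    "ttl": 120,
}
req = urllib.request.Request(
    "https://api.cloudflare.com/client/v4/zones/%s/dns_records" % CF_ZONE,
    data=json.dumps(record).encode(),
    headers={
        "Authorization": "Bearer " + CF_TOKEN,
        "Content-Type": "application/json",
    },
    method="POST",
)
urllib.request.urlopen(req)

# Give the record a little time to propagate before certbot checks it.
time.sleep(30)
```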
