Setting up my HomeServer – Part 3

I have so far written about:

  1. Part 1: the hardware, operating system, and deployment tooling
  2. Part 2: networking and exposing the server to the internet

and this post is about how I use Nomad to deploy software.

But before that…

I have some updates in the networking department. I had mentioned in Part 2 that I use Nginx as a reverse proxy to route traffic from the internet to my public apps like Misskey.

Picking up cues from Nemo's setup, I experimented with Traefik and ended up replacing Nginx with it. The switch was made easier by the fact that Traefik natively supports Nomad service discovery, which means I don't have to run something like Consul just to map services to the Traefik proxy.

Running Traefik

I was running Nginx as a systemd service on the OCI virtual machine, and Nomad was limited to running services on my Intel NUC. While moving to Traefik, I configured the OCI VM as a Nomad client and attached it to the Nomad "cluster". Since Nomad runs over the Tailscale network, I didn't have to fiddle with any of the networking/firewall settings on the OCI VM, which kept the setup simple.

Traefik runs as a Docker container on the VM with the "host" network mode, so it listens on the VM's ports 80/443, which are open to the outside internet. I have specifically mapped the Traefik dashboard to the "tailscale" host network, allowing me to access the dashboard via SSH tunneling without having to open port 8080 to the rest of the world.
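
I'm not reproducing the full Nomad agent config here, but a minimal sketch of the client stanza on the VM, assuming a standard data directory and the default Tailscale interface name, would look something like this. It joins the NUC's Nomad server over Tailscale and defines the "tailscale" host network that the dashboard port below is bound to.

// Sketch of the Nomad client config on the OCI VM (placeholder values)

    datacenter = "dc1"
    data_dir   = "/opt/nomad/data"
    bind_addr  = "<vm-tailscale-ip>"        # placeholder: the VM's IP on the Tailscale network

    client {
      enabled = true
      servers = ["intel-nuc:4647"]          # the NUC's MagicDNS name resolves over Tailscale

      host_network "tailscale" {
        interface = "tailscale0"            # keeps the admin port off the public interface
      }
    }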

// Nomad job configuration for the Traefik task

    network {
      port "http" {
        static = 80
      }
      port "https" {
        static = 443
      }

      port "admin" {
        static = 8080
        host_network = "tailscale"
      }
    }

    task "server" {
      driver = "docker"
      config {
        image = "traefik:2.9"
        ports = ["admin", "http", "https"]
        network_mode = "host"
        volumes = [
          "local/traefik.toml:/etc/traefik/traefik.toml",
          "local/ssl/cert.pem:/etc/ssl/cert.pem",
          "local/ssl/private.key:/etc/ssl/private.key",
        ]
      }
    }
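
The traefik.toml mounted by the job above holds Traefik's static configuration. I'm not reproducing mine verbatim; the following is a minimal sketch of what it can look like with the Nomad provider enabled, assuming the Nomad API is reachable locally on its default port 4646 (entrypoint names are illustrative).

// Sketch of a Traefik static configuration using the Nomad provider

    [entryPoints]
      [entryPoints.web]
        address = ":80"
      [entryPoints.websecure]
        address = ":443"
      [entryPoints.traefik]
        address = ":8080"               # dashboard port, reachable only over the "tailscale" host network

    [api]
      dashboard = true
      insecure  = true                  # tolerable here because 8080 is never exposed publicly

    # Native Nomad service discovery, so no Consul is needed
    [providers.nomad]
      exposedByDefault = false          # only route services that opt in with traefik tags
      [providers.nomad.endpoint]
        address = "http://127.0.0.1:4646"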

Deploying Applications

All the services are written as Nomad job specifications, each with its own network config and service definition. I then deploy the software from my laptop to my home server by running Terraform.
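
As an illustration (not the literal specs I run), a public web service ends up looking roughly like the snippet below: the group gets a network block, and the service is registered with provider = "nomad" plus Traefik tags, which the Traefik instance on the OCI VM picks up and turns into a route. The service name, port, and domain are placeholders.

// Sketch of a job's network and service definition with Traefik routing tags

    network {
      port "web" {
        to = 3000                       # app port inside the container, illustrative
      }
    }

    service {
      name     = "misskey"
      port     = "web"
      provider = "nomad"                # register in Nomad's service catalog, not Consul
      tags = [
        "traefik.enable=true",
        "traefik.http.routers.misskey.rule=Host(`social.example.com`)",
        "traefik.http.routers.misskey.entrypoints=websecure",
      ]
    }

The Terraform side is small: the Nomad provider pointed at the cluster over Tailscale, and one nomad_job resource per job file (the address and paths are again illustrative).

// Sketch of the Terraform resources used for deployment

    provider "nomad" {
      address = "http://intel-nuc:4646"           # Nomad API reachable via Tailscale MagicDNS
    }

    resource "nomad_job" "misskey" {
      jobspec = file("${path.module}/jobs/misskey.nomad")
    }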

Ideally, I should be creating the Oracle VM using Terraform as well. But the entire journey has been such a series of trial-and-error experiments that I haven't gotten around to it. I think I will migrate to a Terraform-defined/created VM once I figure out how to get Nomad installed and set up without manually SSHing into the VM.

Typical Setup Workflow

Since most of what I am dealing with are web services, I haven't run into a situation where I had to handle a non-Docker deployment yet. But I rest easy knowing that when the day comes, I can rely on Nomad's "exec" and "raw_exec" drivers to run them with pretty much the same workflow.
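
A hypothetical example of what that would look like, with a made-up task name and script path: the job structure stays the same, only the driver and its config change.

// Sketch of a non-Docker task using the raw_exec driver

    task "backup" {
      driver = "raw_exec"
      config {
        command = "/usr/local/bin/backup.sh"      # hypothetical script on the host
        args    = ["--target", "local/backups"]
      }
    }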

Dealing with Secrets

One of the biggest concerns with all of this is dealing with secrets like DB credentials, API keys, etc. For example, how do I supply the DB username and password to the Nomad job running my application without storing them in the job configuration files, which I keep in version control?

There are many ways to do it, from defining them as Terraform variables and storing them in a git-ignored file within the repo, to deploying HashiCorp Vault and using the Vault-Nomad integration (which I tried and found to be overkill).

I have chosen the simpler solution of storing them as Nomad Variables. I create them by hand using the Nomad UI, and they are scoped so that only the specific task can read them.

An example set of secrets
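
The UI is just one way to create them; the same variable could also be written with the Nomad CLI. The path convention nomad/jobs/<job>/<group>/<task> is what gives that particular task automatic read access, which is why it mirrors the path used in the template below (the values here are placeholders).

// Creating the same variable from the CLI instead of the UI

    nomad var put nomad/jobs/owncloud/owncloud/owncloud \
      db_name=owncloud db_username=owncloud db_password=<secret>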

These are then injected into the service's container as environment variables using a template block with the nomadVar function.

// Nomad config for getting secrets from Nomad Variables

      template {
        data = <<EOH
{{ with nomadVar "nomad/jobs/owncloud/owncloud/owncloud" }}
OWNCLOUD_DB_NAME={{.db_name}}
OWNCLOUD_DB_USERNAME={{.db_username}}
OWNCLOUD_DB_PASSWORD={{.db_password}}
OWNCLOUD_ADMIN_USERNAME={{.admin_username}}
OWNCLOUD_ADMIN_PASSWORD={{.admin_password}}
{{ end }}

{{ range nomadService "db" }}
OWNCLOUD_DB_HOST={{.Address}}
{{ end }}

{{ range nomadService "redis" }}
OWNCLOUD_REDIS_HOST={{.Address}}
{{ end }}
EOH
        env = true
        destination = "local/env"
      }

Accessing Private Services

While this entire project began with trying to self-host a Misskey instance as my personal Mastodon alternative, I have started using the setup to run some private services as well, like Node-RED, which runs automations such as my RSS-to-ActivityPub bots.

I don't expose these to the internet, or even to the Tailscale network at the moment. They run on my home network on a dynamic port assigned by Nomad, and I just access them through the IP:PORT that Nomad generates.

Node-RED running at a local IP and a dynamic (random) port
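
For reference, a dynamically allocated port in the job spec looks roughly like this: with no static value, Nomad picks a free host port and maps it to the app's port inside the container. The port name and Node-RED's default port 1880 are the only assumptions here.

// Sketch of a dynamic port for a private service

    network {
      port "http" {
        to = 1880                       # Node-RED's default port inside the container
      }                                 # no "static", so Nomad assigns a random host port
    }

    service {
      name     = "nodered"
      port     = "http"
      provider = "nomad"
    }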

I will probably migrate these to the Tailscale network if I start traveling and still want access to them. But for now, they are restricted to my home network.

Conclusion

It has been a wonderful journey figuring all of this out over the last couple of weeks, and running the home server has been a great source of satisfaction. With Docker and Nomad, it has been really easy to try out new services and set them up quickly.

I woke up one Sunday wanting to set up Pi-hole for blocking ads, and I had the whole house running through it in 30 minutes. I have found a new kind of freedom with this setup. I see it as a small digital balcony garden on the internet.

References

  1. Understanding Networking in Nomad by Karan Sharma
  2. Translating Docker Compose & Kubernetes to Nomad by Luiz Aoqui
  3. Nomad Past Present and Future by Luiz Aoqui

Setting up my HomeServer – Part 2

In Part 1, I wrote about the hardware, operating system, and the software deployment method and tools used in my home server. In this one, I am going to cover the topic I am least experienced in: networking.

Problem: Exposing the server to the internet

At the heart of it, this is the only thing in networking I really wanted to achieve. Ideally, the following is all I should have needed:

  1. Log in to my router
  2. Make the IP of my Intel NUC static, so that DHCP doesn't assign a different value every time it reboots
  3. Set up port forwarding for 80 (HTTP) and 443 (HTTPS) on the router to the NUC.
  4. Use something like DuckDNS to update my domain records to point to my public address.

I did the first 3 steps and tried to hit my IP. Nothing. After some searching on the internet, I came to realize that my ISP doesn't provide public IPs for home connections anymore and my router sits behind some kind of NAT (I don't know the specifics).

Before I outline how everything is set up, I want to highlight 2 people and their self-hosting related blogs:

  1. Abhay Rana aka Nemo’s Setup
  2. Karan Sharma’s setup

Their blogs provided a huge amount of inspiration and a number of ideas.

Solution: Tailscale + OCI Free VM

After asking around, I settled on using Tailscale and a free VM on Oracle Cloud Infrastructure to route traffic from the internet, through the VM, to my home server.

Here is how Tailscale helps:

  1. Every device on which I install Tailscale and log in becomes part of my private network.
  2. I added my Intel NUC and the Oracle VM to the Tailscale network and added the public IP of the Oracle VM to the DNS records of my domain.
  3. Now requests to my domain go to the OCI VM, which then forwards them to my NUC via the Tailscale network.

Some Tips

  1. Tailscale has something called MagicDNS which, once turned on, allows accessing devices by their names instead of their IPs. This makes configuring things much easier.
  2. Oracle VMs by default have all of their ports blocked except for 22 (SSH). So after installing a web server like Nginx or Apache, 2 things need to be done:
    • Add ingress rules allowing traffic to ports 80 and 443 for the Virtual Cloud Network that the VM is configured with
    • Configure the firewall on the VM to open ports 80 and 443 (ufw allow <port>)

I think there are many ways to forward requests from one machine to another (OCI instance to home server in this case). I have currently settled on the one most familiar to me: using Nginx as a reverse proxy.

There are 2 types of applications I plan to host:

  1. Public applications like Misskey, which can be accessed by anyone who wants to.
  2. Private applications like Node-RED, which I am going to access 99.99% of the time from my laptop on my home network.

I deploy my public applications to the Tailscale network IP on my home server and make them listen on a specific port. Then, in the Nginx reverse-proxy configuration on the OCI VM, I set the proxy_pass to something like http://intel-nuc:<app-port>. Since I am using Tailscale's MagicDNS, I don't have to hard-code IP values here.
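
A minimal sketch of what such a server block on the VM could look like; the domain, app port, and certificate paths are placeholders, and the upstream hostname resolves over MagicDNS.

// Hypothetical Nginx reverse-proxy config on the OCI VM

    server {
        listen 443 ssl;
        server_name social.example.com;              # placeholder domain

        ssl_certificate     /etc/ssl/cert.pem;
        ssl_certificate_key /etc/ssl/private.key;

        location / {
            proxy_pass http://intel-nuc:3000;        # app port on the NUC, illustrative
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }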

For private applications, I simply run them on the Intel NUC's default local network, which is just my router and the other devices connected to it (including my laptop), and access them locally.

Up next…

Now that the connectivity is sorted out, the next part is deploying the actual applications and related services. I use Nomad to do that. In the next post, I will share my "architecture", how I use Nomad and Terraform to do deployments, and some tricks I learnt along the way.