HinD - Hashistack-in-Docker

+       ___                                              ·
+      /\  \                      ___                    ·
+      \ \--\       ___          /\  \        __ __      ·
+       \ \--\     /\__\         \ \--\     / __ \__\    ·
+   ___ /  \--\   / /__/     _____\ \--\   / /__\ \__\   ·
+  /\_ / /\ \__\ /  \ _\    / ______ \__\ / /__/ \ |__|  ·
+  \ \/ /_ \/__/ \/\ \ _\__ \ \__\  \/__/ \ \__\ / /__/  ·
+   \  /__/         \ \/\__\ \ \__\        \ \__/ /__/   ·
+    \ \ _\          \  /_ /  \ \__\        \ \/ /__/    ·
+     \ \__\         / /_ /    \/__/         \  /__/     ·
+      \/__/         \/__/                    \/__/      ·
+                                                        ·

install

Installs nomad, consul, and caddyserver (router) together as a mini cluster running inside a single podman container.

Nomad jobs will run as podman containers on the VM itself, orchestrated by nomad, leveraging /run/podman/podman.sock.
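
A quick sanity check (assuming a systemd-based linux VM) that the podman API socket nomad will use is actually present:

sudo systemctl enable --now podman.socket
sudo ls -l /run/podman/podman.sock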

The brilliant consul-template will be used as "glue" between consul and caddyserver -- turning caddyserver into an always up-to-date reverse proxy router from incoming requests' Server Name Indication (SNI) to running containers :)

Setup and run

This will "bootstrap" your cluster with a private, unique NOMAD_TOKEN, and sudo podman run a new container with the hind service into the background. (source)

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh

Minimal requirements:

  • VM you can ssh into
  • VM with podman package
  • if using a firewall (like ferm, etc.), make sure the following ports are open from the VM to the world (a minimal firewall sketch follows this list):
    • 443 - https
    • 80 - http (load balancer will auto-upgrade/redirect to https) @see #VM-Administration section for more info.
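
For example, assuming a VM that uses ufw (adapt to whatever firewall you actually run), opening those two ports might look like:

# ufw example -- allow inbound http + https to the VM
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp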

https

The ideal experience is that you point a dns wildcard at the IP address of the VM running your hind system.

This allows automatically-created hostnames from a CI/CD pipeline's [deploy] stage to use the [git group/organization + repository name + branch name] to create a nice semantic DNS hostname for your webapps to run as and load from -- and everything will "just work".

For example, *.example.com DNS wildcard pointing to the VM where hind is running, will allow https://myteam-my-repo-name-my-branch.example.com to "just work".
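
A quick way to sanity-check the wildcard (the hostnames here are just examples) is to resolve an arbitrary subdomain and confirm it answers with your VM's IP:

# any subdomain under the wildcard should resolve to the hind VM's address
dig +short myteam-my-repo-name-my-branch.example.com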

We use caddy (which incorporates zerossl and Let's Encrypt) to create single-host https certs on demand, as service discovery from consul announces new hostnames.

build locally - if desired (not required)

This is our Dockerfile

git clone https://github.com/internetarchive/hind.git
cd hind
sudo podman build --network=host -t ghcr.io/internetarchive/hind:main .

Setting up jobs

We suggest you use the same approach mentioned in the nomad repo README.md, which ultimately uses a templated project.nomad file.

Nicely Working Features

We use this in multiple places for nomad clusters at archive.org. We pair it with our fully templatized project.nomad. Working nicely:

  • secrets, tokens
  • persistent volumes
  • deploys with multiple public ports
  • and more -- everything here

Nomad credentials

Get your nomad access credentials so you can run nomad status anywhere you have the nomad binary downloaded (including a home mac/laptop, etc.)

From a shell on your VM:

export NOMAD_ADDR=https://$(hostname -f)
export NOMAD_TOKEN=$(sudo podman run --rm --secret NOMAD_TOKEN,type=env hind sh -c 'echo $NOMAD_TOKEN')

Then, nomad status should work. (Download the nomad binary to the VM or your home machine if/as needed.)
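
If you still need the nomad binary, a minimal fetch for a linux amd64 machine looks like the following (the version number is just an example -- pick a current release):

# fetch the nomad CLI from HashiCorp's release site (example version)
curl -sSLo nomad.zip https://releases.hashicorp.com/nomad/1.7.7/nomad_1.7.7_linux_amd64.zip
unzip nomad.zip && sudo mv nomad /usr/local/bin/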

You can also open the NOMAD_ADDR (above) in a browser and enter your NOMAD_TOKEN.

You can try a trivial website job spec from the cloned repo:

# you can manually set NOMAD_VAR_BASE_DOMAIN to your wildcard DNS domain name if different from
# the domain of your NOMAD_ADDR
export NOMAD_VAR_BASE_DOMAIN=$(echo "$NOMAD_ADDR" |cut -f2- -d.)
nomad run https://internetarchive.github.io/hind/etc/hello-world.hcl
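
Once it deploys, you can confirm the job is running (assuming the spec's job name is hello-world):

nomad status               # list all jobs in the cluster
nomad status hello-world   # allocation details for this job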

Optional ways to extend your setup

Here are a few environment variables you can pass in to your initial install.sh run above (a combined example follows this list), eg:

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e ON_DEMAND_TLS_ASK=...
  • -e TRUSTED_PROXIES=[CIDR IP RANGE]
    • optionally allow certain X-Forwarded-* headers, otherwise defaults to private_ranges more info
  • -e NOMAD_ADDR_EXTRA=[HOSTNAME]
    • For 1+ extra, nicer https:// hostname(s) you'd like to use to talk to nomad, pass in hostname(s) in CSV format for us to set up.
  • -e ON_DEMAND_TLS_ASK=[URL]
  • -e CERTS_SELF_SIGNED=true
    • If you want to use caddy's tls internal, this will make self-signed certs, with caddy creating an internal Certificate Authority (CA). @see #self-signed-or-internal-ca below
  • -e ACME_DNS=true
  • -e CLIENT_ONLY_NODE=true
    • Set this if you want to set up a client-only VM (ie: it can run jobs/containers, but doesn't participate in leader elections & consensus protocols)
  • ...
    • other command line arguments to pass on to the main container's podman run invocation.
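
For example, a hypothetical install combining several of these options (the hostname and IP range are placeholders) might look like:

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- \
  -e CERTS_SELF_SIGNED=true \
  -e NOMAD_ADDR_EXTRA=nomad.example.com \
  -e TRUSTED_PROXIES=10.0.0.0/8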

GUI, Monitoring, Interacting

  • see nomad repo README.md for lots of ways to work with your deploys. There you can find details on how to check a deploy's status and logs, ssh into it, customized deploys, and more.
  • You can set up an ssh tunnel thru your VM so that you can see consul in a browser, eg:
nom-tunnel () {
  [ "$NOMAD_ADDR" = "" ] && echo "Please set NOMAD_ADDR environment variable first" && return
  # strip the https:// scheme to get the bare hostname
  local HOST=$(echo "$NOMAD_ADDR" |sed 's/^https*:\/\///')
  # forward local port 8500 to consul on the VM
  ssh -fNA -L 8500:localhost:8500 $HOST
}
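
Then run nom-tunnel and open http://localhost:8500 in a browser (8500 is consul's HTTP UI port, forwarded over the tunnel).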

Add more Virtual Machines to make a HinD cluster

The process is very similar to when you set up your first VM. This time, you pass in the first VM's hostname (already in the cluster), copy 2 secrets, and run the installer. You essentially run the shell commands below on your 2nd (or 3rd, etc.) VM.

FIRST=vm1.example.com
# copy secrets from $FIRST to this VM
ssh $FIRST 'sudo podman run --rm --secret HIND_C,type=env hind sh -c "echo -n \$HIND_C"' |sudo podman secret create HIND_C -
ssh $FIRST 'sudo podman run --rm --secret HIND_N,type=env hind sh -c "echo -n \$HIND_N"' |sudo podman secret create HIND_N -

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e FIRST=$FIRST
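
Once the installer finishes, you can verify the new VM joined from anywhere your nomad credentials are set (see "Nomad credentials" above):

nomad server members   # the new VM should appear (unless CLIENT_ONLY_NODE=true)
nomad node status      # the new VM should show up as a ready client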

Inspiration

Docker-in-Docker (dind) and kind:

for caddyserver + consul-connect:

VM Administration

Here are a few helpful admin scripts we use at archive.org -- some might be helpful for setting up your VM(s).

Problems?

  • An older OS (eg: ubuntu focal) may not enable podman.socket. If bootstrapping fails, on linux you can run:
sudo systemctl enable --now podman.socket
  • If the main podman run is not completing, check your podman version to see how recent it is. The nomad binary inside the setup container can segfault due to a perms change. You can either upgrade your podman version or try adding this install.sh CLI option:
--security-opt seccomp=unconfined
  • docker push repeatedly fails with deep "running out of memory" errors? Try:
sysctl net.core.netdev_max_backlog=30000
sysctl net.core.rmem_max=134217728
sysctl net.core.wmem_max=134217728

# to persist across reboots:
echo '
net.core.netdev_max_backlog=30000
net.core.rmem_max=134217728
net.core.wmem_max=134217728' |sudo tee /etc/sysctl.d/90-tcp-memory.conf

Miscellaneous

  • client IP addresses will be in request header 'X-Forwarded-For' (per caddy)
  • pop inside the HinD container:
sudo podman exec -it hind zsh
  • get list of consul services:
wget -qO- 'localhost:8500/v1/catalog/services?tags=1' | jq .
  • get caddy config:
wget -qO- localhost:2019/config/ | jq .
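  • verify consul cluster membership (consul runs inside the hind container):
sudo podman exec -it hind consul members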

Maintenance:

  • If your podman seems to be running out of locks: see the num_locks part in install.sh and consider increasing it, or opening a GitHub issue
# https://docs.podman.io/en/latest/markdown/podman-system-renumber.1.html
podman -r system renumber
  • If your HinD container seems to be unable to fork processes: see the --pids-limit CLI arg part in install.sh and consider increasing it, or opening a GitHub issue
# check HinD container's current pids limit:
cat /sys/fs/cgroup/$(podman inspect --format '{{.State.CgroupPath}}' hind)/pids.max

Self-Signed or Internal CA

  • devs just need to trust Caddy's root CA cert once (Caddy can generate it for you)
  • this is easier for internal dev environments
https://*.example.com {
    # use caddy's internal certificate authority -- no ACME challenges needed
    tls internal
    reverse_proxy ...
}
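
With HinD, the snippet above just illustrates what caddy's tls internal directive does -- the CERTS_SELF_SIGNED=true install option (see "Optional ways to extend your setup" above) is what turns this on, eg:

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e CERTS_SELF_SIGNED=true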

When you use Caddy tls internal, caddy automatically creates its own Certificate Authority (CA) with:

  • A root CA certificate
  • A private key for signing

This happens automatically on first run. The root CA cert is stored at:

/pv/CERTS/pki/authorities/local/root.crt
  • Devs install/trust Caddy's root CA cert one time in their browser/OS.
  • Caddy's internal CA signs certificates for *.example.com, foo.example.com, bar-branch-123.example.com, etc.
  • Browser sees these certs are signed by the already-trusted Caddy CA.
  • Zero warnings, zero clicks, zero overrides for any hostname signed by that CA.

This is exactly how Let's Encrypt works - you trust their root CA once (built into browsers), and any cert they sign "just works."

What Devs Need To Do (One Time Setup)

  1. Get the root cert from your Caddy server:
# On the Caddy VM
cat /pv/CERTS/pki/authorities/local/root.crt
  2. Devs install it in their OS/browser:
  • macOS:
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain root.crt
  • Windows: Double-click root.crt → Install Certificate → Local Machine → Place in "Trusted Root Certification Authorities"
  • Linux (Chrome/Chromium):
sudo cp root.crt /usr/local/share/ca-certificates/caddy-local.crt
sudo update-ca-certificates
  • Firefox: Preferences → Privacy & Security → Certificates → View Certificates → Authorities → Import
  3. Done forever
  • Every hostname shows a green padlock with zero warnings
  • Caddy signs certs on-demand for any matching hostname. Devs never see warnings again.

Superior to clicking through certificate warnings, which:

  • trains devs to ignore security warnings (bad habit)
  • has to be done per hostname
  • doesn't actually work in some browsers anymore

The internal CA approach is the professional way to handle internal dev HTTPS. You give devs a Slack message with instructions; your devs install one cert.
