HinD
Installs nomad, consul, and caddyserver (router) together as a mini cluster running inside a single podman container.
Nomad jobs will run as podman containers on the VM itself, orchestrated by nomad, leveraging /run/podman/podman.sock.
The brilliant consul-template is used as "glue" between consul and caddyserver, turning caddyserver into an always up-to-date reverse proxy that routes incoming requests' Server Name Indication (SNI) to running containers :)
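Since everything hinges on nomad talking to the podman API socket, a quick way to confirm the socket is live on your VM is sketched below (the `/_ping` endpoint is part of podman's Docker-compatible REST API; this check is an illustration, not part of the installer):

```sh
# Confirm the podman API socket is active and answering.
sudo systemctl status podman.socket
sudo curl -s --unix-socket /run/podman/podman.sock http://d/_ping && echo   # expect: OK
```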
This will "bootstrap" your cluster with a private, unique NOMAD_TOKEN, and `sudo podman run` a new container with the hind service into the background. (source)

```sh
curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh
```

Requirements:
- a VM you can `ssh` into
- a VM with the `podman` package installed
- if using a firewall (like `ferm`, etc.), make sure the following ports are open from the VM to the world (see the sketch after this list):
  - 443 - https
  - 80 - http (load balancer will auto-upgrade/redirect to https)

@see the #VM-Administration section for more info.
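If you manage the firewall with plain iptables rather than `ferm`, opening those two ports might look like this minimal sketch (illustrative only; adapt to whatever firewall tooling you actually use):

```sh
# Allow inbound HTTP and HTTPS from anywhere.
sudo iptables -A INPUT -p tcp --dport 80  -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT
```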
The ideal experience is that you point a dns wildcard at the IP address of the VM running your hind system.
This allows automatically-created hostnames from a CI/CD pipeline's [deploy] stage to use the [git group/organization + repository name + branch name] to create a nice semantic DNS hostname for your webapps to run as and load from, and everything will "just work".
For example, *.example.com DNS wildcard pointing to the VM where hind is running, will allow https://myteam-my-repo-name-my-branch.example.com to "just work".
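A quick way to confirm the wildcard record is in place before deploying anything (the hostname and IP shown here are placeholders):

```sh
# Any name under the wildcard should resolve to the VM's public IP.
dig +short my-branch-test.example.com
# e.g. expected output: 203.0.113.10  (your VM's address)
```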
We use caddy (which incorporates zerossl and Let's Encrypt) to create single-host https certs on demand, as service discovery from consul announces new hostnames.
This is our Dockerfile.

```sh
git clone https://github.com/internetarchive/hind.git
cd hind
sudo podman build --network=host -t ghcr.io/internetarchive/hind:main .
```

We suggest you use the same approach mentioned in the nomad repo README.md, which will ultimately use a templated project.nomad file.
We use this in multiple places for nomad clusters at archive.org. We pair it with our fully templatized project.nomad. Working nicely:
- secrets, tokens
- persistent volumes
- deploys with multiple public ports
- and more -- everything here
Get your nomad access credentials so you can run `nomad status` anywhere you have downloaded the nomad binary (including a home mac/laptop, etc.).

From a shell on your VM:

```sh
export NOMAD_ADDR=https://$(hostname -f)
export NOMAD_TOKEN=$(sudo podman run --rm --secret NOMAD_TOKEN,type=env hind sh -c 'echo $NOMAD_TOKEN')
```

Then, `nomad status` should work.
(Download the nomad binary to the VM or your home dir if/as needed.)

You can also open the NOMAD_ADDR (above) in a browser and enter your NOMAD_TOKEN.
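As a quick sanity check once both variables are exported, these standard nomad CLI subcommands should answer (the output described is only indicative):

```sh
# Confirm the CLI can reach and authenticate against the cluster.
nomad server members   # the server should be listed as "alive"
nomad node status      # this VM should be listed as a "ready" client
```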
You can try a trivial website job spec from the cloned repo:

```sh
# you can manually set NOMAD_VAR_BASE_DOMAIN to your wildcard DNS domain name
# if different from the domain of your NOMAD_ADDR
export NOMAD_VAR_BASE_DOMAIN=$(echo "$NOMAD_ADDR" |cut -f2- -d.)
nomad run https://internetarchive.github.io/hind/etc/hello-world.hcl
```
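Once that job is submitted, you can check on it with the usual nomad CLI (the job name `hello-world` is assumed from the spec filename; `nomad status` with no arguments lists the real name):

```sh
# Watch the allocation come up, then tail its logs.
nomad status hello-world
nomad alloc logs -job hello-world
```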
Here are a few environment variables you can pass in to your initial install.sh run above, eg:

```sh
curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e ON_DEMAND_TLS_ASK=...
```

- `-e TRUSTED_PROXIES=[CIDR IP RANGE]` - optionally allow certain `X-Forwarded-*` headers; otherwise defaults to `private_ranges` (more info)
- `-e NOMAD_ADDR_EXTRA=[HOSTNAME]` - for 1+ extra, nicer https:// hostname(s) you'd like to use to talk to nomad, pass in hostname(s) in CSV format for us to set up.
- `-e ON_DEMAND_TLS_ASK=[URL]` - if you want to use caddy `on_demand_tls`, the URL to use to respond with 200/400 status codes. @see https://caddy.community/t/11179
- `-e CERTS_SELF_SIGNED=true` - if you want to use caddy `tls internal`, this will make self-signed certs, with caddy creating an internal Certificate Authority (CA). @see #self-signed-or-internal-ca below
- `-e ACME_DNS=true` - set this if you want to use ACME DNS challenges with another server for automatic https certs. @see https://caddyserver.com/docs/modules/dns.providers.acmedns
- `-e CLIENT_ONLY_NODE=true` - set this if you want to set up a client-only VM (ie: it can run jobs/containers, but doesn't participate in leader elections & consensus protocols)
- `...` - other command line arguments to pass on to the main container's `podman run` invocation.
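For example, a combined install with a couple of these options might look like the sketch below (the hostname is a placeholder; the flags themselves come from the list above):

```sh
curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- \
  -e NOMAD_ADDR_EXTRA=nomad.example.com \
  -e CERTS_SELF_SIGNED=true
```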
- see the nomad repo README.md for lots of ways to work with your deploys. There you can find details on how to check a deploy's status and logs, `ssh` into it, customize deploys, and more.
- You can set up an `ssh` tunnel thru your VM so that you can see `consul` in a browser, eg:
```sh
nom-tunnel () {
  [ "$NOMAD_ADDR" = "" ] && echo "Please set NOMAD_ADDR environment variable first" && return
  local HOST=$(echo "$NOMAD_ADDR" |sed 's/^https*:\/\///')
  ssh -fNA -L 8500:localhost:8500 $HOST
}
```

- Then run `nom-tunnel` and you can see `consul` with a browser at http://localhost:8500/
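With the tunnel active, consul's HTTP API is also reachable locally, which can be handy for scripting (this uses a standard consul endpoint; output is illustrative):

```sh
# Ask consul who the current leader is, through the forwarded port.
curl -s http://localhost:8500/v1/status/leader
```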
The process is very similar to when you set up your first VM. This time, you pass in the first VM's hostname (already in the cluster), copy 2 secrets, and run the installer. You essentially run the shell commands below on your 2nd (or 3rd, etc.) VM.

```sh
FIRST=vm1.example.com

# copy secrets from $FIRST to this VM
ssh $FIRST 'sudo podman run --rm --secret HIND_C,type=env hind sh -c "echo -n \$HIND_C"' |sudo podman secret create HIND_C -
ssh $FIRST 'sudo podman run --rm --secret HIND_N,type=env hind sh -c "echo -n \$HIND_N"' |sudo podman secret create HIND_N -

curl -sS https://internetarchive.github.io/hind/install.sh | sudo sh -s -- -e FIRST=$FIRST
```

Related: Docker-in-Docker (dind) and kind; caddyserver + consul-connect.
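Once the installer finishes on the additional VM, the new node should show up from anywhere you have nomad credentials (standard nomad and consul CLI commands; output is illustrative):

```sh
# Both VMs should now be listed as ready clients.
nomad node status

# Consul's view of the cluster, from inside the hind container on either VM.
sudo podman exec -it hind consul members
```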
Here are a few helpful admin scripts we use at archive.org -- some might be helpful for setting up your VM(s).
- bin/ports-unblock.sh - firewalls - we use `ferm`, and here you can see how we open the minimum number of HTTP/TCP/UDP ports we need to run.
- bin/setup-pv-using-nfs.sh - we tend to use NFS to share a `/pv/` disk across our nomad VMs (when the cluster is 2+ VMs; a rough sketch follows below)
- bin/setup-consul-dns.sh - consul dns name resolving -- but we aren't using this yet
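If you want to wire up a shared /pv disk by hand rather than via bin/setup-pv-using-nfs.sh, the rough shape of an NFS mount looks like this (server name and export path are placeholders, and this is not necessarily what the script itself does):

```sh
# Mount an NFS export at /pv and persist it across reboots.
sudo mkdir -p /pv
echo 'nfs-server.example.com:/export/pv  /pv  nfs  defaults  0 0' | sudo tee -a /etc/fstab
sudo mount /pv
```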
- Older OS (eg: ubuntu focal) may not enable `podman.socket`. If bootstrapping fails, on linux, you can run:

```sh
sudo systemctl enable --now podman.socket
```

- If the main `podman run` is not completing, check your `podman` version to see how recent it is. The `nomad` binary inside the setup container can segfault due to a perms change. You can either upgrade your podman version or try adding this install.sh CLI option:

```sh
--security-opt seccomp=unconfined
```

- `docker push` repeatedly fails with "running out of memory" deep errors? Try:
```sh
sysctl net.core.netdev_max_backlog=30000
sysctl net.core.rmem_max=134217728
sysctl net.core.wmem_max=134217728

# to persist across reboots:
echo '
net.core.netdev_max_backlog=30000
net.core.rmem_max=134217728
net.core.wmem_max=134217728' |sudo tee /etc/sysctl.d/90-tcp-memory.conf
```

- client IP addresses will be in the request header 'X-Forwarded-For' (per caddy)
- pop inside the HinD container:

```sh
sudo podman exec -it hind zsh
```

- get a list of `consul` services:

```sh
wget -qO- 'localhost:8500/v1/catalog/services?tags=1' | jq .
```

- get the `caddy` config:

```sh
wget -qO- localhost:2019/config/ | jq .
```

- If your podman seems to be running out of locks: see the `num_locks` part in install.sh and consider increasing it or opening a GitHub issue.

```sh
# https://docs.podman.io/en/latest/markdown/podman-system-renumber.1.html
podman -r system renumber
```

- If your HinD container seems to be unable to fork processes: see the `--pids-limit` CLI arg part in install.sh and consider increasing it or opening a GitHub issue.

```sh
# check HinD container's current pids limit:
cat /sys/fs/cgroup/$(podman inspect --format '{{.State.CgroupPath}}' hind)/pids.max
```

- devs just need to trust Caddy's root CA cert once (Caddy can generate it for you)
- this is easier for internal dev environments
```
https://*.example.com {
    # use caddy's internal certificate authority -- no ACME challenges needed
    tls internal
    reverse_proxy ...
}
```

When you use Caddy `tls internal`, caddy automatically creates its own Certificate Authority (CA) with:
- A root CA certificate
- A private key for signing

This happens automatically on first run. The root CA cert is stored at:

```
/pv/CERTS/pki/authorities/local/root.crt
```

- Devs install/trust Caddy's root CA cert one time in their browser/OS.
- Caddy's internal CA signs certificates for *.example.com, foo.example.com, bar-branch-123.example.com, etc.
- Browser sees these certs are signed by the already-trusted Caddy CA.
- Zero warnings, zero clicks, zero overrides for any hostname signed by that CA.
This is exactly how Let's Encrypt works - you trust their root CA once (built into browsers), and any cert they sign "just works."
- Get the root cert from your Caddy server:

```sh
# On the Caddy VM
cat /pv/CERTS/pki/authorities/local/root.crt
```

- Devs install it in their OS/browser:

macOS:

```sh
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain root.crt
```

Windows: Double-click root.crt → Install Certificate → Local Machine → Place in "Trusted Root Certification Authorities"

Linux (Chrome/Chromium):

```sh
sudo cp root.crt /usr/local/share/ca-certificates/caddy-local.crt
sudo update-ca-certificates
```

Firefox: Preferences → Privacy & Security → Certificates → View Certificates → Authorities → Import
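After trusting the root CA, one quick check that a deployed hostname is now served with a cert the OS accepts (the hostname is a placeholder; on Linux, curl uses the system trust store updated above):

```sh
# Should print an HTTP status line with no certificate warnings and no -k needed.
curl -sI https://myteam-my-repo-name-my-branch.example.com | head -1
```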
- Done forever
- Every hostname shows a green padlock with zero warnings
- Caddy signs certs on-demand for any matching hostname; devs never see warnings again.
Superior to clicking through certificate warnings, which:
- trains devs to ignore security warnings (bad habit)
- has to be done per hostname
- doesn't actually work in some browsers anymore
The internal CA approach is the professional way to handle internal dev HTTPS. You give devs a Slack message with instructions; your devs install one cert.
