It All Started with a Power Surge
Everything was humming along smoothly until one day earlier this year, when we had a big, dramatic Texas thunderstorm. Lightning caused a transformer to explode, and the resulting power surge took out my old x86 server, a Raspberry Pi I was using for games with RetroPie, a big hard drive full of data, an IPS monitor, and the AV receiver connected to my PC (it also took out all the outlets in the garage, but that’s another story).
“Needless to say, I was despondent about the meltdown. In the midst of my preparations for hari-kari… it came to me.” — Real Genius
After arriving at the Acceptance stage of grief, I started to dream up a newer, better, cheaper replacement. Before long a package arrived with my new server: a Raspberry Pi 5, a Pironman 5 case, and a 1TB NVMe SSD.
Meet GIR
Named for Invader Zim’s insane robot sidekick (who often disguised himself in a zip-up, green dog costume), GIR proved to be a powerful replacement. Built around the concept of memento mori, everything is managed, backed up twice and stored in git. It has become a sort of Docker mothership and glue layer for my homelab and one of my favorite machines.
© Nickelodeon / Jhonen Vasquez
The Hardware
- Raspberry Pi 5 — These little hobbyist boards are relatively cheap and powerful and make a decent Linux box.
- 1TB NVMe SSD — NVMe SSDs are fast and fairly inexpensive these days.
- Pironman 5 case — This case has it all and then some.
- External USB drive — I have a 10 TB drive for file sharing, but it strains the I/O on this board so I’ll probably relocate it soon.
The Pironman5 Case
The Pironman 5 case is a superb alternative to the typical plastic cases available for the Pi. While there is definitely some assembly required, it provides all the features I needed in an attractive package.
- Active cooling (a beefy heatsink covers the main chips while a trio of fans ensure that your Pi stays nice and cool)
- OLED screen (a tiny, programmable OLED screen that by default shows the IP address, CPU and memory usage, and temperature right on the case)
- NVMe support (support for NVMe drives is somewhat rare so this was a huge feature)
- Looks cool (the “Saturday morning cartoon villain HQ” vibe)
If you’re in the market for a cool case, check out the reviews, like this one from Tom’s Hardware.
🐋 The Docker Stack
Most of GIR’s magic comes from Docker. Each service lives in its own container — easy to maintain, easy to update, no dependency tangles.
Here’s how I keep things organized: grouped by what they do, with one‑line descriptions. If you want the nitty‑gritty details, expand the sections for notes, tricky bits, and docker compose blocks. For a casual read, there is no need to expand every one unless you want to play Sim City with containers like I have.
🛠 System Tools
Pi-hole – DNS server and ad-blocker for the whole network.
Docker Compose:

version: '3.3'
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    hostname: pihole
    restart: always
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"
      - "8443:443/tcp"
    environment:
      TZ: "America/Chicago"
      WEBPASSWORD: "${WEBPASSWORD}" # stored in a .env file
      DHCP_ROUTER: "192.168.50.1" # example router IP
      VIRTUAL_HOST: "pihole.example.lan" # safe placeholder domain
      PIHOLE_DNS_: "192.168.50.1;1.1.1.1" # internal + public resolver
    volumes:
      - /var/docker/pihole/etc-pihole:/etc/pihole
      - /var/docker/pihole/etc-dnsmasq.d:/etc/dnsmasq.d
    networks:
      pihole_macvlan_network:
        ipv4_address: 192.168.50.5 # example Pi-hole IP
networks:
  pihole_macvlan_network:
    external: true

🔒 Notes:
- Mounts /etc/dnsmasq.d and /etc/pihole for persistent config.
- The .env file (holding WEBPASSWORD) should never be committed to git.
Nginx Proxy Manager (NPM) – Handles all my proxy rules and SSL.
Docker Compose / Portainer stack definition:

services:
  npm:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy-manager
    restart: always
    ports:
      - '80:80'
      - '81:81' # UI (for managing proxy rules)
      - '443:443'
    volumes:
      - /var/docker/npm/data:/data
      - /var/docker/npm/letsencrypt:/etc/letsencrypt
Portainer – Web GUI to manage every container.
Docker Compose:

version: '3.8'
services:
  portainer:
    image: portainer/portainer-ce:latest
    container_name: portainer
    restart: always
    ports:
      - "8000:8000"
      - "9000:9000" # Web UI (HTTP)
      - "9443:9443" # Web UI (HTTPS)
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /var/docker/portainer:/data
    networks:
      - npm_default # So NPM can proxy Portainer too
networks:
  npm_default:
    external: true

💡 Note: Portainer needs access to /var/run/docker.sock to manage Docker (that’s normal — but it’s essentially root-level access).
Pi.Alert – Network presence monitor.
Docker Compose:

version: '3.3'
services:
  pialert:
    image: jokobsk/pi.alert:latest
    container_name: pialert
    network_mode: host # Needed for full network visibility
    restart: unless-stopped
    volumes:
      - /var/docker/pialert/config:/home/pi/pialert/config
      - /var/docker/pialert/db:/home/pi/pialert/db
    cap_add:
      - NET_ADMIN
      - NET_RAW
    environment:
      - TZ=America/Chicago

🔒 Notes:
- network_mode: host is required so Pi.Alert can see the whole LAN (but it means the container is tightly bound to the host network — don’t expose it externally).
- Needs NET_ADMIN and NET_RAW capabilities to sniff devices — normal for this app, but be mindful of what else you mount in here.
Syncthing – Syncs files between all my machines.
Docker Compose:

version: "3.8"
services:
  syncthing:
    image: syncthing/syncthing:latest
    container_name: syncthing
    restart: unless-stopped
    hostname: gir
    networks:
      - npm_default
    volumes:
      - /var/docker/syncthing/config:/var/syncthing/config
      - /var/docker/syncthing/data:/var/syncthing/data
    ports:
      - "8384:8384" # Web UI
      - "22000:22000/tcp"
      - "22000:22000/udp"
      - "21027:21027/udp" # Local discovery
networks:
  npm_default:
    external: true

🔒 Notes:
- Port 8384 is the Syncthing Web UI — fine for LAN, but best proxied through Nginx Proxy Manager if you want remote access with HTTPS/auth.
- Joins npm_default so it plays nicely with your proxy rules.
- Config persists on the host (/var/docker/syncthing/config).
Statix – A super-lightweight web server for static files.
Docker Compose:

version: '3.8'
services:
  statix:
    image: nginx:alpine
    container_name: statix
    ports:
      - "8081:80"
    volumes:
      - /var/www/html:/usr/share/nginx/html:ro
      - /var/docker/statix/nginx.conf:/etc/nginx/conf.d/default.conf:ro
    restart: unless-stopped
    networks:
      - npm_default
networks:
  npm_default:
    external: true

🔒 Notes:
- Perfect for quick “drop a file in /var/www/html and share” moments.
- :ro (read-only) on the mounts means nginx can’t overwrite your files or configs — safer and cleaner.
- Joins npm_default so it can be proxied via Nginx Proxy Manager if needed.
🏠 Dashboards & Home Pages
Dashy – My main homelab dashboard and jumping-off point.
Docker Compose:

version: '3.8'
services:
  dashy:
    container_name: dashy
    image: lissy93/dashy:latest
    ports:
      - "8090:8080"
    volumes:
      - /var/docker/dashy/config.yml:/app/user-data/conf.yml
      - ./my-nginx.conf:/etc/nginx/conf.d/default.conf:ro
    restart: always
    networks:
      - npm_default
networks:
  npm_default:
    external: true

🔒 Notes:
- config.yml holds all the dashboard items and widgets — you can track that file in Git for easy rollbacks.
- Mounting a custom nginx config (my-nginx.conf) gives you full control over how Dashy serves content.
- Joins npm_default so Nginx Proxy Manager can front it with HTTPS/auth.
Heimdall – A simple, lightweight launcher for quick access.
Docker Compose:

services:
  heimdall:
    container_name: heimdall
    image: lscr.io/linuxserver/heimdall:latest
    restart: always
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
    volumes:
      - /var/docker/heimdall:/config
    ports:
      - "8082:80"

🔒 Notes:
- PUID/PGID (1000) match your main user, so Heimdall writes config files with the right ownership.
- Runs on port 8082 — easy to proxy through Nginx Proxy Manager later.
🎛 Monitoring & Metrics
Netdata – Real-time performance monitoring for GIR.
Docker Compose:

version: "3.8"
services:
  netdata:
    image: netdata/netdata:latest
    container_name: netdata
    ports:
      - "19999:19999"
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
    volumes:
      - netdata_config:/etc/netdata
      - /proc:/host/proc:ro
      - /sys:/host/sys:ro
      - /etc/os-release:/host/etc/os-release:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      - default
      - npm_default
    environment:
      - DO_NOT_TRACK=1
    mem_limit: 256m
    cpus: 0.4 # limit to 40% of one core
    restart: unless-stopped
volumes:
  netdata_config:
networks:
  npm_default:
    external: true

🔒 Notes:
- Read-only mounts of /proc and /sys — Netdata can see stats without modifying the host.
- mem_limit and cpus keep it lightweight on the Pi, so GIR doesn’t feel sluggish.
Uptime Kuma – Simple, slick uptime monitoring.
Docker Compose:

version: '3.8'
services:
  uptime-kuma:
    container_name: uptime-kuma
    image: louislam/uptime-kuma:latest
    restart: always
    ports:
      - "3001:3001"
    volumes:
      - /var/docker/uptime-kuma:/app/data
    environment:
      - TZ=America/Chicago
    networks:
      - npm_network
networks:
  npm_network:
    driver: bridge

🔒 Notes:
- Data persists in /var/docker/uptime-kuma — easy to back up.
- Runs on port 3001 by default; you can proxy it through Nginx Proxy Manager for HTTPS and remote access.
- Uses its own npm_network, but can be attached to your main npm_default if you want one shared network.
Speedtest Tracker – Logs internet speed over time.
Docker Compose:

version: "3.3"
services:
  speedtest-tracker:
    container_name: speedtest-tracker
    image: ghcr.io/alexjustesen/speedtest-tracker:latest
    restart: unless-stopped
    ports:
      - "8765:80"
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Chicago
      - DB_CONNECTION=sqlite
      - DB_DATABASE=/config/database/database.sqlite # key setting for SQLite mode
    volumes:
      - /var/docker/speedtest-tracker/config:/config
      - /var/docker/speedtest-tracker/web:/etc/ssl/web

🔒 Notes:
- Data persists in /config — easy to back up.
- An SSL cert directory (/etc/ssl/web) can be used if you want to add HTTPS later.
- Runs on port 8765 by default — proxy through Nginx Proxy Manager for remote access or SSL.
🧰 Developer Tools
Gitea – My self-hosted Git server and code hub.
Docker Compose:

version: "3.8"
services:
  gitea:
    image: gitea/gitea:latest
    container_name: gitea
    restart: unless-stopped
    environment:
      - USER_UID=1000
      - USER_GID=1000
      - GITEA__server__DOMAIN=git.example.lan
      - GITEA__server__ROOT_URL=https://git.example.lan/
      - GITEA__server__SSH_DOMAIN=git.example.lan
      - GITEA__server__SSH_PORT=222
    volumes:
      - /var/docker/gitea:/data
    ports:
      - "3003:3000" # Web interface
      - "222:22" # SSH for git
    networks:
      - npm_default
networks:
  npm_default:
    external: true

🔒 Notes:
- Data persists in /var/docker/gitea — easy to back up.
- Ports: 3003 for the web UI, 222 for Git over SSH.
- Joins npm_default so Gitea sits neatly behind Nginx Proxy Manager for HTTPS.
- Placeholder domain (git.example.lan) used here — swap in your own LAN/SSL domain.
Open-WebUI – A web front end for local LLM experiments.
Docker Compose:

version: "3.8"
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "3000:8080"
    volumes:
      - open-webui-data:/app/backend/data
    extra_hosts:
      - "host.docker.internal:host-gateway"
    restart: always
volumes:
  open-webui-data:

🔒 Notes:
- Data persists in the open-webui-data named volume — easy to back up.
- Runs on port 3000 — proxy through Nginx Proxy Manager for HTTPS and remote access.
- Uses host.docker.internal to connect to the host environment (helpful if models or tools live outside this container).
Jupyter – Interactive notebooks for Python tinkering and data work.
Docker Compose:

version: '3.8'
services:
  jupyter:
    image: jupyter/scipy-notebook:latest
    container_name: jupyter
    ports:
      - "8888:8888"
    volumes:
      - /var/docker/jupyter/notebooks:/home/jovyan/work
    command: start-notebook.sh --NotebookApp.token='' --NotebookApp.password='sha1:PUT-YOUR-HASH-HERE'
    restart: unless-stopped

🔑 Set a password hash: Run this once on GIR (or any machine with Python):

docker run --rm jupyter/scipy-notebook:latest \
  python -c "from notebook.auth import passwd; print(passwd())"

Copy the hash (looks like sha1:123abc…) and paste it into --NotebookApp.password='sha1:YOUR-HASH' in the compose file.

🔒 Notes:
- --NotebookApp.token='' disables the annoying one-time token login. It is still password protected, so anyone on your LAN will need that password to log in.
- Stores notebooks in /var/docker/jupyter/notebooks for persistence and easy backups.
📺 Media & Fun (Occasional Use)
Jellyfin – A self-hosted media server for movies, shows, and music.
Docker Compose:

version: "3.8"
services:
  jellyfin:
    image: jellyfin/jellyfin:latest
    container_name: jellyfin
    ports:
      - "8096:8096"
    volumes:
      - /var/docker/jellyfin/config:/config
      - /var/docker/jellyfin/cache:/cache
      - /mnt/sldf/VIDEO:/media
    environment:
      - TZ=America/Chicago
      - PUID=1000
      - PGID=1000
    restart: unless-stopped
    networks:
      - npm_default
networks:
  npm_default:
    external: true

🔒 Notes:
- Config lives in /var/docker/jellyfin/config, and cache goes to /var/docker/jellyfin/cache for smoother playback.
- The media library is mounted from /mnt/sldf/VIDEO — swap in your own mount point if you use this snippet.
- Runs on port 8096; proxy it through Nginx Proxy Manager if you want HTTPS or external access.
PhotoPrism – A private photo library and indexing tool.
Docker Compose:

version: '3.7'
services:
  photoprism:
    image: photoprism/photoprism:latest
    container_name: photoprism
    restart: unless-stopped
    ports:
      - "2342:2342"
    environment:
      PHOTOPRISM_ADMIN_USER: "admin"
      PHOTOPRISM_ADMIN_PASSWORD: "changeme" # ❗ replace in .env for real setup
      PHOTOPRISM_ORIGINALS_LIMIT: 5000
      PHOTOPRISM_HTTP_COMPRESSION: "gzip"
      PHOTOPRISM_LOG_LEVEL: "info"
      PHOTOPRISM_DISABLE_TLS: "true"
      PHOTOPRISM_SITE_TITLE: "Photo Archive"
      PHOTOPRISM_UPLOAD_NSFW: "true"
    volumes:
      - /var/docker/photoprism/storage:/photoprism/storage
      - /mnt/sldf/ImageLibrary:/photoprism/originals
    depends_on:
      - mariadb
  mariadb:
    image: mariadb:10.11
    container_name: photoprism-db
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: "rootpass" # ❗ move to .env
      MYSQL_DATABASE: "photoprism"
      MYSQL_USER: "photoprism"
      MYSQL_PASSWORD: "secret" # ❗ move to .env
    volumes:
      - /var/docker/photoprism/db:/var/lib/mysql

🔒 Notes:
- Move the passwords marked ❗ into a .env file and reference them instead.
- TLS is disabled in the container (PHOTOPRISM_DISABLE_TLS=true) because HTTPS is handled by Nginx Proxy Manager.
- The originals library (/mnt/sldf/ImageLibrary) and the PhotoPrism storage directory are separate for sanity and backups.
qBittorrent + GlueTun – Private torrenting via VPN.
Docker Compose:

version: '3.3'
services:
  gluetun:
    image: qmcgaw/gluetun
    container_name: qbittorrent-vpn
    cap_add:
      - NET_ADMIN
    devices:
      - /dev/net/tun
    volumes:
      - /var/docker/bittorrent/config/openvpn:/gluetun
    environment:
      - VPN_SERVICE_PROVIDER=custom
      - VPN_TYPE=openvpn
      - DOT=off
      - DNS=192.168.1.1
      - OPENVPN_CUSTOM_CONFIG=/gluetun/airvpn.ovpn
      - TZ=America/Chicago
      - FIREWALL_VPN_INPUT_PORTS=22112
    ports:
      - 8080:8080 # qBittorrent Web UI
      - 6881:6881 # BitTorrent TCP
      - 6881:6881/udp # BitTorrent UDP
      - 22112:22112 # AirVPN port (TCP)
      - 22112:22112/udp # Optional UDP trackers
    restart: unless-stopped
  qbittorrent:
    image: linuxserver/qbittorrent
    container_name: qbittorrent
    depends_on:
      - gluetun
    network_mode: "service:gluetun"
    environment:
      - PUID=1000
      - PGID=1000
      - WEBUI_PORT=8080
    volumes:
      - /var/docker/bittorrent/config:/config
      - /mnt/sldf/downloads:/downloads
      - /mnt/sldf/downloads/watch:/watch
    mem_limit: 512m # RAM cap
    cpus: 0.75 # CPU limit (75% of one core)
    restart: unless-stopped

🔒 Notes:
- Keep your .ovpn file and any login credentials in /var/docker/bittorrent/config/openvpn — don’t hardcode them in the compose file.
- network_mode: service:gluetun forces all qBittorrent traffic through GlueTun — if the VPN drops, qBittorrent is cut off.
- Port 8080 is the Web UI (LAN only), 6881 handles BitTorrent, and 22112 is your forwarded VPN port.
zram for Efficient Swap
One tweak that has paid dividends has been to create a zram swap device with a lower-priority swap file behind it. zram compresses swapped-out pages in RAM, sometimes effectively doubling the amount of data that can be held in memory. The system falls back to the SSD only when RAM and the zram swap are completely full. Pretty easy to set up:
🌀 ZRAM + Fallback Swapfile (Quick Setup)
# 1️⃣ Install and enable ZRAM (2 GB compressed swap)
sudo apt install zram-tools
echo -e "SIZE=2048\nALGO=zstd\nPRIORITY=100" | sudo tee /etc/default/zramswap
sudo systemctl enable --now zramswap
# 2️⃣ Add a 16 GB fallback swapfile on NVMe
sudo fallocate -l 16G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon --priority 10 /swapfile
echo '/swapfile none swap sw,pri=10 0 0' | sudo tee -a /etc/fstab
# 3️⃣ Verify
swapon --show
That’s it! Efficient swap management on a memory-constrained device.
Headless but with a Virtual Head Anyway
This was an exercise in masochism so I’m documenting my working setup here. I hope it helps someone else avoid the confusion and frustration I encountered.
These are the exact steps used to set up GIR’s “virtual head” — a full XFCE desktop available over VNC.
Quick Start (LAN VNC with XFCE)

1️⃣ Install packages:

sudo apt update
sudo apt install tigervnc-standalone-server xfce4 xfce4-goodies

2️⃣ Create ~/.vnc/xstartup:

mkdir -p ~/.vnc
nano ~/.vnc/xstartup

Contents:

#!/bin/sh
unset SESSION_MANAGER
unset DBUS_SESSION_BUS_ADDRESS
export XKL_XMODMAP_DISABLE=1
export DISPLAY=:1
xrdb $HOME/.Xresources
exec startxfce4

Make executable:

chmod +x ~/.vnc/xstartup

3️⃣ Set a VNC password:

vncpasswd

4️⃣ Create & enable the systemd service:

sudo nano /etc/systemd/system/vncserver@.service

Paste in:

[Unit]
Description=Start TigerVNC server at startup
After=syslog.target network.target

[Service]
Type=forking
User=grumble
Group=grumble
WorkingDirectory=/home/grumble
PIDFile=/home/grumble/.vnc/%H:%i.pid
ExecStartPre=-/usr/bin/tigervncserver -kill :%i > /dev/null 2>&1
ExecStart=/usr/bin/tigervncserver :%i -localhost no -geometry 1280x800 -depth 24
ExecStop=/usr/bin/tigervncserver -kill :%i

[Install]
WantedBy=multi-user.target

Enable and start:

sudo systemctl enable vncserver@1.service
sudo systemctl start vncserver@1.service

5️⃣ Connect from a VNC client to raspberrypi:1 (or the Pi’s LAN IP with :1).

✅ Result: A persistent XFCE desktop on your Pi, ready whenever you connect.
🔒 Notes & Tips
- -localhost no means anyone on your LAN can connect. For SSH tunnel–only access, change it to -localhost yes.
- Check ~/.vnc/*.log if the service doesn’t start.
- The password is set via vncpasswd and stored in ~/.vnc/passwd.
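One detail that tripped me up: VNC display numbers map to TCP ports as 5900 + N, so the service above (display :1) listens on 5901. A hedged sketch of tunnel-only access if you flip to -localhost yes (the user and hostname are placeholders):

```shell
# Display :N listens on TCP port 5900+N, so display :1 -> 5901
display=1
port=$((5900 + display))
echo "$port"   # → 5901
# Forward that port over SSH, then point your VNC client at localhost:5901
# ssh -L "$port:localhost:$port" grumble@raspberrypi
```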
SSH & Dotfiles Integration
I won’t go into all the gory details, but one of the biggest reasons GIR “just works” is the way SSH and Git are woven into everything.
All my machines — laptops, desktops, servers — share one dotfiles repo. That repo controls my shell prompt, aliases, functions, and toolchain. No matter which machine I’m on, I feel like I’m “at home.”
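The mechanism behind this is simple: the repo holds the real files, and each machine symlinks them into $HOME. Here's a toy version run under /tmp so nothing real gets touched (the repo layout and filenames are my illustration, not necessarily a universal convention):

```shell
# Toy dotfiles bootstrap: repo files map 1:1 to hidden files in the home dir
demo_home=/tmp/dotfiles-demo          # stand-in for $HOME
repo="$demo_home/dotfiles"
mkdir -p "$repo"
printf 'alias ll="ls -la"\n' > "$repo/bashrc"   # stand-in for a tracked dotfile

# Link every repo file to its hidden counterpart: bashrc -> .bashrc, etc.
for f in "$repo"/*; do
  ln -sf "$f" "$demo_home/.$(basename "$f")"
done

readlink "$demo_home/.bashrc"   # → /tmp/dotfiles-demo/dotfiles/bashrc
```

Because the home-directory files are symlinks, a `git pull` in the repo updates every machine's shell config in place.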
The other half of the puzzle is SSH. I keep a single, clean ~/.ssh/config
(also tracked in Git) that knows how to reach every machine, and every machine has the right public keys installed so I can hop around without typing passwords.
Here’s an example (sanitized) of how I keep things neat:
# dotfiles/ssh/EXAMPLE.config
# Example SSH config — customize as needed and then rename from EXAMPLE.config to config
Host myserver
HostName example.com
User myuser
IdentityFile ~/.ssh/id_ed25519
Host github.com
HostName git.example.com
User git
IdentityFile ~/.ssh/id_github_ed25519
IdentitiesOnly yes
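Getting to "no passwords anywhere" is one keypair plus one copy step per host. A sketch (the user and host are placeholders; the key filename matches the example config above):

```shell
# Generate a key once (no passphrase here; add one if your threat model wants it)
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
key="$HOME/.ssh/id_ed25519"
[ -f "$key" ] || ssh-keygen -t ed25519 -f "$key" -N '' -q

# Then push the public half to each machine listed in the SSH config:
# ssh-copy-id -i "$key.pub" myuser@example.com
```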
How I Use GIR
So hopefully this post will not be as tedious to read as it was to write. I did put a lot of work into crafting this system, but how do I actually use it?
I keep some of the Docker services powered off normally and only activate them as needed. Others are always up and heavily used (like Gitea). Having my infrastructure services hosted in one place is a big win. The key to keeping it sane lies in how I use Pi-hole and NPM. For each service, I create an A record for servicename.darkstar.home that points to GIR’s IP address. I then create an Nginx Proxy Manager host rule that maps the friendly domain name to the service (whether on Docker or standalone). This keeps my namespace memorable and easy to link, and saves me from having to remember which port numbers or paths I used.
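Under the hood, the Pi-hole half of this scheme is just local DNS records, with every friendly name pointing at the same box. A minimal sketch (the IP and hostnames are placeholders, and it writes a scratch file rather than the live /etc/pihole/custom.list):

```shell
# Each service gets an A record pointing at GIR; NPM then routes by hostname.
GIR_IP="192.168.50.5"   # placeholder — use your server's LAN IP
cat > /tmp/custom.list <<EOF
$GIR_IP gitea.darkstar.home
$GIR_IP dashy.darkstar.home
$GIR_IP status.darkstar.home
EOF

# Sanity check: every record resolves to the same host
grep -c "^$GIR_IP " /tmp/custom.list   # → 3
```

NPM takes it from there, matching on the Host header and forwarding to the right container port.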
On each of my machines I have a folder of bookmarks to my services. With Dashy, however, I have built a homepage that links to everything I host, so that single link is enough to browse and reach all the others.
It’s clean, friendly and works very well.
The final piece of the puzzle is scheduled backups.
Backups all the way down
GIR might be a little chaos gremlin, but I keep it on a short leash. Every night a systemd timer quietly kicks off a backup script that grabs all my Docker volumes, compresses them, and sweeps away the cruft.
Here’s the core script (lives at /usr/local/bin/docker-backup.sh):

#!/bin/bash
set -euo pipefail

BACKUP_DIR="/mnt/sldf/backup"
TIMESTAMP=$(date +"%Y-%m-%d_%H-%M-%S")
BACKUP_FILE="$BACKUP_DIR/docker_backup_$TIMESTAMP.tar.gz"
SOURCE_DIR="/var/docker"

# Create the backup (exclude the destination path)
tar -czf "$BACKUP_FILE" \
  --exclude="$BACKUP_FILE" \
  --exclude="$BACKUP_DIR" \
  "$SOURCE_DIR"

# Keep only the 3 most recent backups, delete older ones
ls -t "$BACKUP_DIR"/docker_backup_*.tar.gz | tail -n +4 | xargs -r rm -f

# Output completion message and notify Uptime Kuma
echo "Backup completed: $BACKUP_FILE"
curl -fsS --retry 3 \
  "http://status.darkstar.home/api/push/hm3iRuOyHT?status=up&msg=OK"

A systemd service runs the script:

# /etc/systemd/system/docker-backup.service
[Unit]
Description=Nightly Docker Backup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/docker-backup.sh

And the systemd timer makes sure it happens every night:

# /etc/systemd/system/docker-backup.timer
[Unit]
Description=Run Docker Backup Every Night

[Timer]
OnCalendar=*-*-* 03:00:00
Persistent=true

[Install]
WantedBy=timers.target

Enable it once, and it just hums away:

sudo systemctl enable --now docker-backup.timer

Result: nightly backups, no drama, and only the three most recent snapshots kept — clean and lean, just the way I like it.
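The only subtle line in the script is the retention pipeline: `ls -t` sorts newest-first, `tail -n +4` takes everything after the third entry, and `xargs -r rm -f` deletes it. A quick sandbox demo on dummy files:

```shell
# Demonstrate the keep-3 prune rule on fake backups in /tmp
mkdir -p /tmp/bk-demo && cd /tmp/bk-demo

# Five fake backups with distinct timestamps (oldest first)
for i in 1 2 3 4 5; do
  touch -d "2024-01-0$i" "docker_backup_$i.tar.gz"
done

# Same pipeline as the real script
ls -t docker_backup_*.tar.gz | tail -n +4 | xargs -r rm -f

ls docker_backup_*.tar.gz   # only _3, _4, and _5 (the newest three) remain
```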
✅ Conclusion
GIR started life as a recovery project after my old server’s meltdown, but it has turned into something better. The infrastructure it manages has been very reliable, and despite its complexity I know I have a robust server that turns my collection of machines and VMs into a cohesive rig. While I am careful not to break what works, I am not afraid to experiment and tweak it as needed.
That’s it. I definitely battled a lot of little problems so I hope this will help others avoid the same frustrations.
📬 Got a server with a weird name, a backup ritual, or a homelab hack you love?
Tell me about it: feedback@adminjitsu.com.