Intro
Docker has become one of those critical technologies powering everything from scrappy homelabs to enterprise infrastructure. It can feel intimidating at first—so many new terms, layers, and tools—but armed with a good cheatsheet and the right explanations, Docker turns out to be surprisingly fun and almost magically powerful.
With it, you can host an entire network of services on your own machine: media servers like Jellyfin and Immich, full LAMP stacks for web development, databases, your own git server or even your own custom scripts packaged and deployed cleanly.
In this guide we’ll compile commands, concepts, official docs, and practical examples into one handy resource. Because Docker is less confusing than it seems—and with a few best practices, it can become one of the most reliable and versatile parts of your network.
What is Docker?
At its core, Docker is a way to run little computers inside your computer.
Each Docker container is its own lightweight environment, with just enough OS, libraries, and binaries to do one job well—whether that’s serving a web app, crunching data, or running a database. Instead of cluttering your base OS with packages and configs, you pull a prebuilt image and run it in isolation.
Technically speaking, Docker builds on core Linux kernel features like namespaces, cgroups, and union (overlay) filesystems to provide process isolation and resource management. Unlike traditional virtualization—which uses a hypervisor to run full guest operating systems on abstracted hardware—Docker employs OS-level virtualization (containerization), sharing the host kernel across containers and making it far more lightweight and efficient.
A bit of history:
- Docker was created in 2013 by Solomon Hykes and the team at dotCloud (a PaaS startup), and has since evolved into a massive open-source ecosystem.
- It’s primarily written in Go, with CLI tooling and APIs that work across Linux, Windows, and macOS.
- In just over a decade, it’s become a near-ubiquitous layer in modern development and operations: used in CI/CD pipelines, cloud deployments, and, increasingly, personal projects and homelabs.
Why is it so popular? A few key benefits:
- Isolation without the overhead of full VMs.
- Consistency across environments: the same image runs on your laptop, server, or in the cloud.
- Portability and sharing: Docker Hub and registries make distributing software trivial.
- Resource efficiency: containers spin up in seconds and can be packed densely on a single host.
- Flexibility: from one-off experiments to long-lived production services.
Today, Docker shows up everywhere—from billion-dollar cloud platforms to basement labs. On my own server gir.darkstar.home, I run over twenty containers: Jellyfin, Portainer, Dashy, Uptime Kuma, Gitea, Jupyter Labs, and more. It keeps everything clean, manageable, and easy to back up. I kind of love Docker!
In the sections ahead, we’ll start with a practical cheatsheet of Docker commands, then build toward hands-on tutorials: installing Portainer, containerizing a script, updating a service without losing data, and even migrating containers to a new host.
Usage: The Docker CLI Cheatsheet

Even if you lean on Docker Desktop, Portainer, or other GUIs, the command line is always there.
It’s the universal interface: every container operation can be expressed at the CLI,
whether you’re hacking on a laptop, administering a headless server, or scripting a CI/CD pipeline.
If you can run it from the CLI, you can automate it, debug it, or wrap it in a script later.
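That promise is easy to demonstrate. Below is a minimal sketch of CLI automation; the container IDs are placeholders, and the function only prints the docker stop commands it would run (a dry run) rather than executing them:

```shell
# stop_all reads container IDs (one per line, as produced by `docker ps -q`)
# and prints a `docker stop` command for each. Dry-run on purpose: pipe the
# output to `sh` when you are ready to execute for real.
stop_all() {
  while read -r id; do
    [ -n "$id" ] && echo "docker stop $id"
  done
}

# Placeholder IDs stand in for real `docker ps -q` output:
printf 'abc123\ndef456\n' | stop_all
```

In real use, the whole pipeline would be docker ps -q | stop_all | sh.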
This cheatsheet is organized into task-oriented sections, each with:
- A quick explanation of why the task matters
- A collapsible block of relevant commands
- Footnotes, usage caveats, and more
Commands may appear in more than one section if they’re useful for that workflow.
By the end, you’ll have a mental map of what to reach for, whether you’re checking logs, moving files,
or pruning old containers that are eating up disk space.
Checking What’s Running
The first step in container management is knowing what’s happening right now.
Which containers are up, which have stopped, and how much load they’re putting on your system?
This section gives you the heartbeat of your Docker host — status, ports, resource usage, and quick inspections.
Cheatsheet: Checking What’s Running
# List running containers (default view)
docker ps
# Show ALL containers (running + stopped)
docker ps -a
# Quiet mode (IDs only) — handy for scripting
docker ps -q
# Filter example: show only stopped/exited containers
docker ps -f "status=exited"
# Custom table view: names, status, ports
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
# Live resource usage (CPU, MEM, I/O, PIDs)
docker stats
# One-shot resource snapshot (no stream)
docker stats --no-stream
# Show port mappings for a container
docker port <container_name>
# Show processes running inside a container
docker top <container_name>
# Inspect container details (JSON: state, mounts, networks)
docker inspect <container_name>
# Quick: container IP address
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>
# Quick: restart count (useful for crash loops)
docker inspect -f '{{.RestartCount}}' <container_name>
Notes:
- docker ps shows only running containers; add -a to include stopped/exited ones.
- Use --format for clean custom output; in PowerShell, wrap the template in single quotes.
- docker stats streams by default; use --no-stream for scripts or one-off snapshots.
- docker port only shows published (host-exposed) ports; for internal-only ports, check the container network.
- docker top is a quick sanity check for runaway processes without attaching a shell.
- For disk usage (docker system df), see the Cleanup and Maintenance section.
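These commands compose well in scripts. A small sketch, assuming nothing beyond the --format output shown above (the sample text stands in for live docker ps -a output):

```shell
# count_exited counts lines containing "Exited" on stdin.
# Real usage would be:
#   docker ps -a --format '{{.Names}} {{.Status}}' | count_exited
count_exited() {
  grep -c 'Exited'
}

# Sample text standing in for real `docker ps -a` output:
sample='web Up 2 hours
db Exited (0) 3 days ago
cache Exited (137) 1 hour ago'
printf '%s\n' "$sample" | count_exited   # prints 2
```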
Starting, Stopping, and Restarting
Once you’ve identified a container, the next step is controlling its lifecycle.
Containers aren’t meant to be long-lived processes in the same sense as system daemons; they’re designed to start fast, stop cleanly, and be replaced when needed.
Knowing how to start, stop, restart, and remove containers lets you manage your stack gracefully instead of reaching for host-level reboots or kill -9
.
💡 Tip: If you don’t know the container’s name or ID, list them first:
docker ps -a --format "table {{.Names}}\t{{.ID}}\t{{.Status}}"
Cheatsheet: Starting, Stopping, and Restarting
# Start a stopped container
docker start <container_name>
# Stop a running container (graceful SIGTERM, then SIGKILL after timeout)
docker stop <container_name>
# Force-stop immediately (no grace period)
docker kill <container_name>
# Restart a container (stop + start)
docker restart <container_name>
# Pause/resume all processes in a container
docker pause <container_name>
docker unpause <container_name>
# Remove (delete) a container entirely
docker rm <container_name>
# Remove a container that’s still running (force)
docker rm -f <container_name>
# Automatically remove a container after it exits (good for throwaway/test runs)
docker run --rm <image>
# Start a new container in the background (detached mode)
docker run -d --name <name> <image>
# Start a new container and attach to it interactively
docker run -it --name <name> <image> /bin/bash
Notes:
- docker ps -a shows both running and stopped containers; use it before lifecycle actions.
- docker stop tries a graceful shutdown (default 10s timeout) before sending SIGKILL.
- Use docker kill if you need immediate termination, but it skips cleanup inside the container.
- docker rm deletes only the container metadata; images and volumes remain until explicitly removed.
- docker rm -f is a shortcut for “stop + remove” but can interrupt clean shutdowns.
- docker pause is niche but useful when you want to freeze a container’s CPU/memory usage without stopping it.
- --rm is great for temporary containers (debug shells, quick tests) since they clean up after themselves.
- For services, use docker run -d with a name and mapped ports so the container persists and can be restarted later.
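One judgment call worth scripting is the stop timeout: docker stop accepts -t to extend the default 10-second grace period, which databases often need. The sketch below builds (but does not run) the command; the container names are examples:

```shell
# graceful_stop prints a `docker stop` command with a configurable grace
# period. Dry-run by design: pipe the output to `sh` to actually execute.
graceful_stop() {
  name="$1"
  timeout="${2:-30}"   # default grace period: 30 seconds
  echo "docker stop -t $timeout $name"
}

graceful_stop pgtest 60   # databases may need a long flush window
graceful_stop webtest     # falls back to the 30s default
```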
Watching What Containers Are Doing
Once a container is running, you’ll often need to peek inside.
Maybe it’s to check logs for errors, monitor activity in real time, or open a shell to troubleshoot directly.
Think of this as your observation toolkit — the ways to see what’s happening under the hood without tearing the container apart.
💡 Tip: First, list container names/IDs so you know what to target:
docker ps --format "table {{.Names}}\t{{.ID}}\t{{.Status}}"
Cheatsheet: Watching What Containers Are Doing
# === Logs ===
# Follow logs in real time (like `tail -f`)
docker logs -f <container_name>
# Show the last 50 log lines
docker logs --tail 50 <container_name>
# Show logs since a specific timestamp (ISO 8601 or "1h" for last hour)
docker logs --since 1h <container_name>
# === Interactive Access ===
# Open a shell inside a running container (bash if available)
docker exec -it <container_name> /bin/bash
# Fallback: sh is more universally present
docker exec -it <container_name> sh
# Run a one-off command inside a container
docker exec -it <container_name> <command>
# === Resource Monitoring ===
# Live resource usage (CPU, memory, network, disk I/O) for one container
docker stats <container_name>
# Show resource usage for all running containers
docker stats
# === Process Inspection ===
# Show processes running inside a container
docker exec <container_name> ps aux
# === File System Inspection ===
# Browse container filesystem from outside
docker exec <container_name> ls -la /app
# Search for config files inside container
docker exec <container_name> find /etc -name "*.conf"
# === Attach to Main Process ===
# Attach directly to the container’s main process (stdout/stderr)
docker attach <container_name>
# Detach safely with CTRL-p + CTRL-q (not CTRL-c!)
# CTRL-c will usually stop the container
Notes:
- docker logs only shows what the container’s main process writes to stdout/stderr. For apps writing to files, use docker exec.
- Combine --tail and --since to make logs manageable for long-running services.
- docker exec is the safest way to explore; it runs a new process inside without disturbing the main one.
- docker stats is great for spotting runaway containers eating CPU or memory. Add --no-stream for a single snapshot.
- docker top <container> also works, but docker exec ... ps aux gives more familiar detail.
- Filesystem inspection (ls, find) via docker exec is invaluable for checking mounts, configs, or missing files.
- docker attach ties you to the main process; only use it if you know what you’re doing, and remember CTRL-p CTRL-q to detach without killing it.
- For Compose stacks, docker compose logs -f can give you a unified view across services.
Images and Building
Images are the blueprints for containers — the templates you pull, inspect, and build from.
They define what’s inside: the OS layer, libraries, binaries, and startup command.
With Docker you don’t build everything from scratch — you usually pull images from a registry and then run or customize them.
- Docker Hub → the default, largest public registry.
- GitHub Container Registry (GHCR) → popular for open-source projects (ghcr.io/<org>/<image>).
- Quay.io → another common source for official and community images.
If no registry is specified, Docker defaults to Docker Hub.
💡 Tip: Use docker image subcommands (newer) or the bare docker subcommands (older). Both work, but the image namespace is more explicit.
Cheatsheet: Images and Building
# === Discovering and Pulling Images ===
# Search for images on Docker Hub
docker search nginx
# Pull the canonical test image
docker pull hello-world
# Run it to verify Docker works
docker run hello-world
# Pull the latest version of nginx from Docker Hub
docker pull nginx
# Pull a specific version/tag of nginx
docker pull nginx:1.27
# Pull from GitHub Container Registry
docker pull ghcr.io/linuxserver/jellyfin:latest
# Pull from Quay.io
docker pull quay.io/coreos/etcd:latest
# === Listing and Inspecting ===
# List all images on your system
docker images
# Show detailed metadata for an image (JSON)
docker inspect nginx
# Show image history (layer breakdown)
docker history nginx
# === Tagging and Naming ===
# Tag an image with a new name (useful for pushing)
docker tag nginx myrepo/mynginx:latest
# Remove one or more images
docker rmi nginx
# === Building Custom Images ===
# Build an image from a Dockerfile in the current directory
docker build -t myapp:latest .
# Build using a custom Dockerfile
docker build -f Dockerfile.dev -t myapp:dev .
# Build without using the cache
docker build --no-cache -t myapp:test .
# === Cleaning Up ===
# Remove dangling (unused) images
docker image prune
# Remove ALL unused images (dangerous if you need old ones)
docker image prune -a
Notes:
- When you docker pull nginx, it comes from hub.docker.com/library/nginx.
- hello-world is the standard first test; it prints a confirmation message if Docker is working correctly.
- Before trusting an image, check its Dockerfile and last update date.
- Pin versions (e.g. postgres:15.2); don’t rely on latest in production.
- docker history helps spot bloated layers when an image is huge.
- docker tag doesn’t duplicate data; it just adds a new label for an existing image.
- Use docker rmi to clean up, but remember: if a container still depends on the image, it won’t be removed.
- Each RUN command adds a new layer and increases size.
- docker image prune -a removes all unused images; only run it if you’re sure what’s safe.
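The docker build commands above assume a Dockerfile sits in the build context. For orientation, here is a minimal sketch (myapp.py is a hypothetical script, not something from this guide):

```dockerfile
# Minimal Dockerfile sketch: small base image, copy one file, set the command.
FROM python:3.12-slim
WORKDIR /app
COPY myapp.py .
ENTRYPOINT ["python", "myapp.py"]
```

Build and run it with docker build -t myapp:latest . followed by docker run --rm myapp:latest.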
Volumes and Persistence
By default, containers are ephemeral — remove a container, and its filesystem is gone.
That’s fine for testing, but not for databases, media servers, or anything with state.
Volumes (and bind mounts) solve this by storing data outside the container’s lifecycle,
so you can rebuild or upgrade without losing important files.
- Named Volumes → managed by Docker (/var/lib/docker/volumes/...)
- Bind Mounts → link a host path into the container (/home/user/data:/app/data)
💡 Tip: Use volumes for long-term data (databases, media libraries). Use bind mounts when you need direct access from the host.
Cheatsheet: Volumes and Persistence
# === Managing Volumes ===
# List all volumes
docker volume ls
# Create a named volume
docker volume create mydata
# Inspect a volume (shows host path, usage)
docker volume inspect mydata
# Remove a volume (careful: data will be deleted)
docker volume rm mydata
# === Using Volumes in Containers ===
# Run postgres with a named volume
docker run -d --name pgtest \
-e POSTGRES_PASSWORD=secret \
-v pgdata:/var/lib/postgresql/data \
postgres:15
# Run nginx with a host bind mount
docker run -d --name webtest \
-v /home/user/html:/usr/share/nginx/html:ro \
-p 8080:80 \
nginx:latest
# Run jellyfin with config + media mounted
docker run -d --name jellyfin \
-v jellyfin-config:/config \
-v /mnt/media:/media \
-p 8096:8096 \
jellyfin/jellyfin
# === Copying and Backing Up Data ===
# Copy data from a volume to host (via a helper container)
docker run --rm -v pgdata:/data -v $(pwd):/backup busybox tar czf /backup/pgdata.tar.gz -C /data .
# Copy files between host and container
docker cp <container_name>:/path/in/container /path/on/host
docker cp /path/on/host <container_name>:/path/in/container
# === Cleanup ===
# Remove all unused volumes (dangling, not attached to containers)
docker volume prune
Notes:
- Know each image’s data paths (/var/lib/postgresql/data for Postgres, /config for Jellyfin).
- Use docker cp for quick one-off transfers; for backups, use tar inside a helper container.
- docker volume prune only deletes unused volumes; it’s safe, but double-check before running in production.
- Document your docker run commands or docker-compose.yml with explicit volume mounts so you never lose track of data locations.
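The tar-in-a-helper-container backup shown above is worth wrapping in a function. Here is a sketch that only prints the command so you can review it before piping to sh; the volume and destination names are examples:

```shell
# backup_volume prints the helper-container command that tars a named volume.
# Dry-run by design: pipe the output to `sh` to actually perform the backup.
backup_volume() {
  vol="$1"
  dest="${2:-$(pwd)}"   # where the .tar.gz lands on the host
  echo "docker run --rm -v ${vol}:/data -v ${dest}:/backup busybox tar czf /backup/${vol}.tar.gz -C /data ."
}

backup_volume pgdata /tmp/backups
backup_volume jellyfin-config
```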
Networking
Networking determines how containers talk to each other and to the outside world.
By default, Docker puts containers on an isolated bridge network, but you can create your own or attach containers to the host network directly.
For homelabs, this often means setting up reverse proxies (like NGINX Proxy Manager or Traefik) so you can run many services behind a single IP with SSL.
Key network modes:
- Bridge (default): Containers get an internal IP, NAT-ed through the host.
- Custom bridge: Like bridge, but with user-defined DNS and container-to-container resolution.
- Host: Container shares the host network stack (fast, but less isolation; Linux only).
- Macvlan: Gives a container its own IP on your LAN (advanced homelab setups).
💡 Tip: For multi-container apps, always prefer a custom bridge network. Containers can reach each other by name, and it avoids random port collisions.
Cheatsheet: Networking
# === Inspecting Networks ===
# List all Docker networks
docker network ls
# Inspect a network (shows connected containers, subnets, drivers)
docker network inspect bridge
# === Connecting Containers ===
# Create a custom bridge network
docker network create mynet
# Run containers on the same custom network
docker run -d --name web --network mynet nginx
docker run -d --name app --network mynet busybox sleep 3600
# Connect an existing container to another network
docker network connect mynet <container_name>
# Disconnect a container from a network
docker network disconnect mynet <container_name>
# === Publishing Ports ===
# Map host port 8080 -> container port 80
docker run -d -p 8080:80 nginx
# Map multiple host ports to different container ports
docker run -d -p 8080:80 -p 8443:443 nginx
# === Advanced Networking ===
# Run a container with host networking (Linux only)
docker run -d --network host nginx
# Run a container with macvlan (container gets its own LAN IP)
docker network create -d macvlan \
--subnet=192.168.1.0/24 \
--gateway=192.168.1.1 \
-o parent=eth0 macnet
docker run -d --network macnet --ip 192.168.1.50 nginx
# === Reverse Proxies (Homelab Example) ===
# NGINX Proxy Manager example (manages SSL + hostnames)
docker run -d --name npm \
-v npm-data:/data \
-v npm-ssl:/etc/letsencrypt \
-p 80:80 -p 81:81 -p 443:443 \
jc21/nginx-proxy-manager
# Traefik example (auto-discovers containers by labels)
docker run -d --name traefik \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 80:80 -p 443:443 \
traefik:v2.10
# === Troubleshooting ===
# Show which ports a container is exposing/mapping
docker port <container_name>
# Test connectivity between containers
docker exec <container_name> ping <other_container_name>
docker exec <container_name> nslookup <other_container_name>
# Inspect network traffic/connections inside a container
docker exec <container_name> netstat -tlnp
Notes:
- When publishing ports with -p host:container, the host ports must be free; otherwise the container won’t start.
- --network host is simple but removes isolation and only works on Linux.
- docker port is a fast way to verify published ports without running inspect.
- Use ping and nslookup from inside a container to test service discovery on custom networks.
- netstat is invaluable for checking what ports a containerized service is really listening on.
- Compose files (docker-compose.yml) can define networks, making multi-service setups easier to reproduce.
Cleanup and Maintenance
Over time, Docker hosts collect cruft: old images, stopped containers, unused volumes, and dangling networks.
Left unchecked, these can eat gigabytes of space and slow down operations.
Docker provides commands to safely prune unused resources and check disk usage so your host stays lean and responsive.
💡 Tip: Always run a system usage check (docker system df) before pruning. It shows what’s taking up space and what will be affected.
Cheatsheet: Cleanup and Maintenance
# === System Usage Overview ===
# Show disk usage by images, containers, and volumes
docker system df
# Show detailed disk usage (per image/layer)
docker system df -v
# === Pruning Resources ===
# Remove stopped containers, unused networks, dangling images
docker system prune
# Prune EVERYTHING (containers, images, networks, volumes not in use)
docker system prune -a --volumes
# === Removing Containers ===
# Remove a stopped container
docker rm <container_name>
# Remove ALL stopped containers
docker container prune
# === Removing Images ===
# Remove a specific image
docker rmi <image_id>
# Remove dangling (unnamed) images
docker image prune
# Remove ALL unused images (careful!)
docker image prune -a
# === Removing Volumes ===
# List volumes
docker volume ls
# Remove a specific volume
docker volume rm <volume_name>
# Remove all unused volumes
docker volume prune
# === Removing Networks ===
# List networks
docker network ls
# Remove a specific network
docker network rm <network_name>
# Remove all unused networks
docker network prune
Notes:
- docker system df is your friend; always check usage before pruning.
- docker system prune is relatively safe (it won’t touch volumes or images in use).
- Adding -a --volumes will nuke all unused images and volumes; only use it if you’re sure.
- docker container prune only removes stopped containers, not running ones.
- docker image prune removes dangling layers (<none> tags); run it often to reclaim space.
- Automate cleanup with cron or systemd timers, but keep backups of critical data.
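The cron/systemd suggestion can be sketched as a small script. The run wrapper here is a hypothetical helper: with DRY_RUN=1 it only prints each command, so you can preview exactly what a scheduled job would do before letting it loose:

```shell
# run executes its arguments, or just prints them when DRY_RUN=1.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$@"
  else
    "$@"
  fi
}

# weekly_cleanup prunes stopped containers, dangling images, and unused
# networks. Volumes are deliberately left alone: prune those by hand.
weekly_cleanup() {
  run docker container prune -f
  run docker image prune -f
  run docker network prune -f
}

# Preview what the job would run:
DRY_RUN=1 weekly_cleanup
```

Once the preview looks right, drop the script into /etc/cron.weekly/ or wire it to a systemd timer.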
Copying Data In and Out
Sometimes you need to move files between the host and a container — for quick config edits, pulling logs, or testing scripts.
For long-term persistence, volumes are the right answer. But when you just need to push/pull files on the fly, docker cp
and docker exec
are your tools.
💡 Tip: Always verify container names/IDs first:
docker ps --format "table {{.Names}}\t{{.ID}}\t{{.Status}}"
Cheatsheet: Copying Data In and Out
# === Host → Container ===
# Copy a file from host to container
docker cp ./config.yml <container_name>:/etc/app/config.yml
# Copy a directory from host to container
docker cp ./scripts <container_name>:/usr/local/bin/
# === Container → Host ===
# Copy a file from container to host
docker cp <container_name>:/var/log/app.log ./app.log
# Copy a directory from container to host
docker cp <container_name>:/data ./backup-data
# === Container ↔ Container (via host) ===
# Copy from one container to another (two-step)
docker cp <container_A>:/file ./file
docker cp ./file <container_B>:/file
# === Tar + Exec Trick (large or many files) ===
# Export a directory as tar and extract on host
docker exec <container_name> tar czf - /data > data.tar.gz
# Import tar archive into container
cat data.tar.gz | docker exec -i <container_name> tar xzf - -C /
Notes:
- docker cp works like scp: source → destination. Paths must be absolute inside the container.
- Files copied out of a container may be owned by root; you may need sudo or careful permission mapping.
- Use docker cp for small files; for big or active data, use volumes or bind mounts instead.
- Consider docker-compose volumes to manage data consistently across containers.
Exporting and Importing
For migrations and backups, Docker lets you either export containers or save/load images.
- Export/Import works on containers: it snapshots the container’s filesystem (no history, no image metadata).
- Save/Load works on images: it preserves layers, tags, and metadata, making it better for moving images between hosts.
Use export/import when you want a quick one-off copy of a container’s state.
Use save/load when you want to migrate images or move them to another registry/host.
💡 Tip: For long-term data, back up volumes separately — container exports won’t capture external volumes.
Cheatsheet: Exporting and Importing
# === Exporting and Importing Containers ===
# Export a container’s filesystem as a tar archive
docker export <container_name> > container.tar
# Import an exported container as a new image
docker import container.tar newimage:latest
# Import directly from a URL or stdin
curl http://example.com/container.tar | docker import - myimage:tag
# === Saving and Loading Images ===
# Save an image (with all layers and tags) to a tar archive
docker save -o myimage.tar <image_name>:<tag>
# Load an image from a tar archive
docker load -i myimage.tar
# === Practical Example: Migrating Between Hosts ===
# On source host: save an image
docker save -o nginx.tar nginx:1.27
# Copy file to new host (scp, rsync, USB, etc.)
scp nginx.tar user@newhost:/tmp/
# On destination host: load image
docker load -i /tmp/nginx.tar
# Run it again on the new host
docker run -d -p 8080:80 nginx:1.27
Notes:
- docker export flattens a container into a single tarball; history, environment variables, and build metadata are lost.
- docker import creates a new image from that tarball, but without tags/history it’s more like a snapshot.
- docker save preserves tags, layers, and history; use this for migrating or archiving images.
- docker load restores an image exactly as it was, ready to run again.
- Volumes are not included in exports or saves; back them up separately (via docker volume or tar tricks).
- For whole-stack migrations, docker compose plus volume backups is usually the better path.
Diagnostics and Troubleshooting
When containers misbehave, Docker gives you tools to peek under the hood and figure out what’s going wrong.
Whether it’s a crash loop, a port conflict, or networking issues, these commands help you diagnose problems without guesswork.
💡 Tip: Start with docker ps -a and docker logs <container> — they solve 80% of issues before you dive deeper.
Cheatsheet: Diagnostics and Troubleshooting
# === Container Health and Status ===
# Show container status and restart counts
docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.State}}\t{{.Ports}}"
# Inspect restart count (useful for crash loops)
docker inspect -f '{{.RestartCount}}' <container_name>
# Inspect container healthcheck status
docker inspect -f '{{.State.Health.Status}}' <container_name>
# === Logs and Events ===
# Show container logs
docker logs <container_name>
# Follow logs in real-time
docker logs -f <container_name>
# Show Docker daemon events (global activity)
docker events
# === Inspecting Deep Details ===
# Inspect container details (JSON: env vars, mounts, networks)
docker inspect <container_name>
# Inspect a specific property (e.g., IP address)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>
# List environment variables for a container
docker inspect -f '{{.Config.Env}}' <container_name>
# === Resource and Process Debugging ===
# Show live resource usage (CPU, memory, I/O)
docker stats <container_name>
# Show processes running inside a container
docker top <container_name>
# Run ps inside a container (for more detail)
docker exec <container_name> ps aux
# === Networking Issues ===
# Show which ports a container is exposing
docker port <container_name>
# Test connectivity to another container
docker exec <container_name> ping <other_container_name>
# DNS resolution test inside container
docker exec <container_name> nslookup <other_container_name>
Notes:
- docker ps -a shows stopped containers; always check here if something exited unexpectedly.
- Watch the restart count (docker inspect -f '{{.RestartCount}}'). If it’s climbing, check logs for the root cause.
- Healthcheck status reports healthy, unhealthy, or starting.
- docker events is noisy but invaluable for catching container start/stop activity in real time.
- Use docker inspect with Go templates to extract just what you need (IP, env vars, restart count).
- docker stats helps spot containers hogging CPU or memory.
- For port conflicts (e.g. -p 80:80), check if the host already has something bound to that port.
- For Compose stacks, docker compose ps and docker compose logs give you a stack-wide view.
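One caveat on the healthcheck queries above: {{.State.Health.Status}} is only populated if the image or run command actually defines a healthcheck. A Dockerfile sketch (the curl endpoint is illustrative, and curl must exist in the image):

```dockerfile
FROM nginx:latest
# Mark the container unhealthy if the web root stops answering.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
  CMD curl -f http://localhost/ || exit 1
```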
Docker Compose Basics

Managing one container at a time works, but most real apps involve multiple services — databases, web servers, caching layers, proxies.
Docker Compose makes this practical by letting you define and run multi-container applications with a single YAML file.
Instead of juggling long docker run ... commands for each container, you declare them once in docker-compose.yml and bring the whole stack up with one command: docker compose up -d.
With Compose, adding a database, cache, or proxy isn’t extra terminal clutter — it’s just another service block in the YAML.
You’ll often see guides and project docs include a docker-compose.yml
with multiple containers preconfigured, because it’s the easiest way to share and reproduce complex setups.
How it works:
- You define services (containers) in docker-compose.yml.
- Compose automatically creates a dedicated network so containers can talk by name.
- Volumes and environment variables are declared in one place for consistency.
- Stacks can be version-controlled and shared across machines.
Example: Single Service
version: "3.9"
services:
web:
image: nginx:latest
ports:
- "8080:80"
Bring it up:
docker compose up -d
Where to put docker-compose.yml:
Save the file in a project folder (for example ~/codelab/infra-stacks/myapp/).
The working directory is important: when you run docker compose up, Compose looks for docker-compose.yml in the current directory by default.
If your YAML file lives elsewhere, you can point Compose at it with -f:
mkdir -p ~/codelab/infra-stacks/myapp
cd ~/codelab/infra-stacks/myapp
nano docker-compose.yml # paste the config here
# Run from the same directory
docker compose up -d
# Or specify the file explicitly
docker compose -f ~/codelab/infra-stacks/myapp/docker-compose.yml up -d
Example: Multi-Service (Web + Database)
version: "3.9"
services:
db:
image: postgres:15
environment:
POSTGRES_PASSWORD: secret
volumes:
- pgdata:/var/lib/postgresql/data
web:
image: nginx:latest
ports:
- "8080:80"
depends_on:
- db
volumes:
pgdata:
.env Files for Configuration
Compose automatically reads a .env file in the same directory as docker-compose.yml.
This is a better way to handle secrets and environment settings:
# .env file
POSTGRES_PASSWORD=secret
DB_VERSION=15
Then you can reference the env variables in docker-compose.yml:
version: "3.9"
services:
db:
image: postgres:${DB_VERSION}
environment:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
Essential Compose Commands
# === Essential Compose Commands ===
# Bring stack up in background
docker compose up -d
# Stop and remove containers (keeps volumes)
docker compose down
# Stop, remove containers AND volumes (destructive!)
docker compose down -v
# View logs from all services
docker compose logs -f
# Restart a specific service
docker compose restart web
# Rebuild images before starting
docker compose up -d --build
Notes:
- Compose integrates with the same Docker engine — no separate install needed on modern Docker (v20+).
- Older systems may need the standalone docker-compose binary.
- For homelabs, Compose makes backups, restores, and migrations easier (just copy the YAML + volumes).
- You can scale services with one command: docker compose up --scale web=3 -d. This runs three copies of the web service (web_1, web_2, web_3) using the same definition from your Compose file. Other containers on the same network can talk to web and Docker will spread requests across them, though exposing ports usually requires a proxy to handle load-balancing.
- Always pin image tags in Compose files (e.g. postgres:15.2); don’t rely on latest.
- Track docker-compose.yml in Git, but exclude volume data — it belongs on disk, not in version control.
Docker Desktop
For macOS and Windows, Docker Desktop is the simplest way to get started.
It bundles the Docker Engine, CLI, and a GUI dashboard in one installer.

Notes:
- Docker Desktop is perfect for local dev/testing.
- On Linux, install Docker Engine directly instead (Desktop isn’t needed).
- Resource usage can be heavy — tweak CPU/RAM under Preferences → Resources.
- Once comfortable, you can move stacks to a server (like gir.darkstar.home) using docker compose or Portainer.
Portainer Setup
Portainer is a lightweight web UI that makes Docker approachable — it’s the first thing I install whenever I set up Docker. Think of it as a control panel: manage containers, stacks, networks, and volumes without memorizing CLI flags.

Install Portainer CE:
docker volume create portainer_data
docker run -d \
-p 8000:8000 -p 9000:9000 -p 9443:9443 \
--name=portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce:latest
First Steps:
- Visit https://<host-ip>:9443 in a browser.
- Create the admin user.
- Connect to the local environment.
Deploying a Stack:
- In Portainer → Stacks → Add Stack.
- Paste your docker-compose.yml (e.g. the nginx + postgres example).
- Hit Deploy the stack.



Pro-Tip: Portainer shows the YAML it generates. Copy that into Git (like ~/codelab/infra-stacks/) to version your setup.
Dockerfiles and making your own containers
It’s one thing to understand that “a container is a little computer inside your computer” — but it really clicks once you build one yourself. You don’t need Kubernetes or a whole stack of services to see how it works. It can be as simple as combining a tiny script and a Dockerfile.
The idea is simple:
- A script or program you want to run.
- A Dockerfile — think of it as a lightweight provisioning script. Each line makes a small, understandable change to a base OS image:
  - pick a starting point (FROM python:3.12-slim)
  - copy in files (COPY smurfify.py .)
  - install dependencies (RUN pip install -r requirements.txt or apt-get install curl)
  - set the default command (ENTRYPOINT [...])
- docker build to turn that recipe into an image.
- docker run to spin up a container from the image.
That’s it: script → image → container. Once you’ve seen that loop, the rest of Docker makes sense.
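Assembled, the steps above become a complete Dockerfile for the smurfify.py example (requirements.txt is assumed to exist alongside the script):

```dockerfile
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer caches across code changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the script and make it the container's entrypoint
COPY smurfify.py .
ENTRYPOINT ["python", "smurfify.py"]
```

docker build -t smurfify . followed by docker run --rm smurfify completes the script → image → container loop.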
The beauty of this model is that you ship a minimal system tailored to your code, instead of hoping it behaves on whatever messy production environment it lands in. The old excuse “it works on my machine” becomes a feature — because with Docker you’re packaging your machine with the code. Unlike a full virtual machine, containers don’t need their own kernel or OS image; they just add the few layers your app requires. That makes them lightweight, portable, and consistent no matter where they run.
If you’d like to see this process step by step, I put together a companion guide where we take a small script (smurfify.py) and wrap it in Docker:
📚 Essential Documentation & Resources
Official Documentation
- Docker Docs: https://docs.docker.com/
- Dockerfile Reference: https://docs.docker.com/engine/reference/builder/
- Docker Compose: https://docs.docker.com/compose/
- Docker Hub: https://hub.docker.com/
Best Practices Guides
- Docker Best Practices: https://docs.docker.com/develop/dev-best-practices/
- Security Best Practices: https://docs.docker.com/engine/security/
- Production Deployment: https://docs.docker.com/engine/swarm/
Community Resources
- Docker Community: https://www.docker.com/community/
- Stack Overflow Docker Tag: https://stackoverflow.com/questions/tagged/docker
- r/docker: https://reddit.com/r/docker
Conclusion

I hope you’ve found this little Docker Grimoire useful. With a few concepts, the commands, and some examples, you’ll be dockering with the best of them in no time.
If you have any tips, tricks, or feedback please feel free to reach out: feedback@adminjitsu.com