Intro

Docker has become one of those critical technologies powering everything from scrappy homelabs to enterprise infrastructure. It can feel intimidating at first—so many new terms, layers, and tools—but armed with a good cheatsheet and the right explanations, Docker turns out to be surprisingly fun and almost magically powerful.

With it, you can host an entire network of services on your own machine: media servers like Jellyfin and Immich, full LAMP stacks for web development, databases, your own Git server, or even your own custom scripts, packaged and deployed cleanly.

In this guide we’ll compile commands, concepts, official docs, and practical examples into one handy resource. Docker is less confusing than it seems, and with a few best practices it can become one of the most reliable and versatile parts of your network.

What is Docker?

At its core, Docker is a way to run little computers inside your computer.

Each Docker container is its own lightweight environment, with just enough OS, libraries, and binaries to do one job well—whether that’s serving a web app, crunching data, or running a database. Instead of cluttering your base OS with packages and configs, you pull a prebuilt image and run it in isolation.

Technically speaking, Docker builds on core Linux kernel features like namespaces, cgroups, and union (overlay) filesystems to provide process isolation and resource management. Unlike traditional virtualization—which uses a hypervisor to run full guest operating systems on abstracted hardware—Docker employs OS-level virtualization (containerization), sharing the host kernel across containers and making it far more lightweight and efficient.
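
You can see that isolation firsthand. A quick sketch, assuming the tiny alpine image from Docker Hub:

# In its own PID namespace, the container sees only its own processes,
# and its main process runs as PID 1
docker run --rm alpine ps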

A bit of history:

  • Docker was created in 2013 by Solomon Hykes and the team at dotCloud (a PaaS startup), and has since evolved into a massive open-source ecosystem.
  • It’s primarily written in Go, with CLI tooling and APIs that work across Linux, Windows, and macOS.
  • In just over a decade, it’s become a near-ubiquitous layer in modern development and operations: used in CI/CD pipelines, cloud deployments, and, increasingly, personal projects and homelabs.

Why is it so popular? A few key benefits:

  • Isolation without the overhead of full VMs.
  • Consistency across environments: the same image runs on your laptop, server, or in the cloud.
  • Portability and sharing: Docker Hub and registries make distributing software trivial.
  • Resource efficiency: containers spin up in seconds and can be packed densely on a single host.
  • Flexibility: from one-off experiments to long-lived production services.

Today, Docker shows up everywhere—from billion-dollar cloud platforms to basement labs. On my own server gir.darkstar.home, I run over twenty containers: Jellyfin, Portainer, Dashy, Uptime Kuma, Gitea, JupyterLab and more. It keeps everything clean, manageable, and easy to back up. I kind of love Docker!

In the sections ahead, we’ll start with a practical cheatsheet of Docker commands, then build toward hands-on tutorials: installing Portainer, containerizing a script, updating a service without losing data, and even migrating containers to a new host.


Usage: The Docker CLI Cheatsheet

a wise old wizard. video game style pixel art
Unlike stage magicians, Unix wizards are happy to share their tricks!

Even if you lean on Docker Desktop, Portainer, or other GUIs, the command line is always there.
It’s the universal interface: every container operation can be expressed at the CLI,
whether you’re hacking on a laptop, administering a headless server, or scripting a CI/CD pipeline.

If you can run it from the CLI, you can automate it, debug it, or wrap it in a script later.
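
As a tiny sketch of that scripting power, docker ps -q emits bare container IDs that feed straight into other commands:

# Gracefully stop every running container in one line
docker stop $(docker ps -q)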

This cheatsheet is organized into task-oriented sections, each with:

  • A quick explanation of why the task matters
  • A collapsible block of relevant commands
  • Footnotes, usage caveats, and more

Commands may appear in more than one section if they’re useful for that workflow.
By the end, you’ll have a mental map of what to reach for, whether you’re checking logs, moving files,
or pruning old containers that are eating up disk space.


Checking What’s Running

The first step in container management is knowing what’s happening right now.
Which containers are up, which have stopped, and how much load they’re putting on your system?
This section gives you the heartbeat of your Docker host — status, ports, resource usage, and quick inspections.

Cheatsheet: Checking What’s Running
# List running containers (default view)
docker ps

# Show ALL containers (running + stopped)
docker ps -a

# Quiet mode (IDs only) — handy for scripting
docker ps -q

# Filter example: show only stopped/exited containers
docker ps -f "status=exited"

# Custom table view: names, status, ports
docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"

# Live resource usage (CPU, MEM, I/O, PIDs)
docker stats

# One-shot resource snapshot (no stream)
docker stats --no-stream

# Show port mappings for a container
docker port <container_name>

# Show processes running inside a container
docker top <container_name>

# Inspect container details (JSON: state, mounts, networks)
docker inspect <container_name>

# Quick: container IP address
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>

# Quick: restart count (useful for crash loops)
docker inspect -f '{{.RestartCount}}' <container_name>

Notes:

  • docker ps shows only running containers; add -a to include stopped/exited ones.
  • Use --format for clean custom output; in PowerShell, wrap the template in single quotes.
  • docker stats streams by default — use --no-stream for scripts or snapshots.
  • docker port only shows published (host-exposed) ports; for internal-only, check the container network.
  • docker top is a quick sanity check for runaway processes without attaching a shell.
  • For storage usage (docker system df), see the Cleanup & Maintenance section.


Starting, Stopping, and Restarting

Once you’ve identified a container, the next step is controlling its lifecycle.
Containers aren’t meant to be long-lived processes in the same sense as system daemons; they’re designed to start fast, stop cleanly, and be replaced when needed.
Knowing how to start, stop, restart, and remove containers lets you manage your stack gracefully instead of reaching for host-level reboots or kill -9.

Cheatsheet: Starting, Stopping, and Restarting

💡 Tip: If you don’t know the container’s name or ID, list them first:

docker ps -a --format "table {{.Names}}\t{{.ID}}\t{{.Status}}"

# Start a stopped container
docker start <container_name>

# Stop a running container (graceful SIGTERM, then SIGKILL after timeout)
docker stop <container_name>

# Force-stop immediately (no grace period)
docker kill <container_name>

# Restart a container (stop + start)
docker restart <container_name>

# Pause/resume all processes in a container
docker pause <container_name>
docker unpause <container_name>

# Remove (delete) a container entirely
docker rm <container_name>

# Remove a container that’s still running (force)
docker rm -f <container_name>

# Automatically remove a container after it exits (good for throwaway/test runs)
docker run --rm <image>

# Start a new container in the background (detached mode)
docker run -d --name <name> <image>

# Start a new container and attach to it interactively
docker run -it --name <name> <image> /bin/bash

Notes:

  • docker ps -a shows both running and stopped containers — use this before lifecycle actions.
  • docker stop tries a graceful shutdown (default 10s timeout) before sending SIGKILL.
  • Use docker kill if you need immediate termination, but it skips cleanup inside the container.
  • docker rm deletes only the container metadata; images and volumes remain until explicitly removed.
  • docker rm -f is a shortcut for “stop + remove” but can interrupt clean shutdowns.
  • docker pause is niche but useful when you want to freeze a container’s CPU/memory usage without stopping it.
  • --rm is great for temporary containers (debug shells, quick tests) since they clean up after themselves.
  • For production services, prefer docker run -d with a name and mapped ports so it persists and can be restarted later (see the sketch below).
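
To make that last note concrete, a hedged sketch of a long-lived service run; --restart unless-stopped tells Docker to bring the container back after crashes or host reboots (unless you stopped it yourself):

# A persistent service: named, port-mapped, auto-restarting
docker run -d --name web \
  --restart unless-stopped \
  -p 8080:80 \
  nginx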


Watching What Containers Are Doing

Once a container is running, you’ll often need to peek inside.
Maybe it’s to check logs for errors, monitor activity in real time, or open a shell to troubleshoot directly.
Think of this as your observation toolkit — the ways to see what’s happening under the hood without tearing the container apart.

Cheatsheet: Watching What Containers Are Doing

💡 Tip: First, list container names/IDs so you know what to target:

docker ps --format "table {{.Names}}\t{{.ID}}\t{{.Status}}"

# === Logs ===

# Follow logs in real time (like `tail -f`)
docker logs -f <container_name>

# Show the last 50 log lines
docker logs --tail 50 <container_name>

# Show logs since a specific timestamp (ISO 8601 or "1h" for last hour)
docker logs --since 1h <container_name>


# === Interactive Access ===

# Open a shell inside a running container (bash if available)
docker exec -it <container_name> /bin/bash

# Fallback: sh is more universally present
docker exec -it <container_name> sh

# Run a one-off command inside a container
docker exec -it <container_name> <command>


# === Resource Monitoring ===

# Live resource usage (CPU, memory, network, disk I/O) for one container
docker stats <container_name>

# Show resource usage for all running containers
docker stats


# === Process Inspection ===

# Show processes running inside a container
docker exec <container_name> ps aux


# === File System Inspection ===

# Browse container filesystem from outside
docker exec <container_name> ls -la /app

# Search for config files inside container
docker exec <container_name> find /etc -name "*.conf"


# === Attach to Main Process ===

# Attach directly to the container’s main process (stdout/stderr)
docker attach <container_name>

# Detach safely with CTRL-p + CTRL-q (not CTRL-c!)
# CTRL-c will usually stop the container

Notes:

  • docker logs only shows what the container’s main process writes to stdout/stderr. For apps writing to files, use docker exec (see the sketch after these notes).
  • Combine --tail and --since to make logs manageable for long-running services.
  • docker exec is the safest way to explore — it runs a new process inside without disturbing the main one.
  • docker stats is great for spotting runaway containers eating CPU or memory. Add --no-stream for a single snapshot.
  • For quick process checks, docker top <container> also works, but docker exec ... ps aux gives more familiar detail.
  • File browsing (ls, find) via docker exec is invaluable for checking mounts, configs, or missing files.
  • docker attach ties you to the main process — only use if you know what you’re doing, and remember CTRL-p CTRL-q to detach without killing it.
  • With Docker Compose, docker compose logs -f can give you a unified view across services.
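
For the file-logging case in the first note, a minimal sketch (the path /var/log/app.log is a hypothetical placeholder; check your app’s docs for its real log location):

# Tail a log file the app writes inside the container
docker exec -it <container_name> tail -f /var/log/app.log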


Images and Building

Images are the blueprints for containers — the templates you pull, inspect, and build from.
They define what’s inside: the OS layer, libraries, binaries, and startup command.
With Docker you don’t build everything from scratch — you usually pull images from a registry and then run or customize them.

If no registry is specified, Docker defaults to Docker Hub.

Cheatsheet: Images and Building

💡 Tip: Use the newer docker image subcommands (docker image ls, docker image rm) or the older top-level shortcuts (docker images, docker rmi). Both work, but the image namespace is more explicit.

# === Discovering and Pulling Images ===

# Search for images on Docker Hub
docker search nginx

# Pull the canonical test image
docker pull hello-world

# Run it to verify Docker works
docker run hello-world

# Pull the latest version of nginx from Docker Hub
docker pull nginx

# Pull a specific version/tag of nginx
docker pull nginx:1.27

# Pull from GitHub Container Registry
docker pull ghcr.io/linuxserver/jellyfin:latest

# Pull from Quay.io
docker pull quay.io/coreos/etcd:latest


# === Listing and Inspecting ===

# List all images on your system
docker images

# Show detailed metadata for an image (JSON)
docker inspect nginx

# Show image history (layer breakdown)
docker history nginx


# === Tagging and Naming ===

# Tag an image with a new name (useful for pushing)
docker tag nginx myrepo/mynginx:latest

# Remove one or more images
docker rmi nginx


# === Building Custom Images ===

# Build an image from a Dockerfile in the current directory
docker build -t myapp:latest .

# Build using a custom Dockerfile
docker build -f Dockerfile.dev -t myapp:dev .

# Build without using the cache
docker build --no-cache -t myapp:test .


# === Cleaning Up ===

# Remove dangling (unused) images
docker image prune

# Remove ALL unused images (dangerous if you need old ones)
docker image prune -a

Notes:

  • Finding images: Docker Hub is the default registry. If you run docker pull nginx, it comes from hub.docker.com/library/nginx.
  • hello-world is the standard first test — it prints a confirmation message if Docker is working correctly.
  • Use official images when possible (they’re verified and maintained). Community images can be great, but check the Dockerfile and last update date.
  • Pin to a specific tag (e.g. postgres:15.2) — don’t rely on latest in production.
  • docker history helps spot bloated layers when an image is huge.
  • docker tag doesn’t duplicate data — it just adds a new label for an existing image.
  • Use docker rmi to clean up, but remember: if a container still depends on the image, it won’t be removed.
  • Keep your Dockerfiles minimal. Each RUN command adds a new layer and increases size (see the sketch after these notes).
  • docker image prune -a removes all unused images — only run if you’re sure what’s safe.
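
To illustrate the layering note above: chaining commands into a single RUN produces one layer and lets you remove package caches before the layer is committed. A Dockerfile sketch (the package is illustrative):

# One layer instead of three; the apt cache is removed before the layer is saved
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl && \
    rm -rf /var/lib/apt/lists/*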


Volumes and Persistence

By default, containers are ephemeral — remove a container, and its filesystem is gone.
That’s fine for testing, but not for databases, media servers, or anything with state.
Volumes (and bind mounts) solve this by storing data outside the container’s lifecycle,
so you can rebuild or upgrade without losing important files.

  • Named Volumes → managed by Docker (/var/lib/docker/volumes/...)
  • Bind Mounts → link a host path into the container (/home/user/data:/app/data)

Cheatsheet: Volumes and Persistence

💡 Tip: Use volumes for long-term data (databases, media libraries). Use bind mounts when you need direct access from the host.

# === Managing Volumes ===

# List all volumes
docker volume ls

# Create a named volume
docker volume create mydata

# Inspect a volume (shows host path, usage)
docker volume inspect mydata

# Remove a volume (careful: data will be deleted)
docker volume rm mydata


# === Using Volumes in Containers ===

# Run postgres with a named volume
docker run -d --name pgtest \
  -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data \
  postgres:15

# Run nginx with a host bind mount
docker run -d --name webtest \
  -v /home/user/html:/usr/share/nginx/html:ro \
  -p 8080:80 \
  nginx:latest

# Run jellyfin with config + media mounted
docker run -d --name jellyfin \
  -v jellyfin-config:/config \
  -v /mnt/media:/media \
  -p 8096:8096 \
  jellyfin/jellyfin


# === Copying and Backing Up Data ===

# Copy data from a volume to host (via a helper container)
docker run --rm -v pgdata:/data -v $(pwd):/backup busybox tar czf /backup/pgdata.tar.gz -C /data .
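
# Restore: unpack the backup archive into the volume (a sketch; assumes the pgdata volume already exists)
docker run --rm -v pgdata:/data -v $(pwd):/backup busybox tar xzf /backup/pgdata.tar.gz -C /data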

# Copy files between host and container
docker cp <container_name>:/path/in/container /path/on/host
docker cp /path/on/host <container_name>:/path/in/container


# === Cleanup ===

# Remove all unused volumes (dangling, not attached to containers)
docker volume prune

Notes:

  • Named volumes are best for portability — Docker manages them and they survive container removal.
  • Bind mounts are direct host paths; they’re flexible but can break if paths don’t exist or permissions mismatch.
  • Always mount stateful data directories to volumes (e.g. /var/lib/postgresql/data for Postgres, /config for Jellyfin) so the data outlives the container.
  • Use docker cp for quick one-off transfers; for backups, use tar inside a helper container.
  • docker volume prune only deletes unused volumes — safe, but double-check before running in production.
  • Pro tip: version-control your docker run or docker-compose.yml with explicit volume mounts so you never lose track of data locations.


Networking

Networking determines how containers talk to each other and to the outside world.
By default, Docker puts containers on an isolated bridge network, but you can create your own or attach containers to the host network directly.
For homelabs, this often means setting up reverse proxies (like NGINX Proxy Manager or Traefik) so you can run many services behind a single IP with SSL.

Key network modes:

  • Bridge (default): Containers get an internal IP, NAT-ed through the host.
  • Custom bridge: Like bridge, but with user-defined DNS and container-to-container resolution.
  • Host: Container shares the host network stack (fast, but less isolation; Linux only).
  • Macvlan: Gives a container its own IP on your LAN (advanced homelab setups).

Cheatsheet: Networking

💡 Tip: For multi-container apps, always prefer a custom bridge network. Containers can reach each other by name, and it avoids random port collisions.

# === Inspecting Networks ===

# List all Docker networks
docker network ls

# Inspect a network (shows connected containers, subnets, drivers)
docker network inspect bridge


# === Connecting Containers ===

# Create a custom bridge network
docker network create mynet

# Run containers on the same custom network
docker run -d --name web --network mynet nginx
docker run -d --name app --network mynet busybox sleep 3600

# Connect an existing container to another network
docker network connect mynet <container_name>

# Disconnect a container from a network
docker network disconnect mynet <container_name>


# === Publishing Ports ===

# Map host port 8080 -> container port 80
docker run -d -p 8080:80 nginx

# Map multiple host ports to different container ports
docker run -d -p 8080:80 -p 8443:443 nginx


# === Advanced Networking ===

# Run a container with host networking (Linux only)
docker run -d --network host nginx

# Run a container with macvlan (container gets its own LAN IP)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 macnet

docker run -d --network macnet --ip 192.168.1.50 nginx


# === Reverse Proxies (Homelab Example) ===

# NGINX Proxy Manager example (manages SSL + hostnames)
docker run -d --name npm \
  -v npm-data:/data \
  -v npm-ssl:/etc/letsencrypt \
  -p 80:80 -p 81:81 -p 443:443 \
  jc21/nginx-proxy-manager

# Traefik example (auto-discovers containers by labels)
docker run -d --name traefik \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 80:80 -p 443:443 \
  traefik:v2.10


# === Troubleshooting ===

# Show which ports a container is exposing/mapping
docker port <container_name>

# Test connectivity between containers
docker exec <container_name> ping <other_container_name>
docker exec <container_name> nslookup <other_container_name>

# Inspect network traffic/connections inside a container
docker exec <container_name> netstat -tlnp

Notes:

  • The default bridge network doesn’t allow DNS-based container name resolution; use a custom bridge for that.
  • With -p host:container, host ports must be free; otherwise the container won’t start.
  • --network host is simple but removes isolation and only works on Linux.
  • Macvlan lets a container appear as a separate device on your LAN (great for media servers that need their own IP).
  • Reverse proxies like NGINX Proxy Manager or Traefik let you host many services behind one IP/SSL cert — perfect for homelabs.
  • Always secure reverse proxies with SSL certs (Let’s Encrypt support is built into both NPM and Traefik).
  • docker port is a fast way to verify published ports without running inspect.
  • Use ping and nslookup from inside a container to test service discovery on custom networks.
  • netstat is invaluable for checking what ports a containerized service is really listening on (if the image lacks netstat, try ss).
  • Compose files (docker-compose.yml) can define networks, making multi-service setups easier to reproduce.


Cleanup and Maintenance

Over time, Docker hosts collect cruft: old images, stopped containers, unused volumes, and dangling networks.
Left unchecked, these can eat gigabytes of space and slow down operations.
Docker provides commands to safely prune unused resources and check disk usage so your host stays lean and responsive.

Cheatsheet: Cleanup and Maintenance

💡 Tip: Always run a system usage check (docker system df) before pruning. It shows what’s taking up space and what will be affected.

# === System Usage Overview ===

# Show disk usage by images, containers, and volumes
docker system df

# Show detailed disk usage (per image/layer)
docker system df -v


# === Pruning Resources ===

# Remove stopped containers, unused networks, dangling images
docker system prune

# Prune EVERYTHING (containers, images, networks, volumes not in use)
docker system prune -a --volumes


# === Removing Containers ===

# Remove a stopped container
docker rm <container_name>

# Remove ALL stopped containers
docker container prune


# === Removing Images ===

# Remove a specific image
docker rmi <image_id>

# Remove dangling (unnamed) images
docker image prune

# Remove ALL unused images (careful!)
docker image prune -a


# === Removing Volumes ===

# List volumes
docker volume ls

# Remove a specific volume
docker volume rm <volume_name>

# Remove all unused volumes
docker volume prune


# === Removing Networks ===

# List networks
docker network ls

# Remove a specific network
docker network rm <network_name>

# Remove all unused networks
docker network prune

Notes:

  • docker system df is your friend — always check usage before pruning.
  • docker system prune is relatively safe (it won’t touch volumes or images in use).
  • Adding -a --volumes will nuke all unused images and volumes — only use if you’re sure.
  • docker container prune only removes stopped containers, not running ones.
  • docker image prune removes dangling layers (<none> tags) — run it often to reclaim space.
  • Volumes are persistent by design; pruning them is irreversible. Back up important volumes before cleanup.
  • Networks usually don’t take much space, but unused ones can clutter output.
  • Pro tip: automate periodic cleanup with cron or systemd timers (sketched below), but keep backups of critical data.
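
For that automation tip, a sketch of a weekly crontab entry (crontab -e); the -f flag skips the confirmation prompt, so be sure you know what prune will remove:

# Every Sunday at 3 AM: remove stopped containers, dangling images, unused networks
0 3 * * 0 docker system prune -f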


Copying Data In and Out

Sometimes you need to move files between the host and a container — for quick config edits, pulling logs, or testing scripts.
For long-term persistence, volumes are the right answer. But when you just need to push/pull files on the fly, docker cp and docker exec are your tools.

Cheatsheet: Copying Data In and Out

💡 Tip: Always verify container names/IDs first:

docker ps --format "table {{.Names}}\t{{.ID}}\t{{.Status}}"

# === Host → Container ===

# Copy a file from host to container
docker cp ./config.yml <container_name>:/etc/app/config.yml

# Copy a directory from host to container
docker cp ./scripts <container_name>:/usr/local/bin/


# === Container → Host ===

# Copy a file from container to host
docker cp <container_name>:/var/log/app.log ./app.log

# Copy a directory from container to host
docker cp <container_name>:/data ./backup-data


# === Container ↔ Container (via host) ===

# Copy from one container to another (two-step)
docker cp <container_A>:/file ./file
docker cp ./file <container_B>:/file


# === Tar + Exec Trick (large or many files) ===

# Export a directory as tar and extract on host
docker exec <container_name> tar czf - /data > data.tar.gz

# Import tar archive into container
cat data.tar.gz | docker exec -i <container_name> tar xzf - -C /

Notes:

  • docker cp works like scp: source → destination. Container paths are resolved from the container’s root, so use absolute paths.
  • Permissions inside the container may differ — root-owned files may need sudo or careful mapping.
  • Copying large datasets is slower with docker cp; for big/active data, use volumes or bind mounts instead.
  • The tar+exec trick is faster for large directories or when preserving permissions is critical.
  • For multi-service stacks, prefer docker-compose volumes to manage data consistently across containers.


Exporting and Importing

For migrations and backups, Docker lets you either export containers or save/load images.

  • Export/Import works on containers: it snapshots the container’s filesystem (no history, no image metadata).
  • Save/Load works on images: it preserves layers, tags, and metadata, making it better for moving images between hosts.

Use export/import when you want a quick one-off copy of a container’s state.
Use save/load when you want to migrate images or move them to another registry/host.

Cheatsheet: Exporting and Importing

💡 Tip: For long-term data, back up volumes separately — container exports won’t capture external volumes.

# === Exporting and Importing Containers ===

# Export a container’s filesystem as a tar archive
docker export <container_name> > container.tar

# Import an exported container as a new image
docker import container.tar newimage:latest

# Import directly from a URL or stdin
curl http://example.com/container.tar | docker import - myimage:tag


# === Saving and Loading Images ===

# Save an image (with all layers and tags) to a tar archive
docker save -o myimage.tar <image_name>:<tag>

# Load an image from a tar archive
docker load -i myimage.tar


# === Practical Example: Migrating Between Hosts ===

# On source host: save an image
docker save -o nginx.tar nginx:1.27

# Copy file to new host (scp, rsync, USB, etc.)
scp nginx.tar user@newhost:/tmp/

# On destination host: load image
docker load -i /tmp/nginx.tar

# Run it again on the new host
docker run -d -p 8080:80 nginx:1.27

Notes:

  • docker export flattens a container into a single tarball — history, environment variables, and build metadata are lost.
  • docker import creates a new image from that tarball, but without tags/history it’s more like a snapshot.
  • docker save preserves tags, layers, and history — use this for migrating or archiving images.
  • docker load restores an image exactly as it was, ready to run again.
  • Neither export nor save captures volumes — always back those up separately (docker volume or tar tricks).
  • For multi-container stacks, docker compose plus volume backups is usually the better migration path.


Diagnostics and Troubleshooting

When containers misbehave, Docker gives you tools to peek under the hood and figure out what’s going wrong.
Whether it’s a crash loop, a port conflict, or networking issues, these commands help you diagnose problems without guesswork.

Cheatsheet: Diagnostics and Troubleshooting

💡 Tip: Start with docker ps -a and docker logs <container> — they solve 80% of issues before you dive deeper.

# === Container Health and Status ===

# Show container status and restart counts
docker ps -a --format "table {{.Names}}\t{{.Status}}\t{{.State}}\t{{.Ports}}"

# Inspect restart count (useful for crash loops)
docker inspect -f '{{.RestartCount}}' <container_name>

# Inspect container healthcheck status
docker inspect -f '{{.State.Health.Status}}' <container_name>


# === Logs and Events ===

# Show container logs
docker logs <container_name>

# Follow logs in real-time
docker logs -f <container_name>

# Show Docker daemon events (global activity)
docker events


# === Inspecting Deep Details ===

# Inspect container details (JSON: env vars, mounts, networks)
docker inspect <container_name>

# Inspect a specific property (e.g., IP address)
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container_name>

# List environment variables for a container
docker inspect -f '{{.Config.Env}}' <container_name>


# === Resource and Process Debugging ===

# Show live resource usage (CPU, memory, I/O)
docker stats <container_name>

# Show processes running inside a container
docker top <container_name>

# Run ps inside a container (for more detail)
docker exec <container_name> ps aux


# === Networking Issues ===

# Show which ports a container is exposing
docker port <container_name>

# Test connectivity to another container
docker exec <container_name> ping <other_container_name>

# DNS resolution test inside container
docker exec <container_name> nslookup <other_container_name>

Notes:

  • docker ps -a shows stopped containers — always check here if something exited unexpectedly.
  • Crash loops? Look at the restart count (docker inspect -f '{{.RestartCount}}'). If it’s climbing, check logs for the root cause.
  • Healthchecks (if defined in the image) will show as healthy, unhealthy, or starting (see the run-time sketch after these notes).
  • docker events is noisy but invaluable for catching container start/stop in real time.
  • Use docker inspect with Go templates to extract just what you need (IP, env vars, restart count).
  • docker stats helps spot containers hogging CPU or memory.
  • Networking issues are often DNS-related — custom networks allow name resolution, default bridge does not.
  • Port conflicts are common: if a container won’t start with -p 80:80, check if the host already has something bound to that port.
  • For multi-service apps, docker compose ps and docker compose logs give you a stack-wide view.
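
To make the healthcheck note concrete, a hedged run-time sketch; the --health-* flags are real docker run options, but the probe command assumes curl exists inside the image:

# Define a healthcheck at run time (many images bake one into their Dockerfile)
docker run -d --name web \
  --health-cmd "curl -f http://localhost/ || exit 1" \
  --health-interval 30s --health-timeout 5s --health-retries 3 \
  <image>

# Then poll its status
docker inspect -f '{{.State.Health.Status}}' web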


Docker Compose Basics

a pixel art lich wearing a crown and holding a staff with a green gem
Mastering Docker doesn't have to be a boss battle

Managing one container at a time works, but most real apps involve multiple services — databases, web servers, caching layers, proxies.

Docker Compose makes this practical by letting you define and run multi-container applications with a single YAML file.
Instead of juggling long docker run ... commands for each container, you declare them once in docker-compose.yml and bring the whole stack up with one command: docker compose up -d

With Compose, adding a database, cache, or proxy isn’t extra terminal clutter — it’s just another service block in the YAML.
You’ll often see guides and project docs include a docker-compose.yml with multiple containers preconfigured, because it’s the easiest way to share and reproduce complex setups.


How it works:

  • You define services (containers) in docker-compose.yml.
  • Compose automatically creates a dedicated network so containers can talk by name.
  • Volumes and environment variables are declared in one place for consistency.
  • Stacks can be version-controlled and shared across machines.

Example: Single Service

version: "3.9"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"

Bring it up:

docker compose up -d

Where to put docker-compose.yml:
Save the file in a project folder (for example ~/codelab/infra-stacks/myapp/).
The working directory is important: when you run docker compose up, Compose looks for docker-compose.yml in the current directory by default.
If your YAML file lives elsewhere, you can point Compose at it with -f:

mkdir -p ~/codelab/infra-stacks/myapp
cd ~/codelab/infra-stacks/myapp
nano docker-compose.yml   # paste the config here

# Run from the same directory
docker compose up -d

# Or specify the file explicitly
docker compose -f ~/codelab/infra-stacks/myapp/docker-compose.yml up -d

Example: Multi-Service (Web + Database)

version: "3.9"
services:
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - pgdata:/var/lib/postgresql/data

  web:
    image: nginx:latest
    ports:
      - "8080:80"
    depends_on:
      - db

volumes:
  pgdata:

.env Files for Configuration
Compose automatically reads a .env file in the same directory as docker-compose.yml.
This is a cleaner way to handle secrets and environment settings than hard-coding them in the YAML:

# .env file
POSTGRES_PASSWORD=secret
DB_VERSION=15

Then you can reference the env variables in docker-compose.yml:

version: "3.9"
services:
  db:
    image: postgres:${DB_VERSION}
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}

Essential Compose Commands

# === Essential Compose Commands ===

# Bring stack up in background
docker compose up -d

# Stop and remove containers (keeps volumes)
docker compose down

# Stop, remove containers AND volumes (destructive!)
docker compose down -v

# View logs from all services
docker compose logs -f

# Restart a specific service
docker compose restart web

# Rebuild images before starting
docker compose up -d --build

Notes:

  • Compose integrates with the same Docker engine — no separate install needed on modern Docker (v20+).

  • Older systems may need the standalone docker-compose binary.

  • For homelabs, Compose makes backups, restores, and migrations easier (just copy the YAML + volumes).

  • You can scale services with one command:

    docker compose up --scale web=3 -d
    

    This runs three copies of the web service using the same definition from your Compose file (Compose names them like <project>-web-1, <project>-web-2, <project>-web-3). Other containers on the same network can resolve web to any of the replicas, but the same host port can’t be published by multiple copies, so load-balancing across them usually requires a reverse proxy.

  • Always pin image tags in Compose files (e.g. postgres:15.2) — don’t rely on latest.

  • Track docker-compose.yml in Git, but exclude volume data — it belongs on disk, not in version control.


Docker Desktop

For macOS and Windows, Docker Desktop is the simplest way to get started.
It bundles the Docker Engine, CLI, and a GUI dashboard in one installer.

a screenshot of Docker Desktop displaying the User Interface
Docker Desktop is an easy but resource heavy way of managing your containers for those who prefer a GUI.

Notes:

  • Docker Desktop is perfect for local dev/testing.
  • On Linux, install Docker Engine directly instead (Desktop isn’t needed).
  • Resource usage can be heavy — tweak CPU/RAM under Preferences → Resources.
  • Once comfortable, you can move stacks to a server (like gir.darkstar.home) using docker compose or Portainer.

Portainer Setup

Portainer is a lightweight web UI that makes Docker approachable — it’s the first thing I install whenever I set up Docker. Think of it as a control panel: manage containers, stacks, networks, and volumes without memorizing CLI flags.

a screenshot of the Portainer User Interface
Portainer is a great web UI for managing your Docker environments

Install Portainer CE:

docker volume create portainer_data
docker run -d \
  -p 8000:8000 -p 9000:9000 -p 9443:9443 \
  --name=portainer \
  --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest

First Steps:

  1. Visit https://<host-ip>:9443 in a browser.
  2. Create the admin user.
  3. Connect to the local environment.

Deploying a Stack:

  • In Portainer → Stacks → Add Stack.
  • Paste your docker-compose.yml (e.g. the nginx + postgres example).
  • Hit Deploy the stack.
screenshot of portainer stack editor
You can compose and edit the YAML for your stack in a nice, polished UI

screenshot of portainer deployment in progress button
Click the deploy button and it either reports an error or builds the compose stack

screenshot of portainers details view of our new stack
Once deployed, you can manage lifecycle, view logs, exec a shell, etc.

Pro-Tip: Portainer shows the YAML it generates. Copy that into Git (like ~/codelab/infra-stacks/) to version your setup.


Dockerfiles and making your own containers

It’s one thing to understand that “a container is a little computer inside your computer” — but it really clicks once you build one yourself. You don’t need Kubernetes or a whole stack of services to see how it works. It can be as simple as combining a tiny script and a Dockerfile.

The idea is simple:

  • A script or program you want to run.
  • A Dockerfile — think of it as a lightweight provisioning script. Each line makes a small, understandable change to a base OS image:
    • pick a starting point (FROM python:3.12-slim)
    • copy in files (COPY smurfify.py .)
    • install dependencies (RUN pip install -r requirements.txt or apt-get install curl)
    • set the default command (ENTRYPOINT [...])
  • docker build to turn that recipe into an image.
  • docker run to spin up a container from the image.

That’s it: script → image → container. Once you’ve seen that loop, the rest of Docker makes sense.
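
Here’s roughly what that recipe looks like end to end — a minimal sketch built around the smurfify.py example (the script and its requirements.txt are placeholders from the companion guide):

# Dockerfile
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY smurfify.py .
ENTRYPOINT ["python", "smurfify.py"]

Build and run the loop:

docker build -t smurfify:latest .
docker run --rm smurfify:latest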

The beauty of this model is that you ship a minimal system tailored to your code, instead of hoping it behaves on whatever messy production environment it lands in. The old excuse “it works on my machine” becomes a feature — because with Docker you’re packaging your machine with the code. Unlike a full virtual machine, containers don’t need their own kernel or OS image; they just add the few layers your app requires. That makes them lightweight, portable, and consistent no matter where they run.
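
You can verify that kernel sharing yourself. A quick sketch, assuming the alpine image:

# On the host and in a container, uname reports the same kernel release
uname -r
docker run --rm alpine uname -r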

If you’d like to see this process step by step, I put together a companion guide where we take a small script (smurfify.py) and wrap it in Docker:

🧾 Dockerize a Python Script




Conclusion

a pixel art treasure chest
If you've made it this far, you deserve some loot and XP

I hope you’ve found this little Docker Grimoire useful. With a few concepts, the commands and some examples, you’ll be dockering with the best of them in no time.

If you have any tips, tricks, or feedback please feel free to reach out: feedback@adminjitsu.com