Intro
In Docker Grimoire we looked at the commands and concepts that make Docker work. But there is another part of the process that deserves special attention: actually creating a Docker container of your own. In this post we’ll take a simple Python script and package it into a full Docker container that is ready to share — covering the ins and outs of the process.

Docker is fantastic for running other people’s code without worrying about requirements, dependencies, or OS quirks — you just `docker run` and it works. But the real magic is when you flip that around: you can package your own scripts the same way. By building a container, you freeze your code together with just the environment it needs, so it runs the same on your laptop, your server, or someone else’s machine. No “works on my machine” headaches, no dependency juggling.
Understanding Dockerfiles
A Dockerfile is the blueprint for building an image. Think of it as a provisioning script: each line makes a small, repeatable change to a base OS image. When you run `docker build`, Docker executes those steps in order and layers the results into a new image.
The basic loop looks like this:
Dockerfile → docker build → Image → docker run → Container
A few important points:
- Layered builds: each instruction (`FROM`, `COPY`, `RUN`, etc.) becomes its own cached layer. If you rebuild and nothing in that layer changed, Docker reuses the cached result.
- Repeatable and portable: once you have a Dockerfile, you can rebuild the same environment anywhere, as many times as you like.
- Minimal by design: unlike a full VM image, Dockerfiles usually start from a tiny base (like `python:3.12-slim`) and add only what’s needed for your app.
This makes Dockerfiles both transparent (you can read exactly how an image is built) and reproducible (anyone else can build the same image from the same file).
Anatomy of a Dockerfile
Most real-world Dockerfiles use just a handful of instructions:
- `FROM` — pick a base image to build on (e.g. `FROM python:3.12-slim`)
- `WORKDIR` — set the working directory inside the container (`WORKDIR /app`)
- `COPY` — bring files from your project into the image (`COPY smurfify.py .`)
- `RUN` — run commands to install dependencies (`RUN pip install -r requirements.txt`, `RUN apt-get update && apt-get install -y curl`)
- `ENTRYPOINT` or `CMD` — define what should run by default when the container starts (`ENTRYPOINT ["python", "smurfify.py"]`)
💡 Tip: Docker executes instructions top to bottom. Group things that change less often (like installing system packages) near the top so they’re cached, and keep frequently changing code copies (`COPY . .`) near the bottom.
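For example, a hypothetical Dockerfile ordered for cache efficiency (the package and file names are illustrative, not part of the Smurfify project):

```dockerfile
FROM python:3.12-slim

# Rarely changes: system packages stay near the top so this layer stays cached
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Changes occasionally: install dependencies before copying the source
COPY requirements.txt .
RUN pip install -r requirements.txt

# Changes constantly: copy the code last so edits only invalidate this layer
COPY . .
```

With this ordering, editing your source only reruns the final `COPY`; the apt and pip layers come straight from cache.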
Together, these instructions create a transparent, reproducible recipe. You can read a Dockerfile and know exactly how an image was built — no mysteries.
Choosing a Base Image
Every Dockerfile begins with a `FROM` line. That single choice sets the foundation for everything else: which OS your container is built on, how big the image will be, and how much work you’ll need to do to get your app running.
Where do these base images come from?
- Docker Hub is the default registry. If you write `FROM python:3.12-slim`, Docker pulls it from hub.docker.com/library/python.
- You can browse tags on Docker Hub or check the source Dockerfiles (often maintained on GitHub).
- Many projects also publish to GitHub Container Registry (ghcr.io) or Quay.io, but Docker Hub is the most common.
Common base image patterns:
- `python:X.Y` → Full Debian-based image. Includes Python and lots of tools. Bigger, but very compatible.
- `python:X.Y-slim` → Stripped-down Debian variant. Smaller size, still good compatibility. A great default.
- `python:X.Y-alpine` → Based on Alpine Linux. Tiny, but can cause headaches when Python packages need C libraries.
- `ubuntu`, `debian`, `alpine` → Bare OS bases, good if you want to install tools yourself.
💡 Rule of thumb:
- Use slim for most apps — small, reliable, and widely supported.
- Use alpine if size really matters and you’re comfortable fixing build errors.
- Use the full image if you need lots of system packages or want fewer surprises.
Researching Images
Docker doesn’t have an `apt show` equivalent — most image research happens on the registry pages themselves. For Docker Hub images, that means checking pages like Python on Docker Hub.
Here’s what to look for: available tags, size comparisons, and the Image Variants section that explains trade-offs. For Python, you’ll see:
- `python:3.12` (1.02GB) — full Debian base with build tools
- `python:3.12-slim` (131MB) — stripped down but still compatible
- `python:3.12-alpine` (48MB) — tiny Alpine base, but can cause build issues
The same pattern applies to other languages:
- `node:20` vs `node:20-slim` vs `node:20-alpine`
- `golang:1.21` vs `golang:1.21-alpine`
The registry page tells you what each variant includes and when to use it.
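You can also compare variants locally once they’re pulled. These are standard Docker CLI commands:

```bash
# Pull two variants and compare their sizes side by side
docker pull python:3.12-slim
docker pull python:3.12-alpine
docker images python        # lists local images in the python repo, with sizes

# Inspect how a base image was assembled, layer by layer
docker history python:3.12-slim
```

`docker history` is especially handy for seeing what a variant actually contains before you commit to it.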
Now let’s put this into practice with a real example.

Dockerizing Smurfify
We’ll use my Smurfify script as the example. It’s lighthearted, but it’s a perfect candidate: no dependencies, easy to test, and fun to run. By the end, you’ll understand the workflow for containerizing any script or small app.
Project Setup
The first step is to create a project directory to hold everything related to the container. On my system, I keep container projects under `~/codelab/containers/`, so let’s make a home for Smurfify:
mkdir -p ~/codelab/containers/smurfify
cd ~/codelab/containers/smurfify
git init
I like to track these builds in Git so I can version changes and keep my Dockerfiles tidy. That means adding a `.gitignore` right away so you don’t end up committing build artifacts and temp files. Create a file named `.gitignore` with contents like this:
# Ignore Python cruft
__pycache__/
*.pyc
*.pyo
# Ignore Docker build artifacts
*.tar
*.log
# Ignore anything generated at runtime
.env
This keeps the repo focused on just your source and Dockerfile.
Now copy the script into the folder:
cp ~/codelab/bin/smurfify.py .
At this point you should have a clean project directory with Git initialized, a `.gitignore` in place, and your script ready to go.
For this example, we’ll base our container on `python:3.12-slim` — small enough to be efficient, but big enough to avoid the headaches Alpine images can cause when building Python dependencies.
Writing the Dockerfile
A Dockerfile is the recipe for building an image. For Smurfify we don’t need much: just Python and the script itself.
Create a file called `Dockerfile` inside `~/codelab/containers/smurfify/`:
# Start with a minimal Python base image
FROM python:3.12-slim
# Set a working directory inside the container
WORKDIR /app
# If you have Python requirements you would uncomment these lines
# COPY requirements.txt .
# RUN pip install -r requirements.txt
# Copy the script into the image
COPY smurfify.py .
# Run the script by default
ENTRYPOINT ["python", "smurfify.py"]
That’s the entire recipe:
- Start with Python
- Drop in your script
- Define the default command
If your script had external dependencies, you would uncomment the `requirements.txt` and `pip install` lines in the example above.
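For instance, a hypothetical `requirements.txt` with pinned versions might look like this (Smurfify itself needs none of these; the packages and version numbers are purely illustrative):

```text
requests==2.32.3
rich==13.7.1
```

Pinning exact versions keeps rebuilds reproducible, for the same reason you pin the base image tag.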
Note on `ENTRYPOINT` vs `CMD`: `ENTRYPOINT` locks in your command; arguments from `docker run` get appended to it. `CMD` is flexible; arguments completely replace it. For scripts that take input (like Smurfify), use `ENTRYPOINT` so users can run `docker run smurfify "hello world"` and it becomes `python smurfify.py "hello world"`. Use `CMD` for utility containers where users might want to run different commands entirely.
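A quick sketch of the difference in practice (the `CMD` variant shown here is hypothetical, included only for contrast):

```bash
# With ENTRYPOINT ["python", "smurfify.py"], run-time arguments are appended:
docker run smurfify "hello world"
# → executes: python smurfify.py "hello world"

# If the Dockerfile instead used CMD ["python", "smurfify.py"],
# any arguments would replace the whole command:
docker run smurfify "hello world"
# → tries to execute "hello world" as a program, and fails
docker run smurfify python smurfify.py "hello world"
# → you would have to spell out the full command yourself
```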
Now your project tree should look like this:
codelab/containers/smurfify/
├── .git/
├── .gitignore
├── smurfify.py
└── Dockerfile
Building the Image
Now that we’ve written a Dockerfile, the next step is to build it into an image — turning our text recipe into an actual runnable package.
From inside the project directory (`~/codelab/containers/smurfify/`), run:
docker build -t smurfify:latest .
Here’s what’s happening:
- `docker build` → tells Docker to create an image from a Dockerfile.
- `-t smurfify:latest` → tags the image with a name (`smurfify`) and version (`latest`); see the tagging example after this list.
- `.` → sets the build context to the current directory (everything here is available for `COPY`).
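An aside on tags: `latest` is just a convention, and nothing stops you from building with an explicit version and keeping `latest` as an alias. The version number here is illustrative:

```bash
# Build with an explicit version tag
docker build -t smurfify:1.0 .

# Point the latest tag at the same image
docker tag smurfify:1.0 smurfify:latest
```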
Docker will step through the file line by line: pulling the Python base image, creating `/app`, copying in `smurfify.py`, and wiring up the entrypoint. You’ll see each instruction logged as it runs, like this:
└─$ docker build -t smurfify:latest .
[+] Building 0.6s (8/8) FINISHED docker:default
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 282B 0.0s
=> [internal] load metadata for docker.io/library/python:3.12-slim 0.2s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [1/3] FROM docker.io/library/python:3.12-slim@sha256:d67a7b66b989ad6b6d6b10d428dcc5e0bfc3e5f88906e67d490c4d3d 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 33B 0.0s
=> CACHED [2/3] WORKDIR /app 0.0s
=> CACHED [3/3] COPY smurfify.py . 0.0s
=> exporting to image 0.0s
=> => exporting layers 0.0s
=> => writing image sha256:1b0752cd62e6222c9cb5ec2f84e9ad2b625baad4cac8ecb08070682c2e2cab0c 0.0s
=> => naming to docker.io/library/smurfify:latest
💡 Tip: Docker caches layers. If you rebuild after making small edits, only the changed steps rerun — everything else comes from cache. This makes rebuilds much faster once the base layers are downloaded.
Where did the image go?
After the build, you won’t see anything new in your project folder. That’s expected:
- Your project directory only holds the recipe (`Dockerfile`, source code).
- The resulting image is stored inside Docker’s local image store (by default under `/var/lib/docker/` on Linux).
To see it, list your local images:
docker images
# or
docker image ls
Example output (I use grep because I have a lot of images):
└─$ docker image ls | grep smurfify
smurfify latest 1b0752cd62e6 About an hour ago 144MB
So think of it this way: your working directory holds the recipe, and Docker’s local image store holds the finished meal.
Running the Container
With the image built, we can test it out and see it in action.
Basic run with an argument:
docker run smurfify "Help me, Obi-Wan Kenobi, you're my only hope"
# outputs: Smurf me, Obi-Wan Kenobi, you're my only smurf
Or pipe input with STDIN:
echo "All the world's a stage" | docker run -i smurfify
# outputs: All the smurf's a stage
Interactive Mode
Because our script has a REPL mode (it reads from stdin if no arguments are provided), we can drop into it directly using `-it`:
docker run -it smurfify
Example session:
smurfify.py 💙 - Type a line to smurf. Ctrl-D (or Ctrl-Z) to quit.
Damn the torpedoes, full speed ahead!
Smurf! the torpedoes, full speed ahead!
Ask not what your country can do for you -- ask what you can do for your country.
Smurf not what your country can do for you -- smurf what you can do for your country.
💡 What `-it` means:
- `-i` → interactive: keeps STDIN open so the container can accept your input.
- `-t` → TTY: allocates a pseudo-terminal, so you get proper line editing and prompts.
Together, `-it` makes containers behave like a program you can talk to in real time — perfect for scripts like Smurfify.
At this point, you can:
- Run it once with arguments.
- Pipe text into it.
- Use it interactively with `-it`.
That covers the full set of ways you’d normally interact with a CLI tool inside Docker. Of course, not every container is this simple — if you were shipping a whole web server or database, you’d likely be exposing ports, mounting volumes, or running multiple services. But the same core idea applies: image → container → run.
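For a taste of what that looks like, here is a sketch of running a hypothetical web server (the image, port, and paths are illustrative and not part of this project):

```bash
# Run a web server in the background with a published port and a bind mount
# -d: detach and keep it running; -p: map host port 8080 to container port 80
# -v: mount ./site read-only where nginx serves static files from
docker run -d --name web \
  -p 8080:80 \
  -v "$PWD/site:/usr/share/nginx/html:ro" \
  nginx:1.27-alpine
```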
Troubleshooting & Tips

💡 Our example is simple on purpose.
Real projects get messy — different languages, heavier dependencies, weird edge cases. The good news: the workflow doesn’t change. Here are a few common pitfalls that you might run into:
Common Gotchas
- Image too big? Try a `-slim` or `-alpine` variant. Just note Alpine can break builds when native extensions need glibc.
- Container exits immediately? Containers stop when their command finishes. For scripts, that’s normal — they run and then quit. Use `-it` for interactive use, or design a service that keeps running.
- `COPY` or `ADD` not working? Check your build context. Only files in the same directory (and subdirectories) as your `Dockerfile` get included.
- Dependency errors? Make sure installs happen inside the image: `RUN pip install -r requirements.txt`, not on your host machine.
- Builds are slow or images feel bloated? Add a `.dockerignore` next to your `Dockerfile` — it works like `.gitignore` and keeps junk (`.git/`, caches, logs, secrets) out of your build context (see the example below).
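A minimal `.dockerignore` for a project like this might look like:

```text
# Keep the build context lean
.git/
__pycache__/
*.pyc
*.log
.env
```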
Pro-Tips
- Always pin your base image version (`python:3.12-slim`, not `python:latest`) for reproducibility.
- Keep Dockerfiles minimal: smaller images build faster, pull faster, and break less.
- For debugging builds, temporarily add tools like `RUN apt-get update && apt-get install -y vim curl`, then strip them out later.
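You can also poke around a finished image without touching the Dockerfile at all, by overriding the entrypoint at run time:

```bash
# Open a shell inside the image instead of running the script
docker run -it --entrypoint /bin/bash smurfify
```

This drops you into `/app` (the `WORKDIR`), where you can inspect files and try commands by hand.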
Links and Stuff
Essential References:
- Dockerfile Reference — The complete command reference for all Dockerfile instructions
- Docker Hub — Browse and search for base images
- Docker CLI Reference — All the `docker` commands you’ll use
Best Practices & Guides:
- Dockerfile Best Practices — Official guidance on writing efficient, maintainable Dockerfiles
- Docker Build Best Practices — Covers incremental build time, image size, maintainability, security and repeatability
- Multi-stage Builds — For keeping production images lean
When Things Go Wrong:
- Build Checks — Validate your build configuration and catch common issues
- Docker Logs — Essential for debugging container problems
Conclusion
That’s the whole workflow: script → Dockerfile → image → container.
The same workflow applies to any tool you want to containerize. Now when you need to demo that sweet script or share a utility with the world, you’ll have a tool that actually works — no more “but it works on my machine” disasters or impromptu troubleshooting sessions.
Time to practice! Containerize your own scripts and keep the Docker Grimoire handy for reference.
Have a Docker war story? Killer tip? I’d love to hear about it: feedback@adminjitsu.com