What Docker is and why regular people should care

There is a phrase every developer has heard at least once: “it works on my machine.” Someone wrote code, it runs perfectly on their laptop, but as soon as you try to launch the same thing on another machine — nothing works. Wrong library version, missing system dependency, unset environment variable. Familiar?

Docker fixes this in a direct way: it packages an application together with everything it needs to run into a single, self-contained bundle. That bundle runs identically on any machine with Docker installed — a developer’s laptop, a test server, or a cloud VM.

The problem: why “it works on my machine” is a classic

Before containers existed, there were two main options:

Option 1: install everything directly on the server. You put Node.js 18.7, PostgreSQL 15.3, a bunch of libraries and dependencies. It works. But when you try to replicate this on another server, you suddenly discover it runs a different Debian release and ships a different build of libpq5. Cue: “but why isn’t it working.”

Option 2: virtual machine. Every application gets its own full virtual machine with its own operating system. It works, but each VM is several gigabytes, takes minutes to boot, and costs real money.

Docker offers a third path: a container gets its own filesystem and environment, but shares the host kernel. That’s much lighter than a VM but isolated enough that services don’t step on each other’s toes.

How this works in plain language

Docker uses a few core concepts, and they’re fairly logical if you map them correctly.

An image is like an installer

An image is a ready-made bundle with the application, libraries, configuration, and run instructions. You don’t “install” it in the traditional sense. An image just exists, and you start containers from it.

Official images exist for everything — Nginx, PostgreSQL, Node.js, Python. They live on Docker Hub, a public catalog that works like an app store.
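To get a feel for images, you can pull one and list what’s cached locally (this assumes Docker is already installed):

```shell
# Download the official Nginx image from Docker Hub
docker pull nginx

# List the images stored on this machine
docker images
```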

A container is a running application

A container is a specific running instance of an image. You can run the same image ten times and get ten independent containers.

Simple analogy: the image is the .exe file, the container is the running program. You can open the same program in multiple windows — each window works independently.
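The distinction is easy to see in practice. Here the same nginx image backs two independent containers; the names and host ports are arbitrary:

```shell
# Two containers from one image, each on its own host port
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx

# Both appear as separate running instances
docker ps
```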

Dockerfile is a recipe for building your own image

If there’s no ready-made image, you can build one. A Dockerfile is a text file with instructions:

FROM node:20
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]

It’s literally a recipe: “take Node.js 20, work in the /app directory, copy the dependency files, install them, copy the rest of the code, declare that the app listens on port 3000, and run it with npm start.” One nuance: EXPOSE only documents the port; actually publishing it still happens at docker run with -p.
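Turning the recipe into an image and running it takes two commands; the tag myapp here is just an example name:

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp .

# Start a container from it, forwarding host port 3000 to the app
docker run -d --name myapp -p 3000:3000 myapp
```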

Why this matters in the real world

1. No more “it works on mine but not on the server”

If the app works in a container, it will work anywhere Docker is installed. Period. You define the Node.js version in the Dockerfile — the server’s system administrator doesn’t get a vote.

2. Safe experimentation

Need to try a new version of PostgreSQL? Pull the image and run it — it won’t touch your existing database on the host. When the test is done, delete the container and all the mess goes with it.
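As a sketch of that workflow (the container name, side port, and password are placeholders):

```shell
# Throwaway PostgreSQL 17 on a side port; it never touches the host's database
docker run -d --name pg-test -e POSTGRES_PASSWORD=test -p 5433:5432 postgres:17

# ...experiment against localhost:5433...

# One command removes the container and everything in it
docker rm -f pg-test
```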

3. Multiple services, one config file

Webserver + database + cache + message queue? docker compose up starts everything from a single file. For home servers, pet projects, and demos, it’s one of the most convenient tools that exists.

4. Isolation: if one service crashes, the others keep running

Every container is isolated. If one service has a memory leak, the other containers don’t even notice. That’s better than everything running on the same server and dragging each other down.
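This isolation is easy to observe: docker stats reports each container’s resource usage separately, so a misbehaving one stands out immediately:

```shell
# Live CPU and memory usage, one line per running container
docker stats
```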

5. Horizontal scaling

If traffic spikes, you just spin up more copies of the same container. No need to provision a new server, copy configs, and hope for the best.
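With docker compose, that is a single flag. The sketch assumes a compose file with a service named web whose port isn’t pinned to one fixed host port (otherwise the replicas would collide):

```shell
# Run three replicas of the web service
docker compose up -d --scale web=3
```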

A practical example: running Nginx in 30 seconds

To see what this looks like on the ground, let’s start a basic webserver:

docker run -d --name myweb -p 8080:80 nginx

Breaking it down:

  - docker run: create a container from an image and start it
  - -d: run it in the background (detached)
  - --name myweb: give the container a human-readable name
  - -p 8080:80: forward port 8080 on your machine to port 80 inside the container
  - nginx: the image to use, pulled from Docker Hub automatically if it isn’t local

Open http://localhost:8080 — and you see the Nginx welcome page. Done. No package installation, no system config. One line and a webserver is running.

docker compose: when you have multiple services

One container is fine. But real projects usually have 2-5 services. That’s what docker compose is for:

services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
    volumes:
      - ./html:/usr/share/nginx/html
  db:
    image: postgres:17
    environment:
      POSTGRES_PASSWORD: mysecretpassword
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:

Start everything with one command:

docker compose up -d

Two services here: Nginx and PostgreSQL. Nginx serves files from the local html directory, and the database stores data in a volume named pgdata.
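To poke at the database service directly, you can open a psql session inside its container (service name db as in the file above):

```shell
# Interactive psql shell inside the running db service
docker compose exec db psql -U postgres
```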

How to verify everything is working

A handful of commands you’ll need immediately:

docker ps                    list running containers
docker ps -a                 list all containers, including stopped ones
docker logs myweb            show a container’s output
docker exec -it myweb sh     open a shell inside a running container
docker stop myweb            stop a container
docker rm myweb              remove a stopped container

Common beginner mistakes

1. Storing data inside the container

Containers are temporary. Delete the container and the data inside it disappears. For data, always use a volume or mount a local directory. Otherwise one morning you’ll wake up to an empty database.
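A minimal sketch of the safe pattern; the volume name and password are placeholders:

```shell
# Data lives in the named volume "pgdata", not in the container itself
docker run -d --name db -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data postgres:17

# Removing the container leaves the volume, and the data, intact
docker rm -f db
docker volume ls
```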

2. Running everything as root

Many official images run processes as root inside the container. For testing, that’s fine. For production, it’s begging for trouble. Look for non-root variants or set that up yourself in the Dockerfile.
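One way to do that in a Dockerfile, sketched here for Node.js: the official node images ship a non-root user named node, so switching to it is a single instruction:

```dockerfile
FROM node:20
WORKDIR /app
COPY . .
RUN npm install
# Switch from root to the non-root user bundled with the image
USER node
CMD ["npm", "start"]
```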

3. Forgetting to update images

nginx:latest can point to a different version tomorrow than it does today, and two machines that pulled it at different times may run different code. For production it’s better to pin versions: nginx:1.27, postgres:17.2. That way deployments are predictable.

4. Mounting too many files

When you mount an entire project directory via volumes, there are risks: files might conflict with what the container expects, and permissions may differ. Mount only what you actually need.

5. Ignoring resource limits

An unlimited container can consume all host memory. Add mem_limit and cpus in compose if the host isn’t powerful or runs other services alongside.
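In a compose file that looks roughly like this; the numbers are placeholders to tune for your host:

```yaml
services:
  web:
    image: nginx:1.27
    mem_limit: 256m   # hard cap on memory
    cpus: 0.5         # at most half of one CPU core
```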

Conclusion / action plan

Docker isn’t complicated. It’s a way to package an app with everything it needs and guarantee it runs identically everywhere.

Here’s what to do next:

  1. Install Docker on your computer or server.
  2. Run docker run -d -p 8080:80 nginx — and verify you can see the Nginx page.
  3. Create a docker-compose.yml with two services (say, Nginx + PostgreSQL) and launch it with docker compose up -d.
  4. Play with stopping, removing, and restarting — understand the difference between stop and down.
  5. Write your own Dockerfile for a simple app (even if it’s just static HTML).
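For step 4 above, the difference in a nutshell:

```shell
# stop/start keep the containers and their state
docker compose stop
docker compose start

# down removes the containers and the default network
# (add -v to delete named volumes as well)
docker compose down
```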

Next time someone says “it works on my machine,” you’ll already know what’s missing from that picture.