Ismat Samadov
Docker for Solo Developers — The Only Commands You Need

Docker has 200+ commands. You need about 15. The commands, Compose setup, and Dockerfile patterns that cover 99% of solo dev work.

Tags: Docker · DevOps · Web Development · Backend



Docker has over 200 commands. I use about 15.

I've been shipping side projects, client work, and my own SaaS apps with Docker for a few years now. I've never once needed docker manifest, docker trust, or docker context. I don't know what half the flags on docker buildx do, and I'm completely fine with that.

Here's the thing nobody tells you when you're learning Docker: most of those commands exist for platform teams managing hundreds of containers across clusters. If you're a solo developer running a Next.js app, a Postgres database, and maybe Redis, you need a fraction of what Docker offers. A small fraction.

This article is the one I wish I'd had when I started. The 15 commands that actually matter. The one Dockerfile pattern that covers 90% of use cases. The one compose.yaml that gets your full stack running locally in under a minute. No Kubernetes. No Swarm. No orchestration. Just Docker, doing what Docker does best: making your development environment reproducible and your deployments boring.

Docker's adoption numbers back this up. According to a survey of 4,500+ developers, Docker usage among professional developers jumped from 54% to 71.1%, a 17-point increase that made it the fastest-growing technology in the 2025 Stack Overflow Developer Survey. Container adoption in the IT industry hit 92%, up from 80% in 2024. Even among developers who are still learning to code, 52.5% already use Docker.

Containers aren't a nice-to-have anymore. They're the default. And yet only 30% of developers outside the IT industry use containers. If you're in that 70% and you've been putting off Docker because it looks complicated, this article is for you. It's not complicated. You just need fewer commands than you think.


The 15 Commands That Actually Matter

I'm going to go through these in the order you'll actually use them. Not alphabetical. Not by category. By frequency.

1. docker run

This is the one. If you only learn one command, make it this:

docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=secret --name mydb postgres:16

That starts a Postgres 16 database in the background (-d), maps port 5432 on your machine to port 5432 in the container, sets the password to "secret", and names the container "mydb".

Breakdown of the flags you'll actually use:

  • -d — run in the background (detached mode)
  • -p HOST:CONTAINER — map a port
  • -e KEY=VALUE — set an environment variable
  • --name — give the container a name so you can reference it later
  • -v HOST_PATH:CONTAINER_PATH — mount a volume (more on this later)
  • --rm — automatically remove the container when it stops
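Putting several of those flags together, here's a sketch (the container and volume names are just examples) that runs Redis in the background with persistent storage:

```shell
docker run -d \
  -p 6379:6379 \
  -v redisdata:/data \
  --name myredis \
  redis:7-alpine
```

The `-v redisdata:/data` mount uses a named volume, so the data survives even if you later remove the container.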

2. docker ps

See what's running:

docker ps

Add -a to see stopped containers too:

docker ps -a

I run this dozens of times a day. It's the ls of Docker.

3. docker stop and docker start

docker stop mydb
docker start mydb

Stop and start containers by name. Your data persists between stops (if you're using volumes).

4. docker rm

Delete a stopped container:

docker rm mydb

Force-remove a running container:

docker rm -f mydb

5. docker logs

docker logs mydb

See what a container is printing to stdout. Add -f to follow the logs in real time (like tail -f):

docker logs -f mydb

Add --tail 50 to see only the last 50 lines:

docker logs --tail 50 mydb

6. docker exec

Run a command inside a running container:

docker exec -it mydb psql -U postgres

The -it flags give you an interactive terminal. This is how you get a shell inside a container:

docker exec -it mydb bash

One caveat: Alpine-based images don't ship bash, so use sh there instead:

docker exec -it mydb sh

7. docker build

Build an image from a Dockerfile:

docker build -t myapp:latest .

The -t flag tags the image with a name and version. The . at the end is the build context (current directory).

8. docker images

List all images on your machine:

docker images

You'll be surprised how many images pile up over time. Each one takes disk space.

9. docker rmi

Remove an image:

docker rmi myapp:latest

10. docker pull

Download an image without running it:

docker pull node:20-alpine

docker run does this automatically if the image isn't local, but sometimes you want to pre-pull images before going offline.

11. docker compose up

Start all services defined in your compose.yaml:

docker compose up -d

This is the one you'll use 10x more than everything else combined. I'll cover this in detail in the Compose section.

12. docker compose down

Stop and remove everything:

docker compose down

Add -v to also delete volumes (careful — this deletes data):

docker compose down -v

13. docker compose logs

docker compose logs -f

Follows logs from all services. Add a service name to filter:

docker compose logs -f api

14. docker system prune

Clean up everything you're not using:

docker system prune -a

This removes stopped containers, unused networks, dangling images, and build cache. On my machine, this regularly frees 10-20 GB. Run it when your disk starts filling up.

15. docker volume ls and docker volume rm

docker volume ls
docker volume rm mydb_data

Volumes are where your container data lives. If you delete a Postgres container but keep the volume, your data survives. If you delete the volume, it's gone. Know which volumes matter.
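For volumes that do matter, a common backup pattern is to mount the volume into a throwaway Alpine container and tar it up. A hedged sketch (volume name and output path are examples), which assumes a running Docker daemon:

```shell
# Snapshot the pgdata volume to a tarball in the current directory
docker run --rm \
  -v pgdata:/data:ro \
  -v "$(pwd)":/backup \
  alpine tar czf /backup/pgdata-backup.tar.gz -C /data .
```

The :ro suffix mounts the volume read-only, so the backup can't accidentally modify your database files.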

That's it. Fifteen commands. These cover roughly 99% of what I do with Docker as a solo developer. I haven't touched docker network, docker save, docker load, docker inspect, docker tag, or any of the other 190+ commands. They exist for good reasons. I just don't need them in my workflow.


Docker Compose for Local Dev

If docker run is the first command you learn, docker compose up should be the second. Compose lets you define your entire development stack in a single YAML file and start everything with one command.

Here's the compose.yaml I use for most of my projects. This is a Next.js app with Postgres and Redis:

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgresql://postgres:secret@db:5432/myapp
      - REDIS_URL=redis://redis:6379
      - NODE_ENV=development
    volumes:
      - .:/app
      - /app/node_modules
      - /app/.next
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=myapp
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=secret
    volumes:
      - pgdata:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redisdata:/data

volumes:
  pgdata:
  redisdata:

Let me walk through the important parts.

Services talk by name, not localhost. Inside the Docker network, your app reaches Postgres at db:5432, not localhost:5432. Redis is at redis:6379. This is a common source of confusion for people new to Compose. The service name in your YAML file becomes the hostname.

Healthchecks prevent race conditions. The depends_on with condition: service_healthy means your app won't start until Postgres is actually accepting connections. Without a healthcheck, depends_on only waits for the container to start, not for the service inside it to be ready. Your app would try to connect to Postgres while it's still initializing and crash. I learned this the hard way.

Volume mounts for hot reload. The volumes: [".:/app"] mount maps your local source code into the container. When you edit a file, the change appears inside the container instantly. This is how you get hot reload working inside Docker. The /app/node_modules and /app/.next lines are anonymous volumes that prevent your local node_modules from overwriting the container's node_modules.

Named volumes for data persistence. The pgdata and redisdata volumes at the bottom persist your database data between container restarts. Run docker compose down and your data survives. Run docker compose down -v and it's gone. Remember the difference.

The .env pattern. For team projects, I keep a .env.example file in the repo with placeholder values. Each team member copies it to .env and fills in their own values. The compose.yaml can reference .env automatically:

services:
  db:
    environment:
      - POSTGRES_PASSWORD=${DB_PASSWORD}
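A matching .env.example might look like this (variable names are illustrative; each developer copies it to .env and fills in real values):

```shell
# .env.example - placeholder values, safe to commit
DB_PASSWORD=changeme
REDIS_URL=redis://redis:6379
```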

Start everything:

docker compose up -d

Tear everything down:

docker compose down

Rebuild after changing the Dockerfile:

docker compose up -d --build

That's the entire local development workflow. One YAML file. One command. Every new developer on your project runs docker compose up -d and has a working environment in under a minute.


The One Dockerfile You Need

Most solo developers need one Dockerfile pattern: a multi-stage build. Here's the one I use for Node.js / Next.js projects:

# Stage 1: Install dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Stage 2: Build the application
FROM node:20-alpine AS builder
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 3: Production image
FROM node:20-alpine AS runner
WORKDIR /app

ENV NODE_ENV=production

# Create non-root user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

# Copy only what we need
COPY --from=deps /app/node_modules ./node_modules
COPY --from=builder /app/.next ./.next
COPY --from=builder /app/public ./public
COPY --from=builder /app/package.json ./package.json

USER nextjs
EXPOSE 3000
CMD ["npm", "start"]
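Building and running this image locally might look like the following (the tag and container name are examples):

```shell
docker build -t myapp:prod .
docker run -d -p 3000:3000 --name myapp myapp:prod
```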

Why three stages? Because each stage starts from a clean base image. The final image only contains what you COPY --from into it. Your dev dependencies, source code, build tools — none of that ships to production.

Multi-stage builds reduce image sizes by 70-90%. A typical Node.js app goes from 1GB down to 150MB. That's a real difference when you're pushing images to a registry or pulling them on deploy.

Here's a Python equivalent for a FastAPI app:

# Stage 1: Build
FROM python:3.12-alpine AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir --prefix=/install -r requirements.txt

# Stage 2: Production
FROM python:3.12-alpine AS runner
WORKDIR /app
COPY --from=builder /install /usr/local
COPY . .

RUN adduser --system --uid 1001 appuser
USER appuser
EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]

A few things I always do in every Dockerfile:

Use Alpine base images. The difference is massive, and I'll show the numbers in a moment.

Run as a non-root user. Those adduser / USER lines mean your app doesn't run as root inside the container. This is a basic security practice and it costs you two lines.

Use npm ci instead of npm install. npm ci installs exactly what's in your lock file. npm install can update the lock file. In a Dockerfile, you want deterministic builds.

Copy dependency files first, then everything else. Docker caches each layer. If your package.json hasn't changed, Docker reuses the cached npm ci layer and skips the install entirely. This makes rebuilds much faster.


Image Size Matters (A Lot)

Image size affects three things: build time, push/pull time, and disk usage. For solo developers running on limited VPS instances or free-tier cloud platforms, every megabyte counts.

The base image you choose is the single biggest factor. Here's what the numbers look like:

| Image | Size |
| --- | --- |
| node:20 | ~900 MB |
| node:20-slim | ~200 MB |
| node:20-alpine | ~170 MB |
| python:3.12 | ~1.0 GB |
| python:3.12-slim | ~150 MB |
| python:3.12-alpine | ~50 MB |
| alpine (bare) | ~5 MB |
| ubuntu:22.04 | ~77 MB |
| debian:bookworm-slim | ~74 MB |

Look at the Python numbers. python:3.12 is 1 GB. python:3.12-alpine is 50 MB. That's a 20x difference from just changing one word in your FROM line.

The same pattern holds for Node.js: node:20 at ~900 MB drops to ~170 MB with Alpine.

Here's a quick optimization checklist:

  1. Start with Alpine. It works for 90% of use cases. Switch to slim only if you hit compatibility issues with musl libc
  2. Use multi-stage builds. Don't ship your build tools to production
  3. Add a .dockerignore file. At minimum, ignore node_modules, .git, .next, and .env
  4. Combine RUN commands. Each RUN creates a layer. Fewer layers = smaller image
  5. Use --no-cache-dir with pip. Saves space by not caching downloaded packages
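Item 4 in practice, as a hedged Dockerfile sketch for an Alpine-based Python image: one RUN keeps the install and its cleanup in a single layer, so the deleted build tools never ship in the image.

```dockerfile
# One RUN, one layer: install build deps, use them, remove them.
# --virtual groups the packages so apk del can remove them together.
RUN apk add --no-cache --virtual .build-deps gcc musl-dev \
    && pip install --no-cache-dir -r requirements.txt \
    && apk del .build-deps
```

If the cleanup ran in a separate RUN, the files would still exist in the earlier layer and the image wouldn't shrink at all.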

Here's a solid .dockerignore:

node_modules
.next
.git
.env
.env.local
*.md
.DS_Store
Dockerfile
docker-compose.yml
.dockerignore
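If you bootstrap new projects from a script, the same file can be generated with a heredoc (trimmed here to the essential entries):

```shell
# Generate a minimal .dockerignore for a new project
cat > .dockerignore <<'EOF'
node_modules
.next
.git
.env
EOF
```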

Docker Desktop Alternatives

Docker Desktop works fine. I used it for years. But it's heavy. It uses 2GB+ of RAM just sitting idle. On a 16GB laptop, that's significant.

If you're on a Mac, there are some compelling alternatives. If you're on Linux, you probably don't need Docker Desktop at all since Docker Engine runs natively.

| Tool | Platform | RAM (idle) | Speed | License | Cost |
| --- | --- | --- | --- | --- | --- |
| Docker Desktop | Mac, Windows, Linux | ~2 GB+ | Baseline | Proprietary | Free (personal) / $5-24/mo (commercial) |
| OrbStack | Mac only | ~400 MB | 10x faster container starts | Proprietary | Free (personal) |
| Colima | Mac, Linux | ~400 MB | Fast | MIT | Free |
| Podman | Mac, Linux, Windows | ~300 MB | Comparable | Apache 2.0 | Free |
| Rancher Desktop | Mac, Windows, Linux | ~1 GB | Comparable | Apache 2.0 | Free |

My short take on each:

OrbStack is the one I'd pick on Mac. It starts containers 10x faster than Docker Desktop, uses a fraction of the RAM, and the interface is cleaner. It's free for personal use.

Colima is the open-source option for Mac/Linux. It runs Docker inside a lightweight Lima VM. Around 400MB RAM idle versus Docker Desktop's 2GB+. If you don't want any proprietary software in your stack, Colima is the answer.

Podman is interesting because it's daemonless and rootless. There's no background service eating resources when you're not running containers. The CLI is largely Docker-compatible: in most workflows you can alias docker to podman and everything keeps working. Red Hat backs it.

Rancher Desktop is the corporate-friendly option. Apache 2.0 license means no licensing headaches for commercial use. It bundles Kubernetes if you need it, but you can turn that off.

On Windows, your options are more limited. Docker Desktop or Podman are your best bets. WSL2-based tools work but add another layer of complexity.

For most solo developers: if you're on Mac, try OrbStack. If you're on Linux, just use Docker Engine directly. If you're on Windows, Docker Desktop is fine. Don't overthink this part.


Common Mistakes I've Made (So You Don't Have To)

Running everything as root in the container. Every Dockerfile example on the internet starts with FROM node:20 and never adds a non-root user. This means your app runs as root. If someone exploits a vulnerability in your app, they have root access to the container. Add a non-root user. It's two lines.

Forgetting .dockerignore. Without a .dockerignore, docker build copies your entire project directory into the build context. That includes node_modules (which could be 500 MB), your .git directory (which could be even larger), and your .env files (which contain secrets). Always have a .dockerignore.

Using latest tags in production. FROM node:latest means your build might use Node 18 today and Node 22 tomorrow. Pin your versions: FROM node:20-alpine. This applies to database images too. postgres:16-alpine, not postgres:latest.

Not using healthchecks with depends_on. As I mentioned earlier, depends_on without a healthcheck only waits for the container to start. Your database container can be "started" but Postgres might still be initializing. Your app connects, gets a "connection refused" error, and crashes. Always use healthchecks for databases.

Storing data without volumes. If you run docker rm mydb, everything inside that container is gone. Including your database files. If you didn't mount a volume, your data is gone forever. Always use named volumes for any data you care about.

Installing dev dependencies in production images. Your production image doesn't need jest, eslint, or prettier. Use npm ci --omit=dev (the modern replacement for the deprecated --only=production flag) in your production stage. Or better yet, use multi-stage builds where the final stage only copies the compiled output.

Not cleaning up. Docker images, containers, and volumes pile up silently. I've seen developers lose 50+ GB of disk space to Docker without realizing it. Run docker system prune -a periodically. Or set a cron job. Your SSD will thank you.
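The cron-job approach is one line in your crontab (schedule is an example; -f skips the confirmation prompt so it runs unattended):

```shell
# crontab entry: prune unused Docker data every Monday at 09:00
0 9 * * 1 docker system prune -af > /dev/null 2>&1
```

Note that without --volumes this leaves named volumes alone, which is what you want.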

Putting secrets in the Dockerfile. Never do this:

# WRONG - secret is baked into the image layer
ENV API_KEY=sk-abc123

Anyone who pulls your image can extract that key. Pass secrets at runtime via -e flags or .env files in your compose.yaml.
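In Compose, the runtime-injection pattern looks like this (the service name is illustrative):

```yaml
services:
  api:
    # Values are loaded when the container starts,
    # never baked into an image layer
    env_file:
      - .env
```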

Ignoring build cache. If you copy your entire source code before installing dependencies, Docker invalidates the cache on every build:

# SLOW - cache busts on every code change
COPY . .
RUN npm ci

# FAST - only reinstalls when package.json changes
COPY package.json package-lock.json ./
RUN npm ci
COPY . .

The order matters. Put things that change rarely (dependency installs) before things that change often (source code).


My Actual Daily Workflow

Here's what a typical day looks like with Docker:

Morning: docker compose up -d. Everything starts. Database has my data from yesterday because of named volumes. I open my editor and write code. Hot reload works because of volume mounts.

During development: If something looks wrong, docker compose logs -f api to check the logs. If I need to poke at the database, docker exec -it myproject-db-1 psql -U postgres. If I change a dependency, docker compose up -d --build to rebuild the image.

End of day: I either leave everything running or docker compose stop to free up resources. Not docker compose down, which removes the containers; stop just halts them, and docker compose start brings them back with their state intact.

Once a week: docker system prune -a to clean up accumulated junk.

When deploying: docker build -t myapp:v1.2.3 . and push the image to a registry. Or let my CI/CD pipeline do it. The Dockerfile is the same one I use locally, so I know it works.
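The deploy step in shell form, with a hypothetical registry host and tag:

```shell
docker build -t registry.example.com/myapp:v1.2.3 .
docker push registry.example.com/myapp:v1.2.3
```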

That's it. No fancy orchestration. No Kubernetes. No service mesh. Just Docker and Compose, doing their job without drama.


What I Actually Think

Here's the unpopular opinion: Docker is overkill for some projects and that's perfectly fine.

If you're building a static site with Astro or Hugo, you don't need Docker. Deploy it to Vercel or Netlify and move on. If your entire backend is serverless functions on Cloudflare Workers, Docker adds nothing. If you're writing a Python script that runs once a day, a virtualenv is simpler.

Docker shines when:

  • Your app needs specific services (Postgres, Redis, Elasticsearch, etc.)
  • You work across multiple machines and need identical environments
  • You're deploying to a VPS or any container-based platform
  • You want new team members to be productive in minutes instead of hours
  • You're tired of "it works on my machine" conversations

For solo developers, the biggest win is local development. docker compose up -d gives you a complete environment that's identical every time. No more installing Postgres natively. No more conflicting Python versions. No more "I upgraded Node and now everything is broken."

The second biggest win is deployments. If it runs in your local Docker container, it'll run on the server. The Dockerfile is your deployment configuration. There's nothing else to configure, nothing else to debug.

But here's the thing — you don't need to containerize everything. I run my Next.js frontend on Vercel (no Docker) and my background workers in Docker on a $5 VPS. I use Docker for what it's good at and skip it where it adds friction.

The Docker community tends to make everything sound more complicated than it is. Build images, run containers, use Compose for multi-service setups. That's 95% of it. The remaining 5% — custom networks, multi-architecture builds, Docker Swarm, container orchestration — those solve problems you probably don't have yet.

Start with docker compose up -d. Build from there only when you hit a wall. Most solo developers never do.


Sources

  1. 2025 Docker State of Application Development Survey — Docker Blog
  2. 2025 Stack Overflow Developer Survey — Technology Section
  3. Docker Desktop Alternatives (2025 Comparison) — BetterStack
  4. Docker Desktop Alternatives 2025 — fsck.sh
  5. Docker Desktop Alternatives — Portainer
  6. Reducing Docker Image Size — BetterStack
  7. Docker Reduce Image Size — OneUptime
  8. Node.js Docker Optimization 2025 — Markaicode
  9. Local Development with Docker Compose — Heroku
  10. Docker Compose Local Development — OneUptime