
Junior Docker Interview Questions

Curated Docker interview questions for developers targeting junior positions. 35 questions available.


Docker Interview Questions & Answers


Welcome to our comprehensive collection of Docker interview questions and answers. This page contains expertly curated interview questions covering all aspects of Docker, from fundamental concepts to advanced topics. Whether you're preparing for an entry-level position or a senior role, you'll find questions tailored to your experience level.

Our Docker interview questions are designed to help you:

  • Understand core concepts and best practices in Docker
  • Prepare for technical interviews at all experience levels
  • Master both theoretical knowledge and practical application
  • Build confidence for your next Docker interview

Each question includes detailed answers and explanations to help you understand not just what the answer is, but why it's correct. We cover topics ranging from basic Docker concepts to advanced scenarios that you might encounter in senior-level interviews.

Use the filters below to find questions by difficulty level (Entry, Junior, Mid, Senior, Expert) or focus specifically on code challenges. Each question is carefully crafted to reflect real-world interview scenarios you'll encounter at top tech companies, startups, and MNCs.

Questions

Q1:

What is the Docker daemon and what role does it play?

Junior

Answer

The Docker daemon (dockerd) manages all Docker objects—containers, images, volumes, networks. It listens to the Docker API and executes client commands like building, running, or stopping containers.
Quick Summary: The Docker daemon (dockerd) is the background service that actually creates and manages containers, images, networks, and volumes. The Docker CLI talks to the daemon via a REST API. The daemon does all the heavy lifting — you just issue commands and it handles the rest.
Q2:

Why is the Docker client and Docker daemon separation useful?

Junior

Answer

This separation enables remote container management, a CLI running on a different machine, and API-driven automation. Docker uses a client–server architecture.
Quick Summary: The separation means you can run Docker from remote machines, CI systems, or scripts by pointing the client at a remote daemon. It also means the daemon can run as root (needed for kernel-level operations) while the client runs as a regular user. They communicate over a Unix socket or TCP.
Q3:

What is the difference between a container’s “image layer” and “container layer”?

Junior

Answer

Image layers are read-only and shared among containers. The container layer is a thin writable layer on top where runtime changes occur.
Quick Summary: Image layers are read-only — they're shared across all containers using that image. The container layer is a thin, writable layer added on top specifically for that container. Any writes (new files, modified files) go into the container layer via copy-on-write. The image underneath is never modified.
Q4:

What happens internally when you run docker build?

Junior

Answer

Docker sends build context to daemon, executes each Dockerfile instruction as a new layer, caches layers intelligently, and outputs a final image ID.
Quick Summary: docker build reads the Dockerfile, sends your project folder (build context) to the daemon, then executes each instruction in order. Each instruction creates a new layer. Docker checks its layer cache first — if the layer hasn't changed, it reuses it. The final result is a new image tagged with your name.
Q5:

What does Docker’s copy-on-write mechanism do?

Junior

Answer

It avoids duplicating data. If many containers use the same layer, Docker creates a new copy only when modification occurs—saving disk space.
Quick Summary: Copy-on-write means containers share read-only image layers until they need to modify a file. When a container writes to a file from an image layer, Docker copies that file into the container's writable layer first — then modifies the copy. The original image layer stays unchanged, shared by all containers.
Q6:

Why is docker build . slow when your project folder is huge?

Junior

Answer

Docker sends the entire build context to the daemon. Without a .dockerignore, large directories dramatically slow down builds.
Quick Summary: docker build sends your entire project folder (the build context) to the Docker daemon before processing the Dockerfile. If your folder is huge (large node_modules, build artifacts, test datasets), that transfer alone takes seconds or minutes. A .dockerignore file filters out what doesn't need to be sent.
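A minimal .dockerignore might look like the sketch below; the entries are typical examples, so adjust them to your project:

```
# .dockerignore — these paths are never sent to the daemon as build context
node_modules
.git
dist
*.log
.env
```

Anything matched here is invisible to COPY and ADD as well, so make sure you don't exclude files the Dockerfile actually needs.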
Q7:

What is the purpose of multi-stage builds?

Junior

Answer

They split build and runtime stages to create smaller production images, remove build tools, and speed up CI/CD pipelines.
Quick Summary: Multi-stage builds let you use multiple FROM stages in one Dockerfile. Build tools, compilers, and test frameworks run in early stages; only the final artifact gets copied into a clean, minimal final image. Result: production images that are tiny and contain zero build tooling — smaller, faster, more secure.
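As an illustration, here is a hedged two-stage sketch assuming a Go project with a `cmd/server` entrypoint (the paths and base images are assumptions, not a prescription):

```dockerfile
# Stage 1: build with the full Go toolchain
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Stage 2: minimal runtime image — no compiler, no sources, no shell
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app /app
ENTRYPOINT ["/app"]
```

Only the compiled binary crosses into the final stage; the builder stage and everything in it is discarded from the shipped image.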
Q8:

What is the difference between ENTRYPOINT exec form and shell form?

Junior

Answer

Exec form uses no shell and passes signals properly. Shell form uses /bin/sh -c and makes signal forwarding harder.
Quick Summary: Shell form (ENTRYPOINT cmd arg) runs through /bin/sh -c — the shell interprets it, so signal handling is poor (PID 1 is the shell, not your app). Exec form (ENTRYPOINT ["cmd", "arg"]) runs the process directly as PID 1 — receives signals correctly for graceful shutdown. Always use exec form in production.
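The two forms side by side, using a hypothetical Node app as the example:

```dockerfile
# Shell form: PID 1 is /bin/sh, node runs as its child,
# so SIGTERM may never reach the app
ENTRYPOINT node server.js

# Exec form: node itself is PID 1 and receives SIGTERM directly,
# allowing graceful shutdown
ENTRYPOINT ["node", "server.js"]
```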
Q9:

Why do production Dockerfiles avoid using ADD?

Junior

Answer

ADD auto-extracts archives and supports remote URLs, causing unpredictable builds. COPY is preferred unless extraction is needed.
Quick Summary: ADD can accept remote URLs and auto-extracts tarballs — which sounds useful but adds implicit behavior and is a security risk (fetching from remote URLs during build). COPY is explicit — it only copies files from the local build context. Prefer COPY; use ADD only when you specifically need tarball extraction.
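A short contrast, with illustrative paths:

```dockerfile
# Explicit and predictable: copies files from the local build context
COPY app/ /usr/src/app/

# Implicit behavior: ADD silently auto-extracts this tarball into /opt
ADD vendor.tar.gz /opt/
```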
Q10:

What is Docker Hub rate limiting?

Junior

Answer

Unauthenticated users face pull limits; exceeding them blocks image pulls. Solutions include logging in, using a mirror, or running a private registry.
Quick Summary: Docker Hub limits unauthenticated pulls (100/6h) and free account pulls (200/6h). In CI pipelines with many parallel jobs, you can easily hit the limit and get 429 errors. Solutions: authenticate with a Docker Hub account, use a pull-through cache, or mirror images to a private registry.
Q11:

What is Docker Compose and why is it useful?

Junior

Answer

Compose defines multi-container apps via YAML, managing networks, volumes, dependencies, and startup order.
Quick Summary: Docker Compose defines and runs multi-container applications using a YAML file. Instead of manually running multiple docker run commands with all their flags, you declare all services, networks, and volumes in one docker-compose.yml and start everything with docker compose up. Great for local dev and simple deployments.
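A minimal compose file sketch; the service names, image versions, and credentials below are illustrative assumptions:

```yaml
# docker-compose.yml
services:
  web:
    build: .
    ports:
      - "8080:80"
    depends_on:
      - db
    environment:
      # "db" resolves via Compose's per-project network DNS
      DATABASE_URL: postgres://app:secret@db:5432/app
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```

`docker compose up` brings up both services, their shared network, and the named volume in one command.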
Q12:

What is the difference between docker-compose up and docker-compose up --build?

Junior

Answer

up uses existing images. up --build forces image rebuild before starting services.
Quick Summary: docker compose up starts existing containers (or creates them if they don't exist) using the current images. docker compose up --build forces Docker to rebuild images before starting — it picks up Dockerfile and source changes. Without --build, changes to your Dockerfile or copied code won't be reflected in running containers.
Q13:

How do Compose networks isolate containers?

Junior

Answer

Each project gets a dedicated bridge network. Containers communicate via service names, keeping environments isolated.
Quick Summary: Compose creates a private virtual network for your project. Services on the same network can reach each other by service name as the hostname (e.g., the web service reaches the DB as "db"). Services on different Compose networks or the host can't reach each other by default — clean isolation out of the box.
Q14:

Why is it a bad practice to store secrets inside images?

Junior

Answer

Images can be extracted or shared, exposing secrets permanently. Secrets cannot be revoked once baked into images.
Quick Summary: Images are shareable and often public (or accessible to anyone with registry access). Baking secrets (API keys, DB passwords) into an image means everyone who pulls it gets the secrets — forever, even after you "delete" them, because they're in the layer history. Use environment variables or secrets managers at runtime instead.
Q15:

What is the concept of image digest?

Junior

Answer

A digest uniquely identifies image content. Unlike tags, digests are immutable and ensure reproducible pulls.
Quick Summary: A digest is a SHA256 hash of an image's content — it uniquely and immutably identifies an exact version of an image. Unlike tags (which can be moved), a digest never changes. Pinning by digest (image@sha256:abc123...) guarantees you always pull exactly the same image, even if someone updates the tag.
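In practice, assuming a locally pulled nginx image (the digest below is a placeholder, not a real hash):

```shell
# See which digest a tag currently resolves to
docker inspect --format '{{index .RepoDigests 0}}' nginx:1.27

# Pull by digest — immutable, unaffected by anyone re-pushing the tag
docker pull nginx@sha256:abc123...
```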
Q16:

Why is latest tag dangerous in production?

Junior

Answer

latest is mutable; pulling it later may fetch a different version, causing inconsistent environments.
Quick Summary: latest is just a tag — it points to whatever image was last pushed with that tag. It gives no guarantee of stability, version, or compatibility. In production, if you pull latest and the image changed, your deployment behavior changes silently. Always pin to a specific version tag or digest.
Q17:

How does Docker’s overlay filesystem affect performance?

Junior

Answer

OverlayFS adds extra read/write layers. Heavy writes slow performance compared to native filesystems.
Quick Summary: OverlayFS stacks read-only image layers under a writable container layer. File reads go through the layer stack — usually fast. Writes trigger copy-on-write: Docker copies the file from the image layer into the container layer, then modifies it. Frequent writes to many large files in a container can be slower than native disk I/O.
Q18:

What does stateless container mean?

Junior

Answer

A stateless container does not store persistent data. Volumes or external services store data instead.
Quick Summary: A stateless container stores no persistent data internally — all data goes to external volumes, databases, or caches. When the container restarts or is replaced, nothing is lost. Stateless containers are easy to scale (just add more copies), update (replace without migration), and recover (restart instantly).
Q19:

Why should you avoid running multiple processes in one container?

Junior

Answer

One process ensures clean logging, easier monitoring, simple restarts, and better scaling.
Quick Summary: One process per container keeps things focused and clean. Multiple processes in one container means you need a process supervisor (like supervisord), signals get complicated (PID 1 needs to handle them all), logs get mixed, and failure recovery is harder. If one process crashes, you can't restart just that part.
Q20:

What is the difference between scaling at container vs node level?

Junior

Answer

Container-level scaling means running more replicas on the same node; node-level scaling distributes containers across additional nodes.
Quick Summary: Container scaling adds more container instances on the same node — uses more CPU/RAM on that host. Node scaling adds more physical or virtual machines to the cluster. Container scaling is fast (seconds); node scaling is slower (minutes). Orchestrators like Kubernetes do both automatically based on load.
Q21:

What is the difference between a named volume and an anonymous volume?

Junior

Answer

Named volumes are persistent and reusable; anonymous volumes are temporary and often lost on container removal.
Quick Summary: Named volumes have an explicit name you give them — they persist after the container is removed and can be reused by name. Anonymous volumes are created without a name — Docker assigns a random ID, and they're harder to find and manage. Named volumes are always the right choice for data you want to keep.
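To illustrate (the image name `myapp` and mount path are hypothetical):

```shell
# Named volume: survives container removal, reusable by name
docker volume create appdata
docker run -d -v appdata:/var/lib/data myapp

# Anonymous volume: Docker assigns a random ID; hard to find later,
# and deleted automatically when a --rm container exits
docker run -d -v /var/lib/data myapp
```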
Q22:

What is a healthcheck in Docker?

Junior

Answer

A command that checks container health; orchestrators restart containers if unhealthy.
Quick Summary: A healthcheck is a command Docker runs periodically inside the container to verify it's working correctly. If it fails repeatedly, Docker marks the container as unhealthy. Orchestrators use health status to decide when a container is ready for traffic or needs to be restarted — far smarter than just checking if the process is running.
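A sketch of a Dockerfile healthcheck, assuming the app serves a /health endpoint on port 8080 and that curl is installed in the image:

```dockerfile
# Poll the app every 30s; after 3 consecutive failures, mark it unhealthy
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
```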
Q23:

Why do some Docker images use non-root users?

Junior

Answer

Running as root is insecure. Non-root users reduce risk and prevent privilege escalation.
Quick Summary: Running as root inside a container means if an attacker escapes the container (container breakout), they're root on the host — catastrophic. A non-root user limits the blast radius. The process can still read and write its data but can't install system packages, modify critical files, or escalate privileges easily.
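A minimal Debian-style sketch (user and binary names are illustrative; Alpine images use slightly different adduser flags):

```dockerfile
# Create an unprivileged system user and switch to it for runtime
RUN addgroup --system app && adduser --system --ingroup app app
USER app
CMD ["./server"]
```

Everything after USER runs as `app`, so the process can serve traffic but cannot modify system files.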
Q24:

Explain the difference between docker logs and docker attach.

Junior

Answer

docker logs shows output; docker attach connects to live STDIN/STDOUT and may interfere with the process.
Quick Summary: docker logs reads the container's captured stdout/stderr output — non-interactive, shows historical output, can be filtered by time. docker attach connects your terminal directly to the container's running process I/O — interactive, real-time, but detaching requires Ctrl+P+Q or you'll stop the container. logs is safer for debugging.
Q25:

Why does Docker cache layers but not RUN commands that modify external states?

Junior

Answer

Docker caches layers based on the instruction text and local file changes only; changes in external state do not invalidate the cache.
Quick Summary: Docker caches layers based on the instruction and its context at build time. RUN commands that download from the internet (apt-get, curl) are cached based on the instruction text — not the external content. So if an apt package updates, Docker still uses the cached layer. To force a fresh download, break the cache manually.
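One common cache-busting pattern, sketched below; the ARG name is arbitrary:

```dockerfile
# Cached on the instruction text — a newer curl on the apt mirror
# will NOT invalidate this layer
RUN apt-get update && apt-get install -y curl

# Changing the build arg value (--build-arg CACHE_BUST=2) invalidates
# the cache for every RUN that follows this declaration
ARG CACHE_BUST=1
RUN apt-get update && apt-get upgrade -y
```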
Q26:

What is an image base layer?

Junior

Answer

The base layer (like ubuntu:22.04) is the foundation layer on which all other layers build.
Quick Summary: The base layer is the foundation of every image — defined by the FROM instruction. It's the first layer, typically a minimal OS (Alpine, Debian, Ubuntu) or a language runtime (node, python, golang). Every other layer in your image builds on top of it. Choosing a good base image is the most impactful size decision.
Q27:

Why is docker cp not preferred for production?

Junior

Answer

Copying files into containers breaks immutability and leads to unpredictable deployments.
Quick Summary: docker cp copies files between a running container and the host. It's a one-time manual operation — not automated, not version controlled, not part of the image build. In production, files should be in the image or in volumes from the start. Using docker cp to "fix" a running container creates snowflake servers and hides real problems.
Q28:

How does Docker handle environment variables?

Junior

Answer

Variables passed with -e or Compose are injected at runtime and override image defaults without being stored in layers.
Quick Summary: Environment variables are passed to containers via -e VAR=value or --env-file. Inside the container they appear as normal env vars that the app reads. They're great for config (database URLs, ports, feature flags) but not for sensitive secrets — they show up in docker inspect and process listings.
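For example (image name, variable, and file path are illustrative):

```shell
# Pass a single variable, or a whole file of them, at runtime
docker run -d -e DATABASE_URL=postgres://db:5432/app --env-file ./prod.env myapp

# Caveat: values are visible to anyone who can inspect the container
docker inspect --format '{{.Config.Env}}' <container-id>
```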
Q29:

What is the difference between restarting and recreating a container?

Junior

Answer

Restarting keeps the same container instance; recreating destroys it and creates a new instance with updated config.
Quick Summary: Restarting a container (docker restart) stops and starts the same container — same container ID, same writable layer, same data. Recreating a container (docker rm + docker run) destroys the old container entirely and starts fresh from the image. Recreating picks up image changes; restarting does not.
Q30:

Why do some images include an .env but avoid copying it?

Junior

Answer

.env provides build-time defaults; copying it exposes secrets and breaks environment separation.
Quick Summary: Some images ship with a sample .env file as documentation (shows which variables are expected). But you never COPY .env into the image itself — that would bake secrets into the image. The real .env gets injected at runtime via --env-file, keeping secrets out of the image and the registry.
Q31:

How does Docker’s internal DNS resolve service names?

Junior

Answer

Docker assigns DNS records per network; services resolve each other by name.
Quick Summary: Docker's embedded DNS server (127.0.0.11) answers name lookups inside containers on user-defined networks. When container "web" queries "db", Docker's DNS returns the IP of the container named "db". On the default bridge network this doesn't work — only user-defined networks get DNS resolution by service name.
Q32:

Why is ordering of layers important in Dockerfiles?

Junior

Answer

Lower layers cannot be changed without invalidating upper layers; ordering helps maximize caching.
Quick Summary: Each instruction in a Dockerfile creates a cached layer. If an early layer changes (like COPY package.json), all subsequent layers are invalidated and rebuilt. Put slow, rarely-changing steps (install OS packages) early; put fast, frequently-changing steps (copy your code) late. This keeps rebuilds fast during development.
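A typical cache-friendly ordering, sketched for a hypothetical Node project:

```dockerfile
# Slow, rarely-changing steps first — these layers stay cached
# across most rebuilds
FROM node:20
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Fast, frequently-changing steps last — only these layers rebuild
# when your source code changes
COPY . .
CMD ["node", "server.js"]
```

Copying only the lockfiles before `npm ci` means dependency installation is re-run only when dependencies actually change, not on every code edit.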
Q33:

What is the purpose of the WORKDIR instruction?

Junior

Answer

It sets the working directory for subsequent instructions to avoid long paths.
Quick Summary: WORKDIR sets the working directory for all subsequent RUN, CMD, COPY, and ENTRYPOINT instructions in the Dockerfile. If the directory doesn't exist, Docker creates it. Using WORKDIR is cleaner than cd commands inside RUN steps — it's explicit, readable, and applies consistently to all following instructions.
Q34:

Why does docker system prune free so much space?

Junior

Answer

It removes stopped containers, unused networks, dangling images, and build cache that accumulate over time (unused volumes only with the --volumes flag).
Quick Summary: docker system prune removes: stopped containers, dangling images (untagged layers), unused networks, and build cache. All of these accumulate silently during development — especially build cache, which can grow to tens of GBs. Pruning is safe during development; in production, be more targeted (docker image prune, docker volume prune).
Q35:

What happens when two containers expose the same host port?

Junior

Answer

Port conflict occurs; only the first container binds successfully while the second fails.
Quick Summary: Docker won't allow two containers to bind to the same host port — the second container fails to start with a "port already in use" error. Each service needs a unique host port. You can have many containers on the same internal container port (all using port 80 internally) as long as they map to different host ports.
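To illustrate with nginx (host ports chosen arbitrarily):

```shell
# Works: same container port 80, different host ports
docker run -d -p 8080:80 nginx
docker run -d -p 8081:80 nginx

# Fails: host port 8080 is already bound by the first container,
# so this exits with a "port is already allocated" error
docker run -d -p 8080:80 nginx
```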
