
Mid Docker Interview Questions

Curated mid-level Docker interview questions for developers targeting mid-level positions. 30 questions available.


Docker Interview Questions & Answers


Welcome to our comprehensive collection of Docker interview questions and answers. This page contains expertly curated interview questions covering all aspects of Docker, from fundamental concepts to advanced topics. Whether you're preparing for an entry-level position or a senior role, you'll find questions tailored to your experience level.

Our Docker interview questions are designed to help you:

  • Understand core concepts and best practices in Docker
  • Prepare for technical interviews at all experience levels
  • Master both theoretical knowledge and practical application
  • Build confidence for your next Docker interview

Each question includes detailed answers and explanations to help you understand not just what the answer is, but why it's correct. We cover topics ranging from basic Docker concepts to advanced scenarios that you might encounter in senior-level interviews.

Use the filters below to find questions by difficulty level (Entry, Junior, Mid, Senior, Expert) or focus specifically on code challenges. Each question is carefully crafted to reflect real-world interview scenarios you'll encounter at top tech companies, startups, and MNCs.

Questions

30 questions
Q1:

How does Docker internally leverage Linux namespaces to isolate containers?

Mid

Answer

Docker uses namespaces for PID, NET, IPC, UTS, and MNT. Each container sees its own isolated OS view.
Quick Summary: Linux namespaces are kernel features that partition global resources into isolated views. Docker uses: PID namespace (container sees its own process tree), NET namespace (its own network stack), MNT namespace (its own filesystem view), UTS (its own hostname), IPC (its own inter-process communication). Each container gets a fresh set.
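A quick way to see this isolation is to compare namespace IDs on the host with those inside a container. A minimal sketch, assuming Docker is installed and the `alpine` image is available (the inode numbers shown will differ per machine):

```shell
# Namespaces for the current shell on the host:
ls -l /proc/self/ns/

# Namespaces as seen by PID 1 inside a fresh container — the pid, net,
# mnt, uts, and ipc entries have different inode numbers than the host's,
# proving the container is in its own set:
docker run --rm alpine ls -l /proc/1/ns/
```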
Q2:

How does cgroups limit container resources?

Mid

Answer

cgroups enforce limits on CPU, RAM, I/O, and PIDs. Exceeding limits causes throttling or OOM kills.
Quick Summary: cgroups (control groups) are a Linux kernel feature that limits and monitors resource usage. Docker uses them to cap how much CPU, RAM, and block I/O a container can consume, and how many processes it can spawn. A container with --memory=512m cannot use more than 512MB of RAM — the kernel enforces it, not the app itself.
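A minimal sketch of setting and verifying limits, assuming Docker is installed (the container name and image are illustrative):

```shell
# Cap the container at 512 MB of RAM and 1.5 CPUs:
docker run -d --name capped --memory=512m --cpus=1.5 nginx

# Confirm the limits Docker recorded (memory in bytes, CPU in nano-CPUs):
docker inspect capped --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}}'

# Watch live usage measured against those limits:
docker stats --no-stream capped
```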
Q3:

Why is OverlayFS used as Docker’s storage backend on Linux?

Mid

Answer

OverlayFS merges read-only image layers with a writable upper layer, reducing duplication and speeding up container creation.
Quick Summary: OverlayFS is a union filesystem that stacks layers on top of each other — perfect for Docker's layered image model. The lower directory (image layers) is read-only; the upper directory (container layer) is writable. Reads check upper first, then lower. OverlayFS is fast, well-supported in the Linux kernel, and Docker's default storage driver on most systems.
Q4:

How do orphaned volumes accumulate if not managed properly?

Mid

Answer

Volumes persist after container deletion, accumulating unused data and consuming disk space.
Quick Summary: When you create a volume without naming it, Docker assigns a random ID. When you delete the container, the volume stays but becomes "orphaned" — no container references it, but it takes up disk space. Over time, hundreds of orphaned volumes accumulate. docker volume prune removes all that are unreferenced.
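A short sketch of finding and cleaning orphaned volumes, assuming Docker is installed (the volume name is illustrative):

```shell
# List volumes no container references:
docker volume ls --filter dangling=true

# Remove all unreferenced volumes (prompts before deleting):
docker volume prune

# Avoid orphans in the first place by naming volumes explicitly,
# so they are easy to identify and reuse:
docker run -d -v appdata:/var/lib/data alpine sleep 3600
```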
Q5:

Why is docker exec not recommended for critical automation?

Mid

Answer

If a container restarts, exec commands break. Exec depends on runtime state and is nondeterministic.
Quick Summary: docker exec attaches a new process to a running container — great for one-off debugging. But it's interactive and ad-hoc, not reliable for automation. If the container restarts, your exec session is gone. For automation, bake the logic into the container startup or use proper orchestration rather than shelling in.
Q6:

How does Docker handle DNS resolution for containers on user-defined networks?

Mid

Answer

Docker embeds a DNS server inside each network; containers register by name for service discovery.
Quick Summary: Docker's embedded DNS server (127.0.0.11) resolves container names to IPs on user-defined networks. When a container's IP changes (on restart), Docker updates the DNS entry automatically. This works only on user-defined networks — containers on the default bridge use IPs directly, not names.
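The behavior is easy to demonstrate. A sketch assuming Docker is installed (network and container names are illustrative):

```shell
# Create a user-defined network and start a named container on it:
docker network create appnet
docker run -d --name db --network appnet postgres:16

# From any other container on the same network, the name "db" resolves
# via Docker's embedded DNS at 127.0.0.11:
docker run --rm --network appnet alpine nslookup db
```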
Q7:

Explain how Docker checks container health through health status propagation.

Mid

Answer

Healthchecks run periodically and transition through starting, healthy, and unhealthy states used by orchestrators.
Quick Summary: Docker's healthcheck runs a command periodically (default every 30s). After a configurable number of consecutive failures (default 3), the container's health status changes to "unhealthy." Docker Swarm uses this status to stop routing traffic to the container and replace it; Kubernetes does not consume Docker's HEALTHCHECK and instead relies on its own liveness and readiness probes.
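A sketch of a HEALTHCHECK instruction, assuming the image ships curl and the app serves a /health endpoint on port 8080 (both are illustrative assumptions, not part of stock nginx):

```dockerfile
FROM nginx:1.25
# Probe every 30s, allow 10s of startup grace, mark unhealthy
# after 3 consecutive failures:
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1
```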
Q8:

Why does Docker build cache break when COPY or ADD instructions change?

Mid

Answer

COPY/ADD track file checksums; any change invalidates following layers, causing rebuild.
Quick Summary: Docker layer caching is based on the content of the instruction and what was copied at that point. If you change any file that COPY or ADD includes, the hash of that instruction changes, invalidating that layer's cache and every layer after it. Even a timestamp change in a copied file busts the cache for the rest of the build.
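This is why Dockerfiles order COPY instructions from least to most frequently changed. A sketch assuming a Node.js project (file names are illustrative):

```dockerfile
FROM node:20-slim
WORKDIR /app

# Copy only the dependency manifests first — this layer's cache breaks
# only when dependencies change, not on every source edit:
COPY package.json package-lock.json ./
RUN npm ci

# Source changes invalidate the cache only from here down:
COPY . .
CMD ["node", "server.js"]
```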
Q9:

What issues arise when storing logs inside container filesystem instead of stdout/stderr?

Mid

Answer

Logs fill the writable layer, degrade performance, and are lost when containers are deleted.
Quick Summary: Storing logs in the container filesystem fills the container's writable layer — you can't easily ship them, rotate them, or query them from outside. Logging to stdout/stderr lets Docker capture them via the log driver and forward to systems like CloudWatch, ELK, or Splunk. Containers should be observable, not self-archiving.
Q10:

What is the significance of the container’s init process (PID 1)?

Mid

Answer

Most applications are not written to run as PID 1 and fail to forward signals or reap zombies; using tini or an init wrapper is recommended.
Quick Summary: PID 1 is the init process — it must handle signals properly and reap zombie child processes. In Docker, your app IS PID 1. Many apps aren't written to handle SIGTERM correctly as PID 1, causing containers to ignore graceful shutdown signals. Solutions: use ENTRYPOINT exec form, or prepend your command with tini (a minimal init).
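A sketch of the tini approach, assuming a Debian-based image (the app path is illustrative):

```dockerfile
FROM debian:bookworm-slim
RUN apt-get update && apt-get install -y --no-install-recommends tini \
    && rm -rf /var/lib/apt/lists/*
# Exec form: tini becomes PID 1, forwards signals, and reaps zombies;
# the app runs as a supervised child:
ENTRYPOINT ["/usr/bin/tini", "--"]
CMD ["/app/server"]
```

Alternatively, `docker run --init` injects Docker's bundled init without changing the image.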
Q11:

Why is it dangerous to use containers with host networking mode?

Mid

Answer

Host networking bypasses isolation and exposes host ports, increasing attack surface.
Quick Summary: Host networking (--network=host) puts the container directly on the host's network stack — no isolation, no NAT, the container sees and can reach everything the host can. If the container is compromised, the attacker has full network access to the host environment. Only justifiable for very specific high-performance use cases.
Q12:

What is the effect of using docker run --privileged?

Mid

Answer

This grants full host permissions and device access, defeating container security.
Quick Summary: --privileged gives the container nearly all Linux capabilities — it can mount filesystems, load kernel modules, and interact with hardware. Essentially the container becomes root on the host. Extremely dangerous in production — only ever needed for containers that legitimately need kernel-level access (like docker-in-docker setups).
Q13:

Why do many companies prefer private container registries over Docker Hub?

Mid

Answer

Private registries provide access control, compliance, privacy, and no rate limits.
Quick Summary: Private registries give companies control over access, SLA, scanning, and retention. Docker Hub is public by default, has rate limits, and can go down (affecting all your deployments). Private registries run under your control, whether self-hosted (Harbor) or in your own cloud account (ECR, GCR), support role-based access, and integrate with vulnerability scanners.
Q14:

What performance impact occurs when containers run on Btrfs or ZFS storage drivers?

Mid

Answer

These drivers support snapshots but have slower random writes compared to overlay2.
Quick Summary: Btrfs and ZFS offer advanced features (snapshots, deduplication, checksumming) but are more complex than OverlayFS. They can have worse write performance in some workloads due to copy-on-write overhead at the filesystem level, especially with many small random writes. OverlayFS is faster and simpler for most container workloads.
Q15:

How does Docker optimize repeated downloads using shared layers?

Mid

Answer

Layers with identical checksums are reused, saving storage and reducing pull times.
Quick Summary: Docker images are built from layers, each identified by a SHA256 hash. If two images share a layer (same base image, same dependencies), Docker downloads and stores that layer only once. When pulling a new image, Docker skips layers it already has. This saves significant bandwidth and disk space in large deployments.
Q16:

What’s the difference between a dangling image and an unused image?

Mid

Answer

Dangling images are untagged leftovers; unused images are tagged but not referenced by containers.
Quick Summary: A dangling image is an untagged layer — it was previously tagged but got replaced when you rebuilt. It has no repository or tag (shown as <none>:<none> in docker images) and nothing uses it. An unused image has a tag but no running or stopped container is using it. docker image prune removes dangling images; docker image prune -a removes unused ones too.
Q17:

How does Docker ensure network isolation using iptables?

Mid

Answer

Docker configures NAT, forwarding, and bridge rules for container isolation.
Quick Summary: Docker uses iptables rules to control traffic between containers, from containers to the host, and from the host to containers. Each container network gets its own iptables chain. Docker adds rules that forward traffic appropriately, NAT outbound traffic, and drop connections that shouldn't cross network boundaries.
Q18:

Why do CI pipelines use Alpine but production avoids it sometimes?

Mid

Answer

Alpine uses musl libc, causing compatibility issues; production prefers Debian/Ubuntu.
Quick Summary: Alpine is tiny and builds fast — great for CI where image size and pull time matter. In production, Alpine's musl libc can cause compatibility issues with software compiled for glibc (most Linux software). Debugging is harder too — no bash, no common tools. Production images often use debian-slim as a balance.
Q19:

How does Docker ensure each container gets its own hostname and domain name?

Mid

Answer

UTS namespace provides isolated hostnames for each container.
Quick Summary: Docker assigns each container its own UTS namespace — it gets its own hostname (defaults to the container ID) and its own domain name. You set a custom hostname with --hostname. This means the container thinks it's on its own machine with its own identity, regardless of the host's actual hostname.
Q20:

Why is it important to pin image versions in production?

Mid

Answer

Pinning ensures reproducible builds and stable environments.
Quick Summary: latest tag is mutable — it changes whenever someone pushes a new image. In production, a redeployment that pulls latest could get a completely different image than before. Pinning to a specific version (nginx:1.25.3 or an image digest) guarantees every deployment is identical and reproducible.
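A minimal sketch of the difference in a Dockerfile:

```dockerfile
# Avoid: "latest" is mutable and can change between builds.
# FROM nginx:latest

# Prefer: an explicit version tag, reproducible across deployments.
FROM nginx:1.25.3
```

For full immutability, pin by digest (`FROM nginx@sha256:…`, using the digest reported by `docker images --digests`), since even version tags can technically be re-pushed.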
Q21:

How can a misconfigured healthcheck cause a container to restart repeatedly?

Mid

Answer

If healthchecks fail repeatedly, orchestrators keep restarting the container.
Quick Summary: If a healthcheck command always fails (wrong command, wrong endpoint, wrong port), Docker marks the container unhealthy after the failure threshold. Some orchestrators then restart the container repeatedly — causing a crash loop. Always test your healthcheck command manually inside the container before deploying.
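A sketch of that manual test, assuming Docker is installed and a container named `web` is running (the endpoint is an illustrative assumption):

```shell
# Run the exact healthcheck command inside the container by hand:
docker exec web curl -f http://localhost:8080/health

# Inspect the recorded health state, failure streak, and the
# output of the last few probes:
docker inspect web --format '{{json .State.Health}}'
```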
Q22:

What happens if a container uses more memory than allowed by cgroup?

Mid

Answer

The kernel OOM killer terminates the container, logged as OOMKilled.
Quick Summary: The cgroup OOM (Out of Memory) killer terminates the container's process. Docker captures this and marks the container as exited with an OOM error. If the container has a restart policy, Docker restarts it — potentially in a loop if the memory limit is genuinely too small. Increase the limit or optimize the app's memory use.
Q23:

How does Docker Compose handle service dependencies using depends_on?

Mid

Answer

depends_on controls startup order but doesn’t wait for health; healthchecks are needed.
Quick Summary: depends_on controls startup order — it waits for the dependent service container to start (not to be healthy or ready). So even with depends_on, a web container might start before the database is actually accepting connections. For true readiness, combine depends_on with condition: service_healthy using a healthcheck.
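A sketch of that combination in Compose, assuming a Postgres database (service names and the pg_isready check are illustrative):

```yaml
services:
  db:
    image: postgres:16
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    image: myapp:1.0
    depends_on:
      db:
        condition: service_healthy  # wait for healthy, not merely started
```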
Q24:

Why should build tools be removed from production images?

Mid

Answer

Build tools increase size and attack surface; multi-stage builds remove them.
Quick Summary: Build tools (compilers, make, npm and its dev dependencies) are needed to build the app but not to run it. Including them bloats the image (hundreds of MBs) and adds unnecessary attack surface. Multi-stage builds solve this cleanly: build in a heavy stage, copy only the compiled artifact into a minimal final stage.
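A sketch of a multi-stage build, assuming a Go project (paths and the distroless base are illustrative):

```dockerfile
# Stage 1: heavy build environment with the full toolchain.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# Stage 2: minimal runtime — no compiler, no sources, just the binary.
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```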
Q25:

How does Docker clean up unused networks and why do they accumulate?

Mid

Answer

Networks persist after containers are removed; docker network prune cleans them.
Quick Summary: Docker creates a network for each Compose project or docker network create call. When containers are removed but networks aren't explicitly removed, the networks linger, empty but still holding bridge interfaces, iptables rules, and subnet address space. docker network prune removes all networks with no containers. Many CI runs create networks without cleaning them up.
Q26:

Why should environment variables never include multi-line secrets?

Mid

Answer

Env vars appear in inspect, process lists, and layer history; use secret managers.
Quick Summary: Environment variables are single-line strings. Multi-line secrets (certificates, private keys) break parsing and are easily exposed in logs, docker inspect output, or /proc/self/environ inside the container. Use Docker Secrets or volume-mount a secrets file for anything multi-line or sensitive.
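A sketch of the file-based alternative in Compose (service and file names are illustrative):

```yaml
services:
  app:
    image: myapp:1.0
    secrets:
      - tls_key   # mounted at /run/secrets/tls_key, never in the environment
secrets:
  tls_key:
    file: ./certs/server.key   # multi-line PEM key stays a file end to end
```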
Q27:

How does Docker isolate container PIDs?

Mid

Answer

PID namespaces isolate process trees so containers cannot see each other’s processes.
Quick Summary: Docker uses PID namespaces to give each container its own process ID space. Inside the container, the main process is always PID 1. Processes in different containers have overlapping PID numbers that are completely separate — a PID 42 in container A has nothing to do with PID 42 in container B or the host.
Q28:

What issue arises when running too many containers on one host?

Mid

Answer

Resource contention leads to CPU throttling, RAM pressure, IO saturation, and network slowness.
Quick Summary: Too many containers compete for CPU, RAM, disk I/O, and network bandwidth. The host kernel also has overhead per container (namespaces, cgroups, iptables rules). At extreme container counts, scheduling and networking overhead becomes significant. Right-size your containers and use orchestrators to spread load across multiple hosts.
Q29:

Why is it recommended to use non-root users inside images?

Mid

Answer

Root in a container is root on the host kernel; escaping container compromises the host.
Quick Summary: Running as root inside a container means file ownership in volumes is root — complex permission issues on the host. More critically, if the container is exploited and there's a container breakout vulnerability, the attacker immediately has root on the host. Non-root users contain the blast radius of any compromise.
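A sketch of creating and switching to an unprivileged user, assuming a Debian-based image (user name and UID are illustrative):

```dockerfile
FROM debian:bookworm-slim
# System user with a fixed, unprivileged UID:
RUN groupadd -r app && useradd -r -g app -u 10001 app
COPY --chown=app:app ./server /srv/server
# Everything from here on, including the running process, is non-root:
USER app
ENTRYPOINT ["/srv/server"]
```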
Q30:

What’s the difference between soft and hard resource limits in Docker?

Mid

Answer

Soft limits may be exceeded temporarily; hard limits cannot be exceeded.
Quick Summary: Soft limits are advisory — the container can exceed them when memory is plentiful (cgroup v1: memory.soft_limit_in_bytes). Hard limits are enforced — the kernel kills processes that exceed them (cgroup v1: memory.limit_in_bytes). Docker's --memory flag sets the hard limit; --memory-reservation sets the soft limit, which the kernel applies under host memory pressure.
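A sketch of setting both limits on one container, assuming Docker is installed (name and image are illustrative):

```shell
# --memory is the hard cap: exceeding it triggers the OOM killer.
# --memory-reservation is the soft target: enforced only when the
# host itself comes under memory pressure.
docker run -d --name svc --memory=512m --memory-reservation=256m nginx

# Verify both values (bytes):
docker inspect svc --format '{{.HostConfig.Memory}} {{.HostConfig.MemoryReservation}}'
```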
