Expert Docker Interview Questions

Curated expert-level Docker interview questions for developers preparing for senior and expert roles. 30 questions available.

Docker Interview Questions & Answers

Welcome to our comprehensive collection of Docker interview questions and answers. This page contains expertly curated interview questions covering all aspects of Docker, from fundamental concepts to advanced topics. Whether you're preparing for an entry-level position or a senior role, you'll find questions tailored to your experience level.

Our Docker interview questions are designed to help you:

  • Understand core concepts and best practices in Docker
  • Prepare for technical interviews at all experience levels
  • Master both theoretical knowledge and practical application
  • Build confidence for your next Docker interview

Each question includes detailed answers and explanations to help you understand not just what the answer is, but why it's correct. We cover topics ranging from basic Docker concepts to advanced scenarios that you might encounter in senior-level interviews.

Use the filters below to find questions by difficulty level (Entry, Junior, Mid, Senior, Expert) or focus specifically on code challenges. Each question is carefully crafted to reflect real-world interview scenarios you'll encounter at top tech companies, startups, and MNCs.

Questions

Q1:

How does runc actually create a container process from an OCI bundle?

Expert

Answer

runc reads the OCI spec, sets namespaces, configures cgroups, mounts rootfs using pivot_root, drops capabilities, and execve()s the entrypoint to create an isolated environment.
Quick Summary: runc receives an OCI bundle: a config.json describing the container spec (namespaces, cgroups, mounts, capabilities) and a rootfs directory. runc creates the Linux namespaces, applies cgroup limits, sets up mounts, drops capabilities, and then executes the container's entry process. After handoff, runc exits and the shim takes over.
Q2:

Why is container root not equivalent to host root when user namespaces are enabled?

Expert

Answer

User namespaces remap UID 0 inside container to an unprivileged UID on host, preventing host-level root access.
Quick Summary: With user namespaces enabled, container UID 0 (root) maps to an unprivileged UID on the host (e.g., 100000). Inside the container, a process has full root capabilities within the namespace. But on the host, it's just a regular unprivileged user. Files created by "container root" are owned by UID 100000 on the host — not actual root.
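
The remapping is simple arithmetic over a configured range. A minimal sketch, assuming a base host UID of 100000 and a range of 65536 (a common /etc/subuid default, hard-coded here rather than read from a real system):

```python
# Sketch of the UID translation Docker applies with userns-remap enabled.
# Assumed values: base host UID 100000, range 65536 (typical subuid entry).
SUBUID_BASE = 100000
SUBUID_RANGE = 65536

def host_uid(container_uid: int) -> int:
    """Translate a container UID to the host UID it maps to."""
    if not 0 <= container_uid < SUBUID_RANGE:
        raise ValueError("UID outside the mapped range")
    return SUBUID_BASE + container_uid

# Container "root" is just an unprivileged host user:
print(host_uid(0))     # 100000
print(host_uid(1000))  # 101000
```

Files written by container root therefore show up on the host as owned by UID 100000, which is why backups and bind mounts of userns-remapped containers need matching ownership handling.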
Q3:

What are the security weaknesses of Docker’s default seccomp profile?

Expert

Answer

It blocks dangerous syscalls but still allows many exploitable ones; hardened profiles are needed for secure environments.
Quick Summary: Docker's default seccomp profile blocks about 44 of the ~300+ Linux syscalls — ones rarely needed by containers but dangerous if available: keyctl, ptrace, kexec_load, mount, unshare. It doesn't block everything risky, leaving many syscalls open for compatibility. A custom, tighter profile significantly hardens containers.
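
A tighter profile inverts the model: instead of allow-by-default with a blocklist, it denies by default and allows an explicit list. A hypothetical sketch that emits such a profile as JSON; the syscall list here is illustrative and far too short for any real workload:

```python
import json

# Sketch of a tightened seccomp profile: default-deny, explicit allowlist.
# A production profile must cover every syscall the application makes.
profile = {
    "defaultAction": "SCMP_ACT_ERRNO",     # deny anything not listed
    "architectures": ["SCMP_ARCH_X86_64"],
    "syscalls": [
        {
            "names": ["read", "write", "openat", "close",
                      "mmap", "futex", "exit_group"],
            "action": "SCMP_ACT_ALLOW",
        }
    ],
}

with open("tight-seccomp.json", "w") as f:
    json.dump(profile, f, indent=2)
```

The profile would be applied with `docker run --security-opt seccomp=tight-seccomp.json ...`; that flag is standard Docker CLI, while the profile contents above are a deliberately reduced example.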
Q4:

How does Docker avoid race conditions between runc and dockerd during container creation?

Expert

Answer

Docker inserts a shim process between runc and dockerd so containers continue even if dockerd restarts.
Quick Summary: runc creates namespaces and the container process, then signals containerd-shim that it's ready. dockerd waits for the shim to confirm container start before returning. Internal locking and a handshake protocol prevent race conditions between runc exiting and the shim beginning to manage the container lifecycle.
Q5:

Why does overlay networking introduce additional latency compared to direct bridge networking?

Expert

Answer

VXLAN encapsulation adds overhead and packets may cross nodes, increasing RTT and CPU cost.
Quick Summary: Overlay networking adds VXLAN encapsulation: each packet gets wrapped in a UDP/VXLAN header (50+ bytes of overhead), processed by the kernel's VXLAN driver, and decapsulated on the other end. This is 2-3 extra kernel operations per packet vs direct bridge networking. At high packet rates, this latency adds up.
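
The overhead is fixed per packet, which is also why overlay interfaces run with a reduced MTU. The arithmetic, using standard header sizes:

```python
# VXLAN wraps each inner Ethernet frame in outer IP + UDP + VXLAN headers,
# and the inner Ethernet header now counts against the outer payload.
OUTER_IP, OUTER_UDP, VXLAN_HDR, INNER_ETH = 20, 8, 8, 14
VXLAN_OVERHEAD = OUTER_IP + OUTER_UDP + VXLAN_HDR + INNER_ETH  # 50 bytes

physical_mtu = 1500
overlay_mtu = physical_mtu - VXLAN_OVERHEAD
print(overlay_mtu)  # 1450 -- the reduced MTU typically seen on overlay interfaces
```

On a 1500-byte physical path, every overlay packet loses 50 bytes to encapsulation, so a container sending MTU-sized frames without the adjustment triggers fragmentation on top of the encap/decap CPU cost.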
Q6:

How does Docker’s overlay implementation differ from Kubernetes CNI plugins?

Expert

Answer

Swarm uses libnetwork with built-in control plane; Kubernetes uses pluggable CNIs with independent data/control planes.
Quick Summary: Docker's overlay uses its own VXLAN control plane tightly coupled to the Docker daemon. Kubernetes CNI plugins are modular — any plugin implementing the CNI spec can manage networking (Flannel, Calico, Cilium, Weave). CNI plugins have richer policy support, eBPF options, and are decoupled from the container runtime.
Q7:

What is an OCI image manifest and how does Docker validate it?

Expert

Answer

Manifest lists layers and digests; Docker verifies SHA256 digests before extraction to detect tampering.
Quick Summary: An OCI image manifest is a JSON document listing the image's config blob and all layer descriptors (digest + size). Docker computes the SHA256 of each downloaded blob and compares it against the manifest. If any digest mismatches, the pull fails immediately. The manifest itself is signed in trusted registries using Notary.
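
The digest check is easy to reproduce with hashlib; the manifest and blob below are fabricated stand-ins for illustration, not real registry data:

```python
import hashlib

# Sketch of the verification Docker performs on pull: hash each downloaded
# blob and compare it against the digest and size the manifest promises.
blob = b"pretend this is a layer tarball"
manifest = {
    "layers": [
        {"digest": "sha256:" + hashlib.sha256(blob).hexdigest(),
         "size": len(blob)}
    ]
}

def verify(data: bytes, descriptor: dict) -> bool:
    algo, expected = descriptor["digest"].split(":", 1)
    actual = hashlib.new(algo, data).hexdigest()
    return actual == expected and len(data) == descriptor["size"]

print(verify(blob, manifest["layers"][0]))         # True
print(verify(blob + b"x", manifest["layers"][0]))  # False -- tampered blob
```

Because the manifest is itself fetched by digest (or resolved from a signed tag), a single trusted hash transitively authenticates every byte of the image.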
Q8:

What causes multi-arch images to pull the wrong architecture occasionally?

Expert

Answer

Faulty manifest lists or CI misconfiguration lead clients to select incorrect architecture entries.
Quick Summary: Multi-arch images use a manifest list. If the registry or client has a bug in platform matching, it might resolve the manifest list to the wrong architecture. Also, if an image was pushed without a proper manifest list (just tagged with the wrong architecture label), Docker silently pulls and runs the wrong binary.
Q9:

Why do Docker layers become unshareable across images even if identical?

Expert

Answer

Layer digests depend on metadata like timestamps and file order; differing metadata prevents sharing.
Quick Summary: Two images with identical layers (same files) would ideally share those layers. But if they were built separately with different build contexts or slightly different ordering, even identical-content layers get different SHA256 hashes due to metadata differences in the layer tar archive. Docker can only share layers with bit-for-bit identical tarballs.
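
This is easy to demonstrate: the layer digest covers the tar archive, and tar records metadata such as mtime, not just file content. A small sketch with two "builds" that differ only in timestamp:

```python
import hashlib
import io
import tarfile

# Build a one-file layer tarball in memory and hash it. The only thing
# varied between calls is the file's mtime -- the content is identical.
def layer_digest(mtime: int) -> str:
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        data = b"#!/bin/sh\necho hello\n"
        info = tarfile.TarInfo(name="app.sh")
        info.size = len(data)
        info.mtime = mtime  # metadata difference between the two "builds"
        tar.addfile(info, io.BytesIO(data))
    return hashlib.sha256(buf.getvalue()).hexdigest()

print(layer_digest(1000) == layer_digest(2000))  # False -- no layer sharing
```

Reproducible-build tooling works around this by pinning timestamps and file ordering so that identical content really does produce identical tarballs.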
Q10:

How does buildx leverage BuildKit to execute builds in DAG form?

Expert

Answer

BuildKit builds a dependency graph, parallelizes stages, and caches keyed intermediate states.
Quick Summary: BuildKit models a Dockerfile as a DAG (Directed Acyclic Graph) of build steps. Independent steps (multiple COPY from different stages) run in parallel. buildx extends this to run DAG stages on multiple build nodes simultaneously — cross-platform builds run each architecture's stages in parallel, dramatically cutting multi-arch build time.
Q11:

Why do Docker push and pull operations feel slow even for small layer files?

Expert

Answer

Registry communication involves multiple API calls and digest checks causing high latency.
Quick Summary: Docker push/pull transfers individual layer tarballs sequentially or in small batches. Network round-trips for manifest lookups, layer existence checks (HEAD requests), and actual data transfer all add up. Even a 50MB layer takes multiple round-trips. Content-delivery networks and regional registry mirrors cut pull times significantly.
Q12:

How does Docker prevent cross-repo layer caching attacks in registries?

Expert

Answer

Layer digests require repo-scoped authorization; knowing a digest is not enough to download.
Quick Summary: A malicious actor could claim a layer digest that matches a popular base image layer in another repository. If the registry blindly served any blob to anyone who knew its digest, one user could probe for or free-ride on another repo's private content. Registries therefore scope blob access per repository: even where storage is deduplicated internally, downloading a blob requires authorization for a repo that references it, and cross-repo blob mounts require read access to the source repo.
Q13:

Why can Docker logging drivers cause kernel buffer pressure?

Expert

Answer

High log throughput stresses buffers, causing IO throttling, CPU spikes, and dropped logs.
Quick Summary: A container's stdout/stderr flow through kernel pipe buffers before the logging driver (json-file, syslog, journald) consumes them. If the logging backend can't drain logs fast enough, those buffers fill up. Backpressure propagates to the container's write() calls, so the application blocks on logging. In extreme cases this stalls the entire container, making it appear hung.
Q14:

How are checkpoint and restore used for live migration?

Expert

Answer

CRIU saves process state, TCP sessions, and memory; Docker restores it on another host.
Quick Summary: CRIU (Checkpoint/Restore In Userspace) freezes a running container, captures its full memory state and process tree to disk, then restores it on another host. Docker supports experimental CRIU integration for container live migration. It's used for zero-downtime maintenance, workload migration, and speeding up slow startup apps.
Q15:

What conditions cause CRIU checkpointing to fail?

Expert

Answer

Uncheckpointable operations like ptrace, inotify, special FDs, or kernel mismatches cause failure.
Quick Summary: CRIU fails when containers use kernel features it can't serialize: open network connections (TCP state is hard to migrate), GPU memory, kernel objects without CRIU support, or processes with kernel threads. Some syscalls are non-restartable. Containers using host networking or --privileged are also problematic for CRIU.
Q16:

What is the attack surface of Docker’s network namespace sandboxing?

Expert

Answer

Misconfigured capabilities like CAP_NET_ADMIN allow route manipulation or packet sniffing.
Quick Summary: Docker uses separate network namespaces per container, but they share the host kernel's networking stack. Vulnerabilities in the host kernel's network code (iptables, VXLAN driver) can cross namespace boundaries. VXLAN traffic is unencrypted by default — an attacker on the same L2 network can sniff overlay traffic between containers.
Q17:

How does Docker implement deterministic CPU throttling across containers?

Expert

Answer

The kernel's CFS enforces per-container CPU quotas over fixed periods via cgroups; weights additionally apportion contended CPU fairly.
Quick Summary: Docker uses the cpu.cfs_period_us and cpu.cfs_quota_us cgroup parameters to implement CPU throttling using the Completely Fair Scheduler (CFS). A container with --cpus=0.5 gets 50ms of CPU time per 100ms period. This is deterministic and enforced by the kernel regardless of host CPU load.
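
The translation from the CLI flag to the cgroup parameters is straightforward arithmetic, sketched below using the cgroup v1 names from the summary (cgroups v2 expresses the same pair through cpu.max):

```python
# --cpus=N maps to CFS bandwidth control: over each scheduling period
# (100ms by default), the cgroup may consume quota microseconds of CPU.
CFS_PERIOD_US = 100_000  # default cpu.cfs_period_us

def cfs_quota_us(cpus: float) -> int:
    """Quota Docker writes to cpu.cfs_quota_us for a given --cpus value."""
    return int(cpus * CFS_PERIOD_US)

print(cfs_quota_us(0.5))  # 50000 -- 50ms of CPU time per 100ms period
print(cfs_quota_us(2.0))  # 200000 -- up to two full cores per period
```

Once a container exhausts its quota within a period, the kernel throttles its runnable threads until the next period begins, which is visible as nr_throttled in cpu.stat.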
Q18:

How do registries deduplicate identical layers across millions of images?

Expert

Answer

Content-addressing stores each digest once and tracks references to avoid duplication.
Quick Summary: Registries use content-addressable storage: layers are stored by their SHA256 digest globally. When two images share an identical layer, only one copy is stored — any image manifest can reference it by digest. This massively reduces storage for large registries (like Docker Hub) where many images share the same base layers.
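
The storage model can be sketched as a toy content-addressable blob store: blobs are keyed by digest, so pushing the same layer from two images stores it once and manifests merely hold references. A real registry layers per-repo authorization on top of this:

```python
import hashlib

# Toy content-addressable storage: one copy per unique digest,
# regardless of how many images reference the blob.
class BlobStore:
    def __init__(self):
        self.blobs = {}  # digest -> bytes

    def put(self, data: bytes) -> str:
        digest = "sha256:" + hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)  # dedup: store once per digest
        return digest

store = BlobStore()
base_layer = b"ubuntu base layer contents"
d1 = store.put(base_layer)  # pushed as part of image A
d2 = store.put(base_layer)  # pushed again as part of image B
print(d1 == d2, len(store.blobs))  # True 1 -- one copy, two references
```

Garbage collection then reduces to reference counting: a blob is deletable only when no manifest in the registry still points at its digest.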
Q19:

Why does Docker fail to garbage-collect layers sometimes?

Expert

Answer

Running or stopped containers still reference layers until removed.
Quick Summary: Docker's GC skips layers still referenced by any image manifest or build cache entry. If a manifest or BuildKit cache entry is corrupted or not properly cleaned up, it holds a reference forever — the layer is never collected. Clearing the build cache (docker builder prune) and dangling images usually resolves stuck GC.
Q20:

How do kernel security modules secure Docker collectively?

Expert

Answer

AppArmor/SELinux enforce MAC, Seccomp filters syscalls, and capabilities reduce privileges.
Quick Summary: AppArmor restricts filesystem access, network operations, and capabilities via profiles. SELinux uses type enforcement labels — every process and file has a label, and policy defines allowed interactions. Seccomp filters system calls. Used together (AppArmor + seccomp OR SELinux + seccomp), they provide defense-in-depth beyond namespace isolation.
Q21:

Why is host kernel version critical for Docker performance and security?

Expert

Answer

Kernel defines namespaces, cgroups, OverlayFS, eBPF, and seccomp features.
Quick Summary: Docker containers share the host kernel — there's no hypervisor between them. A kernel vulnerability (privilege escalation, namespace escape) affects all containers on that host. Newer kernel features (cgroups v2, eBPF, user namespaces) provide better isolation. Keeping the host kernel patched is critical for container security.
Q22:

Why does running systemd inside Docker break container assumptions?

Expert

Answer

systemd expects full init control and multiple processes, conflicting with container isolation.
Quick Summary: systemd expects to be PID 1 with full access to cgroups, D-Bus, and systemd-specific kernel interfaces. Containers restrict all of these — no cgroup write access, no D-Bus, limited capabilities. systemd inside a container needs --privileged to work at all, which defeats container isolation. Use supervisord or tini instead for multi-process containers.
Q23:

How do multi-tenant platforms prevent noisy-neighbor problems?

Expert

Answer

Strict CPU, memory, IO limits and cgroup throttling isolate resource usage.
Quick Summary: Noisy-neighbor prevention uses cgroups limits (CPU, memory, I/O) to cap each tenant's resource usage. Separate namespaces prevent process and network visibility. Separate Docker networks enforce traffic isolation. At the platform level, containers are scheduled on separate nodes or NUMA domains for the most sensitive tenants.
Q24:

How does Docker isolate high-resolution timers?

Expert

Answer

By default it doesn't: containers share the host's clocks and can read high-resolution timers, enabling timing side channels.
Quick Summary: By default, containers share the host's clock and can read high-resolution timers. Docker doesn't expose a timer namespace (it's a newer kernel feature). A container can observe timing information useful for side-channel attacks (Spectre-class). Some hardened environments mount a fake /proc/timer_list to prevent this.
Q25:

Why is Docker used with immutable infrastructure principles?

Expert

Answer

Containers are stateless and replaceable, minimizing configuration drift.
Quick Summary: Immutable infrastructure means servers are never modified after deployment — if you need a change, you build a new image and replace containers, never ssh in and patch. Docker makes this natural: images are versioned, containers are ephemeral, rollbacks are just pulling the previous image. State lives in external volumes or databases.
Q26:

Why is image attestation important in enterprise pipelines?

Expert

Answer

Attestation proves who built the image and ensures supply-chain integrity.
Quick Summary: Image attestation is a signed statement (using Sigstore/cosign) proving an image was built from a specific source commit, by a specific CI pipeline, with a verified build process. It lets you verify supply chain integrity — not just that the image hash matches, but that it was built correctly and wasn't tampered with post-build.
Q27:

How does Docker’s network sandbox interact with eBPF plugins like Cilium?

Expert

Answer

eBPF replaces iptables for fast packet filtering and direct routing.
Quick Summary: Docker's network sandbox creates a network namespace for each container. eBPF-based CNI plugins like Cilium attach eBPF programs to the veth interfaces at the namespace boundary, intercepting packets as they enter/leave. Cilium can enforce L7 policies (HTTP path filtering), collect observability metrics, and replace iptables entirely.
Q28:

Why did orchestrators shift from Docker runtime to containerd and CRI?

Expert

Answer

containerd provides slim, fast, Kubernetes-native runtime integration.
Quick Summary: Docker's runtime (originally dockerd → runc) was tightly coupled. Kubernetes standardized the Container Runtime Interface (CRI), allowing any CRI-compliant runtime. containerd (Docker's own) and CRI-O emerged as lighter, purpose-built runtimes for Kubernetes — no docker CLI overhead, just the container lifecycle management Kubernetes needs.
Q29:

How does Docker optimize overlay networks without full mesh tunnels?

Expert

Answer

Swarm uses gossip plus VXLAN mapping to create tunnels only when needed.
Quick Summary: Docker overlay uses VXLAN in multicast or unicast mode. For large clusters, full mesh unicast VXLAN tunnels between every pair of nodes scale as O(n²). Modern solutions use BGP EVPN to distribute MAC/IP mappings, eliminating the need for full mesh tunnels. Cilium replaces VXLAN entirely with direct routing using eBPF.
Q30:

What causes fsync storms inside containers?

Expert

Answer

Many processes flushing to disk simultaneously through OverlayFS saturate the backing filesystem, causing heavy IO waits and slow database transactions.
Quick Summary: fsync storms happen when many processes inside a container simultaneously flush data to disk (database checkpoints, log rotation, package installs). The container's writes go through OverlayFS and the host filesystem — fsync calls queue up, saturating I/O. Databases inside containers need direct volume access, not the overlay filesystem, to avoid this.
