Junior Kubernetes Interview Questions

Curated Kubernetes interview questions for developers preparing for junior-level positions. 35 questions available.

Kubernetes Interview Questions & Answers

Welcome to our comprehensive collection of Kubernetes interview questions and answers. This page contains expertly curated interview questions covering all aspects of Kubernetes, from fundamental concepts to advanced topics. Whether you're preparing for an entry-level position or a senior role, you'll find questions tailored to your experience level.

Our Kubernetes interview questions are designed to help you:

  • Understand core concepts and best practices in Kubernetes
  • Prepare for technical interviews at all experience levels
  • Master both theoretical knowledge and practical application
  • Build confidence for your next Kubernetes interview

Each question includes detailed answers and explanations to help you understand not just what the answer is, but why it's correct. We cover topics ranging from basic Kubernetes concepts to advanced scenarios that you might encounter in senior-level interviews.

Questions span difficulty levels (Entry, Junior, Mid, Senior, Expert) and include hands-on code challenges. Each question is carefully crafted to reflect real-world interview scenarios you'll encounter at top tech companies, startups, and MNCs.

Questions

35 questions
Q1:

How does Kubernetes ensure Pods are rescheduled automatically when a node fails?

Junior

Answer

The node controller marks NotReady nodes, evicts their Pods, and workload controllers recreate them on healthy nodes.
Quick Summary: When a node fails, its kubelet stops sending heartbeats. The node controller marks the node NotReady and eventually evicts its Pods. Workload controllers (Deployment, ReplicaSet) notice the Pod count dropped below desired and create new Pods on healthy nodes. This is automatic — no human intervention needed.
Q2:

Why do Pods restart even when their containers exit with code 0?

Junior

Answer

restartPolicy: Always restarts a Pod's containers regardless of exit code.
Quick Summary: The restartPolicy: Always setting restarts containers regardless of exit code — even 0 (clean exit). This is intentional for long-running services that should never stop. If your container exits cleanly with 0 but you don't want it restarted, use restartPolicy: OnFailure (restart only on non-zero exit) or restartPolicy: Never.
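The three policies can be sketched in a minimal Pod manifest (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot-task        # hypothetical name
spec:
  # Always (default): restart on any exit, even a clean exit 0.
  # OnFailure: restart only on non-zero exit.
  # Never: never restart.
  restartPolicy: OnFailure
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sh", "-c", "echo done"]
```

Note that Pods managed by a Deployment must use Always; OnFailure and Never are only valid for bare Pods and Job-managed Pods.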
Q3:

What is the role of kube-proxy in networking?

Junior

Answer

kube-proxy manages iptables/IPVS rules to route Service traffic to Pod endpoints.
Quick Summary: kube-proxy runs on every node and programs network rules (iptables or IPVS) that implement Service virtual IPs. When a Pod connects to a Service ClusterIP, kube-proxy's rules intercept the packet and DNAT it to a random healthy Pod backing that Service. It's how Kubernetes load-balances traffic to Services without a dedicated load balancer per Service.
Q4:

How do readiness probes differ from liveness probes?

Junior

Answer

Readiness controls traffic routing; Liveness decides if a Pod should be restarted.
Quick Summary: Readiness probe: is this container ready to receive traffic? If it fails, the Pod is removed from the Service endpoints — traffic stops going to it, but it keeps running. Liveness probe: is this container alive? If it fails, kubelet kills and restarts the container. Readiness gates traffic; liveness recovers from stuck processes.
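The distinction can be sketched with both probes on one container (the endpoints, port, and image are hypothetical):

```yaml
containers:
  - name: web
    image: myapp:1.0          # hypothetical image
    readinessProbe:           # failing -> Pod removed from Service endpoints
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
    livenessProbe:            # failing -> kubelet kills and restarts container
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
```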
Q5:

What is the Kubernetes API Server responsible for?

Junior

Answer

It validates requests, stores configuration, exposes REST endpoints, and communicates with etcd.
Quick Summary: The API Server is the central hub for all cluster communication — every component (kubelet, scheduler, controllers) reads and writes state through it. It validates and persists objects to etcd, enforces authentication and authorization, runs admission controllers, and serves the Kubernetes REST API. Nothing happens in the cluster without going through it.
Q6:

Why is using latest tag dangerous in Kubernetes deployments?

Junior

Answer

Kubernetes cannot track version changes; rolling updates and rollbacks become unpredictable.
Quick Summary: latest is a mutable tag — anyone can push a new image with that tag at any time. If your Deployment uses latest, a node that pulls the image gets whatever was latest at that moment — potentially a different version than other nodes. Pin to an immutable tag (like a Git SHA or semantic version) so every node runs exactly the same image.
Q7:

What is the difference between a DaemonSet and a Deployment?

Junior

Answer

Deployment runs N replicas; DaemonSet ensures one Pod per node.
Quick Summary: DaemonSet ensures exactly one Pod runs on every node (or a selected subset) — used for node-level agents like log collectors, monitoring, or network plugins. Deployment manages a pool of Pods across the cluster without caring which node they land on. DaemonSet scales with nodes; Deployment scales with replicas.
Q8:

How does a Horizontal Pod Autoscaler know when to scale?

Junior

Answer

HPA monitors CPU, memory, or custom metrics and adjusts replicas.
Quick Summary: HPA watches metrics (CPU usage, memory, custom metrics from Prometheus) and compares them to target values you define. If CPU usage exceeds 70% target across Pods, it increases replica count. If it drops well below, it scales down. It queries the metrics-server (or custom metrics adapter) and adjusts replicas automatically.
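A minimal sketch of an HPA targeting the 70% CPU example above (the Deployment name is hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:             # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web                 # hypothetical Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale up above 70% average CPU
```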
Q9:

Why do StatefulSets require a Headless Service?

Junior

Answer

Headless Services provide stable DNS entries for each Pod.
Quick Summary: StatefulSet Pods need stable network identities — pod-0, pod-1, pod-2. A regular Service would load-balance across them randomly. A Headless Service (clusterIP: None) creates DNS records for each Pod individually (pod-0.service.namespace.svc.cluster.local) so clients can address specific Pods directly, which databases and distributed systems require.
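A minimal headless Service sketch (service name, selector, and port are hypothetical, here for a database):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  clusterIP: None       # headless: no virtual IP; per-Pod DNS records instead
  selector:
    app: db
  ports:
    - port: 5432
```

With a StatefulSet named db, clients can then reach db-0.db.default.svc.cluster.local directly.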
Q10:

What is a PersistentVolume (PV)?

Junior

Answer

PV is cluster storage that persists beyond Pod lifecycles.
Quick Summary: A PersistentVolume (PV) is a piece of storage provisioned by an admin or dynamically by a StorageClass — it exists in the cluster independently of Pods. It abstracts the underlying storage (NFS, AWS EBS, GCE PD) into a Kubernetes resource that can be claimed and used by Pods without them knowing the storage details.
Q11:

What is the difference between PV and PVC?

Junior

Answer

PV is actual storage; PVC is user request bound to a matching PV.
Quick Summary: PV (PersistentVolume) is the actual storage resource — provisioned and available in the cluster. PVC (PersistentVolumeClaim) is a request for storage from a Pod — "I need 10Gi of ReadWriteOnce storage." Kubernetes binds a matching PV to the PVC. Pods use the PVC; they don't interact with the PV directly.
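The "I need 10Gi of ReadWriteOnce storage" request above looks like this as a PVC (the claim name is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim      # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce     # mountable read-write by a single node
  resources:
    requests:
      storage: 10Gi
```

A Pod then references the claim by name under volumes (persistentVolumeClaim.claimName: data-claim) and never touches the PV directly.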
Q12:

What is a StorageClass used for?

Junior

Answer

It defines dynamic provisioning of volumes using provisioners and parameters.
Quick Summary: StorageClass defines how to dynamically provision PersistentVolumes on demand. When a PVC requests storage, Kubernetes uses the StorageClass provisioner (AWS EBS, GCE PD, NFS) to automatically create a PV that matches. Without StorageClass, admins must manually pre-provision PVs — dynamic provisioning via StorageClass is the modern approach.
Q13:

Why do Pods sometimes remain in a Terminating state indefinitely?

Junior

Answer

Due to stuck finalizers, runtime issues, volume problems, or network partitions.
Quick Summary: Pods get stuck in Terminating when a finalizer is preventing deletion — the controller responsible for removing the finalizer never does so (it might be dead or buggy). You can force-delete with kubectl delete pod --force --grace-period=0, but this bypasses cleanup logic. Investigate which finalizer is blocking before force-deleting.
Q14:

How does Kubernetes perform rolling updates without downtime?

Junior

Answer

It creates new Pods before terminating old ones, controlled by maxSurge and maxUnavailable.
Quick Summary: Rolling updates gradually replace old Pods with new ones. Kubernetes creates new Pods with the new image, waits for them to pass readiness probes, then terminates old ones — maxSurge controls how many extra Pods are created, maxUnavailable controls how many old Pods can be down at once. Traffic only routes to ready Pods throughout.
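The maxSurge/maxUnavailable knobs live in the Deployment's update strategy; a sketch (names and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1           # at most 1 extra Pod above the desired count
      maxUnavailable: 0     # never drop below 4 ready Pods
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: myapp:2.0   # hypothetical image
```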
Q15:

What is the purpose of a Pod Disruption Budget?

Junior

Answer

PDB ensures minimum available replicas during voluntary disruptions.
Quick Summary: A Pod Disruption Budget (PDB) sets the minimum number of Pods that must stay running during voluntary disruptions (node drains, upgrades). It prevents operations like kubectl drain from taking down so many Pods that your service loses quorum or availability. Kubernetes refuses to evict Pods if it would violate the PDB.
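A minimal PDB sketch for the scenario above (name and selector are hypothetical):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2        # at least 2 matching Pods must stay up
  selector:
    matchLabels:
      app: web
```

You can alternatively set maxUnavailable instead of minAvailable, but not both.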
Q16:

Why are jobs used instead of Deployments for batch tasks?

Junior

Answer

Jobs ensure tasks run to completion, retrying on failure.
Quick Summary: Deployments run processes continuously and restart them on exit — wrong for batch work that should run once and complete. Jobs run a Pod to completion (exit 0) then stop — they don't restart unnecessarily. Jobs track success/failure, support parallelism (run N tasks simultaneously), and have retry logic for failures.
Q17:

What is the role of a CronJob?

Junior

Answer

CronJobs schedule recurring jobs based on cron expressions.
Quick Summary: A CronJob creates Jobs on a cron schedule — runs a task every hour, every night, every Monday. It's Kubernetes' equivalent of a Unix cron job but for containerized tasks. CronJob manages the schedule; the Job manages execution and retries. Useful for database backups, report generation, cache warming, and cleanup tasks.
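A nightly-backup CronJob, as a sketch (name, schedule, and image are hypothetical):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-backup       # hypothetical name
spec:
  schedule: "0 2 * * *"      # every day at 02:00
  jobTemplate:               # the Job created on each tick
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: backup-tool:1.4   # hypothetical image
```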
Q18:

What is image pull policy and why does it matter?

Junior

Answer

It controls when Kubernetes pulls images; misconfiguration leads to stale or missing images.
Quick Summary: Image pull policy controls when Kubernetes pulls an image from the registry: Always (pull every time, always fresh but slower), IfNotPresent (pull only if not cached, faster but may use stale images), Never (never pull, use only what's cached). The default is IfNotPresent, except when the tag is latest or omitted, in which case it defaults to Always. Prefer immutable tags; if you must use a mutable tag in production, Always ensures each restart pulls the current image.
Q19:

Why is RBAC important in Kubernetes?

Junior

Answer

RBAC restricts actions to authorized users, preventing unauthorized access.
Quick Summary: RBAC controls who can do what in the cluster. Without it, any Pod with a ServiceAccount could read Secrets, create new Pods, or delete Deployments. RBAC lets you grant least-privilege access — a Pod can only read its own ConfigMap, a developer can only deploy to the staging namespace. Essential for multi-team and multi-tenant clusters.
Q20:

What is a ServiceAccount?

Junior

Answer

A ServiceAccount provides identity for Pods to authenticate to the API server.
Quick Summary: A ServiceAccount is an identity for Pods — it determines what API permissions the Pod has. By default, every Pod gets the default ServiceAccount with a mounted token. Assign a specific ServiceAccount with narrowly-scoped RBAC roles to Pods that need to interact with the Kubernetes API (like operators and controllers).
Q21:

Why disable auto-mount of ServiceAccount tokens in some Pods?

Junior

Answer

Pods without API needs should not get credentials to reduce risk.
Quick Summary: By default, Pods auto-mount a ServiceAccount token that allows API calls to the cluster. Most application Pods don't need to call the Kubernetes API — they just serve web requests or process data. Auto-mounting an unused token is an unnecessary attack surface. Disable it for workloads that don't interact with the cluster API.
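Disabling the auto-mount is a one-line Pod setting (name and image are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  automountServiceAccountToken: false   # no API token mounted into the Pod
  containers:
    - name: web
      image: myapp:1.0   # hypothetical image
```

The same field can also be set on the ServiceAccount itself to change the default for every Pod that uses it.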
Q22:

How do Kubernetes Labels differ from Annotations?

Junior

Answer

Labels are used for selection; annotations store metadata.
Quick Summary: Labels are key-value pairs used for selection — Deployments select their Pods by label, Services route to Pods by label. They're queryable and used by controllers. Annotations are key-value metadata for tools and humans — build info, CI pipeline URLs, documentation links. Annotations don't affect how Kubernetes selects or routes objects.
Q23:

What is tainting a node and why is it used?

Junior

Answer

Taints prevent scheduling unless Pods have tolerations.
Quick Summary: A taint on a node repels Pods that don't explicitly tolerate it. Use it to reserve nodes for specific workloads — taint GPU nodes so only GPU-requesting Pods land there, taint production nodes so dev Pods can't accidentally run on them. Tolerations on Pods say "I'm okay with this taint" — they can still land on tainted nodes.
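The GPU-node example above can be sketched as a toleration on the Pod spec (the key, value, and node name are hypothetical):

```yaml
# Assuming the node was tainted with:
#   kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "true"
    effect: "NoSchedule"    # this Pod may now schedule onto tainted GPU nodes
```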
Q24:

What are Node Selectors?

Junior

Answer

They restrict scheduling to nodes with specific labels.
Quick Summary: Node selectors constrain which nodes a Pod can run on by matching node labels. Add nodeSelector: disktype: ssd to a Pod and it only schedules on nodes labeled disktype=ssd. Simple and effective for basic placement, but only supports exact label matching — no AND/OR logic, no preference vs requirement.
Q25:

Why is node affinity more powerful than node selectors?

Junior

Answer

Node affinity supports expressions and preferred schedules.
Quick Summary: Node affinity supports complex expressions: required vs preferred rules, multiple conditions combined with operators like In, NotIn, and Exists, and weighted preferences (schedule here if possible) alongside hard requirements (must schedule here). The "IgnoredDuringExecution" suffix means rules are evaluated only at scheduling time; Pods are not evicted if node labels change later. Node selectors only support exact label matching.
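A sketch combining a hard requirement with a soft preference (label keys and values are hypothetical, except the standard zone label):

```yaml
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:   # hard requirement
      nodeSelectorTerms:
        - matchExpressions:
            - key: disktype
              operator: In
              values: ["ssd", "nvme"]
    preferredDuringSchedulingIgnoredDuringExecution:  # soft preference
      - weight: 1
        preference:
          matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values: ["us-east-1a"]
```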
Q26:

What is Pod Affinity/Anti-Affinity?

Junior

Answer

Affinity co-locates Pods; anti-affinity spreads them for HA.
Quick Summary: Pod Affinity schedules a Pod near other Pods with specific labels — useful when two services communicate heavily and benefit from co-location. Pod Anti-Affinity spreads Pods away from each other — put replicas on different nodes so a single node failure doesn't take down all replicas. Both use topology keys (node, zone, region) to define "near" and "far."
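Spreading replicas across nodes, as a sketch in the Pod template (the app label is hypothetical):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: web                          # hypothetical label
        topologyKey: kubernetes.io/hostname   # at most one such Pod per node
```

Swapping the topologyKey to topology.kubernetes.io/zone would instead spread replicas across availability zones.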
Q27:

Why is Ingress preferred over multiple LoadBalancer Services?

Junior

Answer

Ingress consolidates routing and reduces cost.
Quick Summary: Each LoadBalancer Service provisions an external load balancer — expensive and slow to provision (cloud load balancers can take minutes and cost money). With Ingress, one external load balancer feeds one Ingress Controller, which routes to many Services based on host and path rules. Much cheaper, faster, and easier to manage at scale.
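One Ingress routing two paths to two Services, as a sketch (host and Service names are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
    - host: example.com            # hypothetical host
      http:
        paths:
          - path: /api             # /api traffic -> api-svc
            pathType: Prefix
            backend:
              service:
                name: api-svc      # hypothetical Service
                port:
                  number: 80
          - path: /                # everything else -> web-svc
            pathType: Prefix
            backend:
              service:
                name: web-svc      # hypothetical Service
                port:
                  number: 80
```

An Ingress Controller (e.g. ingress-nginx) must be installed in the cluster for these rules to take effect.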
Q28:

How does Kubernetes handle service discovery internally?

Junior

Answer

It uses DNS-based discovery and kube-proxy load balancing.
Quick Summary: Kubernetes runs CoreDNS as the cluster DNS server. Every Service gets a DNS entry: servicename.namespace.svc.cluster.local. When a Pod resolves a Service name, CoreDNS returns the Service's ClusterIP. kube-proxy then routes traffic from the ClusterIP to a healthy Pod. No hardcoded IPs needed — just use the service name.
Q29:

What is the difference between a Secret of type Opaque and DockerConfigJson?

Junior

Answer

Opaque stores key-values; DockerConfigJson stores registry credentials.
Quick Summary: Opaque type is a generic key-value Secret — base64-encoded arbitrary data like passwords and API keys. DockerConfigJson (kubernetes.io/dockerconfigjson) stores Docker registry credentials — used by kubelet when pulling private images. Kubernetes looks for imagePullSecrets on Pod specs and uses DockerConfigJson Secrets to authenticate to the registry.
Q30:

Why avoid large ConfigMaps?

Junior

Answer

Large ConfigMaps exceed API limits and slow Pod startup.
Quick Summary: ConfigMaps are stored in etcd and mounted into Pods. Very large ConfigMaps (MBs of config files) increase etcd load, slow down watch propagation, and consume more memory on every API server that caches them. ConfigMap data is capped at 1 MiB, and etcd's default request size limit is 1.5 MiB; exceeding these causes creation to fail. Split large configs or use a dedicated config service.
Q31:

Why might a Pod stay in Pending state indefinitely?

Junior

Answer

Due to insufficient resources, PVC issues, taints, or affinity mismatch.
Quick Summary: Pods stay Pending when: no node has enough resources (insufficient CPU/memory), no node matches Pod's nodeSelector or affinity rules, all matching nodes have taints the Pod doesn't tolerate, PVCs can't be bound (no matching PV), or an image pull limit is blocking. Describe the Pod (kubectl describe pod) to see which scheduler condition is failing.
Q32:

Why avoid using hostPath in production?

Junior

Answer

hostPath ties Pods to nodes, risks corruption, and reduces portability.
Quick Summary: hostPath mounts a directory from the host node's filesystem into the Pod. This creates a tight coupling between the Pod and the specific node it runs on (non-portable). It bypasses storage abstractions, allows Pods to read/write sensitive host files, and creates a security risk. Use PVCs with proper StorageClasses for persistent storage instead.
Q33:

How does Kubernetes prevent controller conflicts on resources?

Junior

Answer

Optimistic concurrency via resourceVersion ensures only one of two conflicting writes succeeds; the loser must retry.
Quick Summary: Kubernetes uses optimistic concurrency with resourceVersion — each object has a version that increments on every write. Controllers must include the current resourceVersion when updating — if two controllers try to update the same object simultaneously, only one wins (the other gets a conflict error and must retry). No distributed lock needed.
Q34:

What happens when you edit a Pod directly instead of its Deployment?

Junior

Answer

Most Pod fields are immutable; direct edits either fail or are lost when the Pod is replaced, because the Deployment template remains the source of truth.
Quick Summary: Most of a running Pod's spec is immutable. kubectl edit or kubectl patch on a Pod can only change a few fields (labels, annotations, the container image, activeDeadlineSeconds, and added tolerations). A direct edit is also never written back to the Deployment, so the next rollout or rescheduling recreates the Pod from the Deployment's template and your change disappears. To update Pods durably, edit the Deployment (or ReplicaSet) and let Kubernetes roll out new Pods with the updated spec.
Q35:

Why should readiness checks be mandatory for production Deployments?

Junior

Answer

Readiness prevents routing traffic to uninitialized Pods.
Quick Summary: Without readiness probes, Kubernetes assumes a container is ready as soon as it starts — but the app might still be initializing (loading cache, connecting to DB). Traffic sent to an unready Pod returns errors. A readiness probe ensures traffic only reaches Pods that are actually ready to serve. Critical for zero-downtime deployments.
