
Mid NodeJS Interview Questions

Curated NodeJS interview questions for developers targeting mid-level positions. 30 questions available.


NodeJS Interview Questions & Answers


Welcome to our comprehensive collection of NodeJS interview questions and answers. This page contains expertly curated interview questions covering all aspects of NodeJS, from fundamental concepts to advanced topics. Whether you're preparing for an entry-level position or a senior role, you'll find questions tailored to your experience level.

Our NodeJS interview questions are designed to help you:

  • Understand core concepts and best practices in NodeJS
  • Prepare for technical interviews at all experience levels
  • Master both theoretical knowledge and practical application
  • Build confidence for your next NodeJS interview

Each question includes detailed answers and explanations to help you understand not just what the answer is, but why it's correct. We cover topics ranging from basic NodeJS concepts to advanced scenarios that you might encounter in senior-level interviews.

Use the filters below to find questions by difficulty level (Entry, Junior, Mid, Senior, Expert) or focus specifically on code challenges. Each question is carefully crafted to reflect real-world interview scenarios you'll encounter at top tech companies, startups, and MNCs.

Questions

Q1:

How does the Node.js event loop differ from traditional multi-threaded servers?

Mid

Answer

Traditional servers create threads per request. Node.js uses a single-threaded event loop and offloads heavy tasks to worker threads, making it efficient for I/O-heavy workloads.
Quick Summary: Traditional multi-threaded servers (Java, .NET) create a new thread per request — threads are expensive (1–2MB stack each) and context-switching adds overhead. Node.js uses one thread and the event loop — instead of waiting, it registers callbacks and moves on. For I/O-heavy workloads, this handles far more concurrent requests with far less memory.
Q2:

What are microtasks and macrotasks in the Node.js event loop?

Mid

Answer

Microtasks include promise callbacks and queueMicrotask. Macrotasks include timers, I/O callbacks, and setImmediate. Microtasks run before macrotasks within each event loop cycle.
Quick Summary: Microtasks (Promise callbacks, queueMicrotask) run before the event loop moves to the next phase — the microtask queue drains completely between each step of the loop. Macrotasks (setTimeout, setInterval, I/O callbacks, setImmediate) are processed in their respective event loop phases. Microtasks have higher priority — a Promise chain that keeps queueing new microtasks can starve macrotasks entirely.
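A minimal, dependency-free sketch makes the priority visible — synchronous code runs first, then microtasks drain, and only then do the macrotask phases get a turn:

```javascript
// Records the order in which sync code, microtasks, and macrotasks run.
const order = [];

setTimeout(() => order.push('setTimeout'), 0);       // macrotask: timers phase
setImmediate(() => order.push('setImmediate'));      // macrotask: check phase
Promise.resolve().then(() => order.push('promise')); // microtask
queueMicrotask(() => order.push('queueMicrotask'));  // microtask
order.push('sync');

// 'sync' is always first, and both microtasks always precede both
// macrotasks. The relative order of setTimeout(0) vs setImmediate
// from the main module is not guaranteed, so we don't assert it here.
setTimeout(() => console.log(order.join(' -> ')), 10);
```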
Q3:

What is backpressure in Node.js streams?

Mid

Answer

Backpressure occurs when a data source produces data faster than the consumer can handle. Streams manage this using internal buffers and methods like pause() and resume().
Quick Summary: Backpressure happens when the consumer can't keep up with the producer. In Node streams, if you pipe a fast readable to a slow writable, the data buffers in memory. The writable signals a full buffer by returning false from write(); the producer should then stop and resume on the 'drain' event. Node streams handle this automatically when you use pipe() or stream.pipeline() — otherwise you must manage it manually.
Q4:

How do worker threads improve performance in Node.js?

Mid

Answer

Worker threads run CPU-intensive tasks in parallel without blocking the main thread. They are ideal for cryptography, parsing, and other heavy computations.
Quick Summary: Worker threads run JavaScript in a separate V8 instance with its own event loop — in a real OS thread. Unlike cluster (separate processes), workers can share memory via SharedArrayBuffer. Use them for CPU-heavy tasks (image processing, compression, heavy computation) that would block the event loop and hurt response times for all other requests.
Q5:

Why are synchronous functions discouraged in Node.js?

Mid

Answer

Synchronous functions block the event loop, delaying other requests and reducing server responsiveness.
Quick Summary: Synchronous functions block the event loop for their entire duration. While one request's sync operation runs (a readFileSync call, a large JSON.parse, a heavy computation), all other incoming requests queue up and wait. In a web server handling hundreds of concurrent requests, a single 100ms sync operation can add 100ms latency to every queued request.
Q6:

What is middleware order in Express.js and why is it important?

Mid

Answer

Middleware runs in the order defined. Logging, authentication, and parsers must be placed correctly to ensure proper request handling.
Quick Summary: Express applies middleware in the exact order you call app.use(). If you define a route before the authentication middleware, unauthenticated requests reach that route. Body parser must run before routes that read req.body. Logger should be first. Error handler must be last. Order determines behavior — get it wrong and things break silently.
Q7:

What is a reverse proxy and why is Node.js used behind one?

Mid

Answer

Reverse proxies like Nginx handle SSL, caching, and load balancing. Node apps use them for better security and performance.
Quick Summary: A reverse proxy (nginx, Caddy) sits in front of Node.js and handles TLS termination, load balancing across multiple Node instances, serving static files, rate limiting, and HTTP/2. Node.js handles the application logic. This separation is standard practice — nginx is much better at serving static files and handling TLS than Node.js.
Q8:

What causes memory leaks in Node.js?

Mid

Answer

Memory leaks occur due to unused global variables, unremoved event listeners, unclosed timers, or improper caching.
Quick Summary: Memory leaks in Node.js happen when objects accumulate in memory and are never garbage collected because something still holds a reference. Common causes: closures capturing large objects, event listeners never removed (emitter.on without emitter.off), global caches growing without bounds, and timers (setInterval) that are never cleared.
Q9:

What are child processes in Node.js?

Mid

Answer

Child processes allow running OS-level processes. Useful for executing scripts, shell commands, or parallel computation.
Quick Summary: Child processes let you run external programs or additional Node.js scripts from your app — spawning a Python script, running a shell command, or offloading heavy computation. child_process.spawn() starts a process and streams its output; exec() runs a command and returns the full output as a string. Forked Node processes (child_process.fork()) communicate with the parent via IPC messaging.
Q10:

What is the purpose of the crypto module in Node.js?

Mid

Answer

The crypto module provides hashing, encryption, decryption, signing, and random data generation for secure operations.
Quick Summary: The crypto module provides cryptographic functions — hashing (SHA-256), password key derivation (scrypt and pbkdf2 built in; bcrypt via third-party packages), HMAC signatures, encryption/decryption (AES), generating random bytes, and creating digital signatures. Use it for password hashing, JWT signing/verification, generating secure tokens, and encrypting sensitive data. Never roll your own crypto — use this.
Q11:

How does Express handle error propagation?

Mid

Answer

Calling next(error) skips normal middleware and invokes error-handling middleware for consistent error responses.
Quick Summary: In Express, if a route handler or middleware throws an error or calls next(err), Express automatically skips to the first error-handling middleware (one with 4 parameters: err, req, res, next). In Express 4, async handlers need try/catch and must pass errors to next() explicitly — it doesn't catch Promise rejections without a wrapper or asyncHandler utility. (Express 5 forwards rejected promises to error middleware automatically.)
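The wrapper is tiny. This sketch uses the common (but not built-in) asyncHandler name, and exercises it with a plain fake next() rather than a real Express app:

```javascript
// Catches Promise rejections from an async handler and forwards them
// to next(), so Express 4 error middleware receives them.
const asyncHandler = (fn) => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);

// An async route handler that throws:
const route = asyncHandler(async (req, res) => {
  throw new Error('db unavailable');
});

// The rejection reaches next(err) instead of being swallowed.
route({}, {}, (err) => console.error('caught:', err.message)); // caught: db unavailable
```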
Q12:

What is an HTTP agent in Node.js?

Mid

Answer

An HTTP agent manages and reuses TCP connections, improving the performance of outgoing HTTP requests.
Quick Summary: An HTTP Agent manages connection pooling for outgoing HTTP requests — it maintains a pool of keep-alive TCP connections to the same server instead of creating a new TCP connection per request. Reusing connections avoids the TCP and TLS handshake overhead. In production, configure maxSockets to tune how many connections your app maintains per host.
Q13:

What is the difference between setImmediate and setTimeout?

Mid

Answer

setImmediate runs after the poll phase. setTimeout runs after a delay. Their order may depend on event loop timing.
Quick Summary: Both schedule a callback, but at different event loop phases. setImmediate fires in the check phase — after I/O callbacks in the current iteration. setTimeout(fn, 0) fires in the timers phase of the next iteration. Inside an I/O callback, setImmediate always fires first. Outside I/O, the order is non-deterministic and depends on process performance.
Q14:

How does Node.js clustering differ from worker threads?

Mid

Answer

Clustering creates multiple Node processes sharing one port. Worker threads share memory inside one process. Clustering boosts request throughput; workers boost compute tasks.
Quick Summary: Clustering forks separate OS processes — each process has its own memory, event loop, and V8 instance. Worker threads run in one process with shared memory and their own V8 instances. Clustering is for scaling network request handling across CPU cores; worker threads are for offloading CPU-intensive work to avoid blocking the event loop.
Q15:

What is ESM support in Node.js?

Mid

Answer

ESM enables import/export syntax, better static analysis, and improved compatibility with modern JavaScript modules.
Quick Summary: Node.js supports ES Modules (ESM) natively — without a flag since v12.17. Use the .mjs extension or add "type": "module" to package.json to use import/export syntax. ESM is statically analyzable (enables tree-shaking), asynchronous by design, and the standard going forward. The main friction: mixing ESM and CommonJS packages still requires care.
Q16:

What is the purpose of the package exports field?

Mid

Answer

The exports field controls which files of a package are accessible, helping secure internal modules and define public APIs.
Quick Summary: The exports field in package.json defines your package's public API — which files are accessible when someone imports your package and from which paths. It lets you have multiple entry points (CJS, ESM, browser builds), block access to internal files, and provide conditional exports based on the environment (node vs browser). It takes precedence over the older "main" field, which remains only as a fallback for tools that don't understand exports.
Q17:

How does Node.js handle large JSON parsing?

Mid

Answer

Large JSON parsing blocks the event loop. Solutions include streaming parsers, chunk processing, or using worker threads.
Quick Summary: JSON.parse() is synchronous and blocks the event loop. For large JSON payloads (multi-MB), this blocking can noticeably delay other requests. Alternatives: stream-json to parse incrementally without blocking, or worker threads to parse in parallel without tying up the main thread. Keep JSON responses small; paginate or filter large data.
Q18:

What is an ORM in Node.js and why use one?

Mid

Answer

ORMs like Sequelize or TypeORM map database tables to objects, simplifying queries and enforcing consistency.
Quick Summary: An ORM (Object-Relational Mapper) lets you interact with your database using JavaScript objects instead of raw SQL queries. Sequelize, TypeORM, and Prisma are popular for SQL databases; Mongoose fills the same role for MongoDB (strictly an ODM, Object-Document Mapper, since MongoDB stores documents rather than relational tables). Benefits: type safety, schema validation, simpler query syntax. Trade-off: can obscure performance issues from inefficient generated queries.
Q19:

How does Mongoose handle schema validation?

Mid

Answer

Mongoose validates data using schema rules such as types, required fields, enums, and custom validators.
Quick Summary: Mongoose schemas define what fields a document can have, their types, and validation rules (required, minLength, enum values). When you call save() or create(), Mongoose validates the data against the schema before inserting. Schema-level validators run in JavaScript — they don't replace database-level constraints but catch most common errors early.
Q20:

What is optimistic vs pessimistic locking?

Mid

Answer

Optimistic locking checks data versions and assumes few conflicts. Pessimistic locking blocks rows until transactions complete.
Quick Summary: Optimistic locking assumes conflicts are rare — reads happen without locks, and at write time you check if the record changed since you read it (via version field). If changed, retry. Pessimistic locking locks the record upfront, preventing others from modifying it until you're done. Optimistic is better for high-read, low-conflict workloads; pessimistic for frequent write conflicts.
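An optimistic-locking sketch over an in-memory Map standing in for a database table — the key, field names, and record shape are all illustrative:

```javascript
// Each record carries a version; a write succeeds only if the version
// the caller read is still current, otherwise the caller must retry.
const db = new Map([['user:1', { name: 'Ada', version: 1 }]]);

function update(key, expectedVersion, changes) {
  const record = db.get(key);
  if (record.version !== expectedVersion) {
    return { ok: false }; // someone else wrote first — re-read and retry
  }
  db.set(key, { ...record, ...changes, version: record.version + 1 });
  return { ok: true };
}

const read = db.get('user:1');                                  // version 1
update('user:1', read.version, { name: 'Grace' });              // succeeds, bumps to 2
const stale = update('user:1', read.version, { name: 'Alan' }); // stale version 1

console.log(stale.ok); // → false: the conflicting write was rejected
```

In SQL this is typically `UPDATE ... WHERE id = ? AND version = ?` with a check on the affected row count; pessimistic locking would instead take the lock upfront with `SELECT ... FOR UPDATE`.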
Q21:

What is middleware composition?

Mid

Answer

Middleware composition chains multiple middleware functions into a reusable sequence for tasks like validation or authorization.
Quick Summary: Middleware composition is combining multiple middleware functions into a pipeline where each one handles a specific concern. Instead of one giant handler doing everything (auth + logging + validation + business logic), you compose small, reusable middleware functions. Each does one thing well and passes control to the next via next(). Cleaner and more testable.
Q22:

What are environment-specific configurations?

Mid

Answer

Different environments use different settings such as DB URLs or logging levels, managed via env variables or configs.
Quick Summary: Environment-specific config means different settings for dev, test, staging, and production — different database URLs, log levels, debug flags, API keys. Use process.env to inject these at runtime. Libraries like config or convict help structure multiple environments. Never commit secrets or production configs — use environment variables and secrets managers.
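A hand-rolled config loader driven by process.env — the keys and defaults here are illustrative, not a standard (libraries like config or convict add schemas and validation on top of the same idea):

```javascript
// Reads settings from environment variables with per-environment defaults.
function loadConfig(env = process.env) {
  const nodeEnv = env.NODE_ENV || 'development';
  return {
    env: nodeEnv,
    port: Number(env.PORT || 3000),
    dbUrl: env.DATABASE_URL || 'postgres://localhost:5432/dev',
    logLevel: env.LOG_LEVEL || (nodeEnv === 'production' ? 'error' : 'debug'),
  };
}

console.log(loadConfig({ NODE_ENV: 'production', PORT: '8080' }));
```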
Q23:

What is a health-check endpoint?

Mid

Answer

A health-check endpoint responds with a status like OK to help monitoring tools verify app uptime.
Quick Summary: A health-check endpoint (usually GET /health or /ping) returns a quick status indicating whether the app is running correctly. Load balancers, Kubernetes readiness probes, and monitoring tools ping it periodically. It should check DB connectivity, cache availability, and critical dependencies — returning 200 if healthy and a 5xx status (commonly 503) if not.
Q24:

What is the difference between access tokens and refresh tokens?

Mid

Answer

Access tokens are short-lived and used for API calls. Refresh tokens generate new access tokens without re-login.
Quick Summary: Access tokens are short-lived (typically 5–15 minutes) and used to authenticate API requests. Refresh tokens are long-lived (days/weeks) and used only to get new access tokens when the old ones expire. This way, if an access token leaks, it expires quickly. Refresh tokens are stored more securely (httpOnly cookie) and can be revoked on the server.
Q25:

What is input validation and why is it critical?

Mid

Answer

Input validation ensures request data meets expected formats and prevents injection attacks using libraries like Joi or Zod.
Quick Summary: Input validation ensures data from the user matches expected formats before you process it — preventing SQL injection, XSS, type errors, and business logic bugs. Always validate on the server, even if you validate on the frontend too. Libraries like Joi, Zod, or express-validator make it easy to define schemas and validate req.body automatically.
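A hand-rolled validator sketch — the field names and rules are made up for illustration; libraries like Joi or Zod express the same checks declaratively with far better error reporting:

```javascript
// Validates a request body before any business logic touches it.
function validateUser(body) {
  const errors = [];
  if (typeof body.email !== 'string' || !body.email.includes('@')) {
    errors.push('email must be a valid email address');
  }
  if (!Number.isInteger(body.age) || body.age < 13 || body.age > 120) {
    errors.push('age must be an integer between 13 and 120');
  }
  return { valid: errors.length === 0, errors };
}

console.log(validateUser({ email: 'ada@example.com', age: 30 }).valid); // → true
console.log(validateUser({ email: 'nope', age: 'old' }).errors.length); // → 2
```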
Q26:

What is rate limiting in Node.js?

Mid

Answer

Rate limiting restricts excessive requests per user to prevent abuse, brute force, and server overload.
Quick Summary: Rate limiting restricts each client to N requests per time window. Without it, one bad actor can send millions of requests, overloading your server or triggering excessive costs. express-rate-limit lets you set limits by IP. For distributed systems (multiple Node instances), use a shared Redis store so limits are enforced across all instances.
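A minimal fixed-window limiter keyed by client id (for example an IP). express-rate-limit and Redis-backed stores implement the same principle with more features; taking `now` as a parameter keeps the sketch testable:

```javascript
// Allows at most `max` requests per `windowMs` per client.
function createRateLimiter({ windowMs, max }) {
  const hits = new Map(); // clientId -> { count, windowStart }
  return function allow(clientId, now = Date.now()) {
    const entry = hits.get(clientId);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(clientId, { count: 1, windowStart: now }); // new window
      return true;
    }
    entry.count++;
    return entry.count <= max;
  };
}

const allow = createRateLimiter({ windowMs: 60_000, max: 3 });
console.log(allow('1.2.3.4', 0)); // → true: first request in the window
```

Note the limitation: this state lives in one process. With multiple Node instances behind a load balancer, the counters must move to a shared store like Redis.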
Q27:

How do you manage logs across environments?

Mid

Answer

Structured logging formats like JSON are used with tools such as Winston or Pino for centralized log management.
Quick Summary: Use different log levels (debug, info, warn, error) and enable only the appropriate level per environment — verbose in dev, errors-only in production. Use structured JSON logging (Pino, Winston) for machine-readability. In production, ship logs to a centralized platform (Datadog, CloudWatch, Elastic) for search, alerting, and correlation across services.
Q28:

What is a caching layer and why is it important?

Mid

Answer

Caching stores frequently accessed data to reduce database load and improve response speed. Redis is commonly used.
Quick Summary: A caching layer stores frequently accessed data in fast storage (Redis, Memcached) instead of hitting the database every time. A user profile read 1000x/minute should come from Redis, not Postgres. This dramatically reduces database load, cuts response times from 100ms to 1ms, and lets your database handle write-heavy work it's designed for.
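The cache-aside pattern in miniature, with a plain Map standing in for Redis and a counter standing in for database cost:

```javascript
const cache = new Map(); // key -> { value, expiresAt }
let dbReads = 0;

async function slowDbLookup(id) {
  dbReads++; // in reality: a ~100ms database round trip
  return { id, name: `user-${id}` };
}

// Cache-aside: try the cache, fall back to the DB, store the result.
async function getUser(id, ttlMs = 60_000) {
  const hit = cache.get(id);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value;                   // cache hit: no DB work at all
  }
  const value = await slowDbLookup(id); // cache miss: go to the DB
  cache.set(id, { value, expiresAt: Date.now() + ttlMs });
  return value;
}

getUser(1)
  .then(() => getUser(1))
  .then(() => console.log('db reads after two lookups:', dbReads)); // → 1
```

The TTL bounds staleness; invalidation on writes (deleting the key when the record changes) is the other half of a real caching strategy.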
Q29:

What is graceful shutdown in Node.js?

Mid

Answer

Graceful shutdown stops accepting new requests, finishes in-flight work, and closes resources before exit.
Quick Summary: Graceful shutdown means when your Node.js process receives a SIGTERM (from a deployment or scaling event), it stops accepting new connections, finishes in-flight requests, closes database connections, and then exits cleanly. Without it, active requests are abruptly cut off. Use process.on('SIGTERM', () => server.close(done)) pattern with a timeout.
Q30:

How do you monitor Node.js apps in production?

Mid

Answer

Monitoring tools like PM2, New Relic, Datadog, or Prometheus track CPU, memory, errors, and endpoint performance.
Quick Summary: Monitor Node.js apps with: process metrics (CPU, memory heap usage via process.memoryUsage()), event loop lag, HTTP request rates and error rates, database query times. Use APM tools (Datadog, New Relic, Dynatrace) for automatic instrumentation, or Prometheus + Grafana for open-source monitoring. Set alerts on error rate spikes and latency percentiles.
