Amazon NodeJS Interview Questions

Curated NodeJS interview questions for developers preparing for Amazon interviews. 150 questions available.

NodeJS Interview Questions & Answers

Welcome to our comprehensive collection of NodeJS interview questions and answers. This page contains expertly curated interview questions covering all aspects of NodeJS, from fundamental concepts to advanced topics. Whether you're preparing for an entry-level position or a senior role, you'll find questions tailored to your experience level.

Our NodeJS interview questions are designed to help you:

  • Understand core concepts and best practices in NodeJS
  • Prepare for technical interviews at all experience levels
  • Master both theoretical knowledge and practical application
  • Build confidence for your next NodeJS interview

Each question includes detailed answers and explanations to help you understand not just what the answer is, but why it's correct. We cover topics ranging from basic NodeJS concepts to advanced scenarios that you might encounter in senior-level interviews.

Use the filters below to find questions by difficulty level (Entry, Junior, Mid, Senior, Expert) or focus specifically on code challenges. Each question is carefully crafted to reflect real-world interview scenarios you'll encounter at top tech companies, startups, and MNCs.

Questions

150 questions
Q1:

What is Node.js?

Entry

Answer

Node.js is a runtime environment that allows JavaScript to run outside the browser, mainly used for backend development.
Quick Summary: Node.js is a JavaScript runtime built on Chrome's V8 engine that lets you run JavaScript on the server side — outside the browser. It's built around a non-blocking, event-driven architecture, which makes it great for building fast, scalable network applications like APIs, real-time apps, and microservices.
Q2:

Why was Node.js created?

Entry

Answer

Node.js was created to handle a large number of concurrent requests efficiently using a non-blocking architecture.
Quick Summary: Node.js was created by Ryan Dahl in 2009 to solve the problem of handling many simultaneous connections efficiently. Traditional servers (Apache) blocked on I/O — each request tied up a thread. Node uses an event-driven, non-blocking model so one thread handles thousands of concurrent I/O operations without waiting.
Q3:

What is the V8 engine in Node.js?

Entry

Answer

The V8 engine is Google’s JavaScript engine that compiles JavaScript into fast machine code, helping Node achieve high speed.
Quick Summary: V8 is Google's open-source JavaScript engine, the same one that runs JavaScript in Chrome. It compiles JavaScript to optimized machine code at runtime (JIT compilation) rather than purely interpreting it, making it very fast. Node.js uses V8 under the hood to execute your server-side JavaScript code.
Q4:

What is the event loop in Node.js?

Entry

Answer

The event loop manages asynchronous operations and ensures Node remains responsive without blocking.
Quick Summary: The event loop is the core of Node.js — it's what allows JavaScript to handle multiple operations without blocking. It continuously checks if there are pending callbacks to run (timers, I/O, network). When an async operation completes, the event loop picks up the callback and runs it. This is how Node handles many connections with one thread.
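The ordering is easy to see with a small script. This is a minimal sketch showing that synchronous code always finishes before any queued callback runs:

```javascript
// Synchronous code runs to completion first, then microtasks (Promises),
// then macrotasks (timers) are picked up by the event loop.
const order = [];

setTimeout(() => order.push('timer'), 0);            // macrotask
Promise.resolve().then(() => order.push('promise')); // microtask
order.push('sync');                                  // runs immediately

setTimeout(() => {
  console.log(order.join(' -> ')); // sync -> promise -> timer
}, 10);
```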
Q5:

What is single-threaded architecture?

Entry

Answer

Node uses one main thread to execute JavaScript while delegating heavy tasks to background workers.
Quick Summary: Single-threaded means Node.js runs JavaScript on one main thread — unlike traditional servers that create a new thread per request. Instead of threading, Node uses asynchronous callbacks to handle concurrency. The single thread is never blocked because I/O operations run in the background (via libuv) and notify the main thread when done.
Q6:

What is npm?

Entry

Answer

npm is Node’s package manager used to install and manage external libraries.
Quick Summary: npm (Node Package Manager) is the default package manager for Node.js. It lets you install, share, and manage JavaScript libraries (packages) that you use in your project. npm install adds a package; package.json tracks which packages your project depends on. The npm registry hosts over 2 million packages.
Q7:

What is a package.json file?

Entry

Answer

A metadata file that stores project details, dependencies, scripts, and configuration.
Quick Summary: package.json is the configuration file for your Node.js project. It tracks your project name, version, scripts (npm start, npm test), and dependencies — the packages your app needs. Anyone who clones your repo can run npm install and get all the same packages installed automatically.
Q8:

What are dependencies and devDependencies?

Entry

Answer

Dependencies are required at runtime; devDependencies are used only during development.
Quick Summary: dependencies are packages needed in production — express, mongoose, jsonwebtoken. devDependencies are only needed during development — jest, nodemon, eslint. When you deploy to production, you can run npm install --production to skip devDependencies and keep the deployment lean. Separating them keeps production clean.
Q9:

What is a callback function?

Entry

Answer

A callback is a function passed as an argument to execute after an asynchronous task finishes.
Quick Summary: A callback is a function you pass to another function to be called when an operation finishes. In Node.js: fs.readFile('file.txt', callback) — Node reads the file async and calls your callback when done. Callbacks are the original way to handle async operations before Promises and async/await made it cleaner.
Q10:

What is a module in Node.js?

Entry

Answer

A module is a separate file containing reusable code that can be imported into other files.
Quick Summary: A module is a reusable piece of code in its own file. Node uses the module system to keep code organized and prevent variable name collisions. You export what you want to share (module.exports) and import what you need (require()). Every file in Node.js is automatically its own module.
Q11:

What is CommonJS?

Entry

Answer

CommonJS is Node’s module system using module.exports and require for import and export.
Quick Summary: CommonJS (CJS) is the original module system in Node.js — it uses require() to import and module.exports to export. It loads modules synchronously, which is fine for server-side code. It's still the default in most Node.js projects, though ES Modules (import/export) are now also supported.
Q12:

What is the fs module used for?

Entry

Answer

The fs module provides APIs to create, read, write, and modify files.
Quick Summary: The fs module provides functions for working with the file system — reading files, writing files, creating directories, checking if a file exists. Node has both sync versions (fs.readFileSync — blocks until done) and async versions (fs.readFile — non-blocking with a callback). Always prefer async in production to avoid blocking the event loop.
Q13:

What is the path module used for?

Entry

Answer

The path module works with file and directory paths in a cross-platform way.
Quick Summary: The path module helps you work with file and directory paths in a cross-platform way. It handles differences between operating systems (Windows uses backslashes, Unix uses forward slashes). path.join(__dirname, 'public', 'index.html') correctly builds a path regardless of the OS.
Q14:

What is the os module?

Entry

Answer

The os module provides system-level information such as CPU details, memory, and operating system.
Quick Summary: The os module provides information about the operating system — CPU count, total memory, free memory, hostname, platform (win32, linux, darwin). Useful for building health checks, monitoring dashboards, or scaling logic that adapts based on available system resources.
Q15:

What is a buffer in Node.js?

Entry

Answer

A buffer stores binary data in memory, useful for streams, files, and network operations.
Quick Summary: A Buffer is a fixed-size chunk of raw binary data — like a byte array. Node.js uses Buffers when dealing with binary data: reading files, receiving network packets, processing images. Buffers exist outside the V8 heap and are necessary because JavaScript strings are UTF-16 encoded and can't efficiently represent raw binary.
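A small sketch of a Buffer holding raw bytes and converting between encodings:

```javascript
// Each character of 'Hi!' is one byte in UTF-8.
const buf = Buffer.from('Hi!', 'utf8');

console.log(buf.length);           // 3 bytes
console.log(buf[0]);               // 72 (the byte value for 'H')
console.log(buf.toString('hex'));  // 486921
console.log(buf.toString('utf8')); // Hi!
```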
Q16:

What is a stream in Node.js?

Entry

Answer

A stream handles data in chunks, enabling efficient reading and writing without loading full data.
Quick Summary: A stream is a sequence of data delivered in chunks over time instead of all at once. Node.js uses streams for reading large files, piping HTTP responses, and processing data without loading everything into memory. Types: Readable (data comes in), Writable (data goes out), Duplex (both), Transform (modifies data in transit).
Q17:

What is server-side JavaScript?

Entry

Answer

It refers to writing backend logic in JavaScript that runs on the server instead of the browser.
Quick Summary: Server-side JavaScript means running JavaScript code on the server (Node.js) instead of the browser. You use the same language for both frontend and backend, share code between them, and build full-stack apps with one language. The server handles requests, talks to databases, and sends back responses — all in JavaScript.
Q18:

What is the http module used for?

Entry

Answer

The http module allows creation of HTTP servers and handling incoming requests.
Quick Summary: The http module is Node's built-in way to create HTTP servers and make HTTP requests. http.createServer() creates a server that listens on a port and receives requests. It's the foundation that frameworks like Express are built on. You rarely use it directly — Express provides a cleaner API on top of it.
Q19:

What is middleware?

Entry

Answer

Middleware functions run between a request and response for tasks like parsing, logging, or validation.
Quick Summary: Middleware is a function that sits between the request and response in the request pipeline. It receives the request, can modify it, perform logic (authentication, logging, parsing), and either send a response or pass control to the next middleware. In Express: app.use((req, res, next) => { ... next(); }).
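The pattern itself is framework-independent. This is a minimal sketch of it with plain functions (no Express, and the logger/handler names are illustrative); Express's real pipeline works along these same lines:

```javascript
// Each middleware gets req, res, and next, and either responds or passes control on.
function run(middlewares, req, res) {
  let i = 0;
  const next = () => {
    const mw = middlewares[i++];
    if (mw) mw(req, res, next);
  };
  next();
}

const logger = (req, res, next) => { res.log = `${req.method} ${req.url}`; next(); };
const handler = (req, res) => { res.body = 'user list'; }; // no next(): ends the chain

const req = { method: 'GET', url: '/users' };
const res = {};
run([logger, handler], req, res);
console.log(res.log, '->', res.body); // GET /users -> user list
```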
Q20:

What is Express.js?

Entry

Answer

Express.js is a lightweight web framework for Node that simplifies routing, middleware, and server creation.
Quick Summary: Express.js is a minimal web framework for Node.js that makes it easy to build APIs and web apps. It provides routing (match URLs to handler functions), middleware support (plug in authentication, logging, parsing), template rendering, and error handling — with far less boilerplate than raw Node.js http module.
Q21:

What is routing in Node and Express?

Entry

Answer

Routing determines what happens when a specific URL and HTTP method are accessed.
Quick Summary: Routing maps incoming HTTP requests (URL + method) to handler functions. In Express: app.get('/users', handler) handles GET requests to /users; app.post('/users', handler) handles POST. Routes let you organize your API into clean, meaningful endpoints. Express Router lets you group related routes into separate files.
Q22:

What is JSON in Node.js?

Entry

Answer

JSON is a lightweight data format used for configuration, APIs, and storing structured data.
Quick Summary: JSON (JavaScript Object Notation) is the standard data format for APIs. Node.js has JSON.parse() to convert a JSON string to a JavaScript object, and JSON.stringify() to convert an object to a JSON string. Express's res.json() automatically serializes your response object and sets the Content-Type header to application/json.
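Both conversions in a short sketch:

```javascript
const user = { id: 1, name: 'Ada', admin: false };

const json = JSON.stringify(user); // object -> JSON string
console.log(json);                 // {"id":1,"name":"Ada","admin":false}

const parsed = JSON.parse(json);   // JSON string -> object
console.log(parsed.name);          // Ada
```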
Q23:

What is an API endpoint?

Entry

Answer

An API endpoint is a URL where clients send requests and receive responses.
Quick Summary: An API endpoint is a specific URL that your server exposes for clients to interact with. GET /api/users returns the user list; POST /api/users creates a new user; DELETE /api/users/123 deletes user 123. Each endpoint maps to a handler function in your server code that performs the appropriate operation.
Q24:

What is an environment variable?

Entry

Answer

Environment variables store external configuration such as API keys or database URLs.
Quick Summary: Environment variables are key-value pairs set outside your code that configure your application — database URL, API keys, port numbers, environment name. They let the same code run differently in dev vs prod without changing the code. You access them in Node.js via process.env.MY_VARIABLE.
Q25:

What is process.env?

Entry

Answer

process.env is a built-in object providing access to environment variables.
Quick Summary: process.env is an object in Node.js that contains all environment variables set in the system or .env file. process.env.PORT gives you the port to listen on; process.env.DB_URL gives the database connection string. Never hardcode secrets in code — read them from process.env instead.
Q26:

What is nodemon?

Entry

Answer

Nodemon automatically restarts a Node application when file changes are detected.
Quick Summary: nodemon is a development tool that automatically restarts your Node.js server when you change a file. Instead of manually stopping and restarting after every code change, nodemon watches your files and does it for you. Install as devDependency and run: nodemon server.js. It's a standard part of the Node.js dev workflow.
Q27:

What is asynchronous programming in Node?

Entry

Answer

Asynchronous programming allows Node to run tasks without waiting, improving scalability.
Quick Summary: Asynchronous programming means starting an operation (file read, database query, API call) and continuing to do other work while waiting for it to finish. When the operation completes, the callback, Promise, or async/await resumes. This is how Node.js serves many requests simultaneously with one thread: no request has to wait for another's I/O to finish.
Q28:

What is synchronous programming?

Entry

Answer

Execution happens step-by-step, blocking the thread until each operation completes.
Quick Summary: Synchronous programming runs operations one at a time — each line waits for the previous to finish. Simple to reason about but inefficient for I/O: fs.readFileSync blocks the entire Node.js thread while reading, preventing it from handling other requests. Fine for scripts; avoid in web servers handling concurrent requests.
Q29:

What is an error-first callback?

Entry

Answer

A callback pattern where the first argument is the error and the second is the result.
Quick Summary: An error-first callback is Node.js's convention: the first parameter of the callback is always an error (null if no error, Error object if something went wrong). The second parameter is the result. Always check the error first: if (err) return handleError(err). This pattern predates Promises but is still common in many Node APIs.
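A minimal sketch of the convention with a made-up divide function:

```javascript
// Error-first convention: callback(err, result), where err is null on success.
function divide(a, b, callback) {
  if (b === 0) return callback(new Error('division by zero'));
  callback(null, a / b);
}

divide(10, 2, (err, result) => {
  if (err) return console.error('failed:', err.message);
  console.log(result); // 5
});

divide(10, 0, (err, result) => {
  if (err) return console.error('failed:', err.message); // failed: division by zero
  console.log(result);
});
```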
Q30:

Difference between console.log and return?

Entry

Answer

console.log prints output to the console; return sends a value back from a function to its caller.
Quick Summary: console.log outputs something to stdout — it's a side effect for visibility/debugging. return ends the function and passes a value back to the caller. They serve completely different purposes: console.log is for printing; return is for passing data. A function can console.log and return at the same time.
Q31:

What is the difference between synchronous and asynchronous file operations in Node.js?

Junior

Answer

Synchronous operations block the event loop until the task completes. Asynchronous operations allow Node to continue handling other requests while the file task runs in the background. Async is preferred to avoid blocking.
Quick Summary: Sync file ops (readFileSync, writeFileSync) block the event loop — the entire Node.js process freezes until the file operation completes. No other requests can be handled during that time. Async file ops (readFile, writeFile) hand the work off to libuv, return immediately, and call your callback when done. In a web server, always use async.
Q32:

What is the role of the event emitter?

Junior

Answer

Event emitters allow modules to emit events and let other parts of the app listen and react. It supports decoupled communication between components.
Quick Summary: EventEmitter is Node.js's built-in pub-sub mechanism. Objects that extend EventEmitter can emit named events (emitter.emit('data', value)) and listeners can subscribe (emitter.on('data', handler)). Many Node.js core modules (streams, http server, child processes) are EventEmitters. It's the foundation of event-driven architecture in Node.
Q33:

How does Node.js handle concurrency if it is single-threaded?

Junior

Answer

Node uses the event loop plus libuv's background thread pool behind the scenes. The main thread handles requests while blocking operations run on those background threads.
Quick Summary: Node.js is single-threaded for JavaScript, but libuv (its async I/O library) uses a thread pool for blocking operations like file system and DNS. The event loop delegates I/O to libuv, stays free to handle other events, and picks up the callback when libuv finishes. So concurrency is achieved through async I/O, not multiple threads.
Q34:

What is middleware chaining in Express.js?

Junior

Answer

Middleware chaining passes control from one middleware to another using next(), building a request pipeline for logging, parsing, authentication, etc.
Quick Summary: Middleware chaining in Express means each middleware calls next() to pass control to the next one in line. The chain runs in the order you define them: logging → authentication → body parsing → route handler. If any middleware doesn't call next() or send a response, the request just hangs. Order matters significantly.
Q35:

What is CORS and why is it needed?

Junior

Answer

CORS is a browser security rule restricting cross-domain requests. Node must configure CORS to allow approved domains to access APIs.
Quick Summary: CORS (Cross-Origin Resource Sharing) is a browser security mechanism that blocks frontend code from making requests to a different domain. Your React app on localhost:3000 can't call your API on localhost:5000 without CORS headers. The server must send Access-Control-Allow-Origin headers to grant permission. The cors npm package makes this easy in Express.
Q36:

What is the difference between PUT and PATCH?

Junior

Answer

PUT replaces the entire resource, while PATCH updates only specific fields.
Quick Summary: PUT replaces the entire resource — you send the complete updated object and the server overwrites everything. PATCH partially updates — you send only the fields that changed. PUT is idempotent and clearer for full replacements; PATCH is more efficient when changing a single field in a large object.
Q37:

What is the purpose of Express.js router?

Junior

Answer

Routers organize routes into modules, improving structure and maintainability.
Quick Summary: Express Router lets you split routes into separate files and group related ones together. You create a router (const router = express.Router()), define routes on it, and mount it on a path (app.use('/api/users', userRouter)). Keeps your codebase organized — user routes in users.js, order routes in orders.js, etc.
Q38:

How does Node.js handle errors in asynchronous code?

Junior

Answer

Callbacks use error-first pattern. Async/await uses try/catch or .catch() with promises.
Quick Summary: In async code, errors propagate through Promise rejections and callback err parameters. In Express, passing an error to next(err) triggers error-handling middleware. Unhandled Promise rejections only produced a warning in older Node versions; since Node 15 they crash the process. Always use try/catch in async handlers and pass errors to next().
Q39:

What is a promise in Node.js?

Junior

Answer

A promise represents a future value and helps avoid callback nesting with cleaner chaining.
Quick Summary: A Promise is an object representing the eventual completion (or failure) of an async operation. Instead of nested callbacks (callback hell), you chain .then() and .catch(). A Promise is in one of three states: pending, fulfilled (resolved with a value), or rejected (rejected with an error). Cleaner than callbacks for async code.
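A minimal sketch: wrap a timer in a Promise (the delay helper is made up for the example), then chain instead of nesting callbacks:

```javascript
// delay() resolves with a value after ms milliseconds.
function delay(ms, value) {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

delay(10, 'first')
  .then((v) => {
    console.log(v);              // first
    return delay(10, 'second');  // returning a Promise keeps the chain going
  })
  .then((v) => console.log(v))   // second
  .catch((err) => console.error(err.message));
```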
Q40:

What is async/await and why is it useful?

Junior

Answer

async/await is syntax built on promises that makes asynchronous code look synchronous and easier to read.
Quick Summary: async/await is syntactic sugar over Promises that makes async code look like synchronous code. Mark a function async and use await inside it to pause execution until a Promise resolves. Errors throw naturally and are caught with try/catch. It dramatically improves readability compared to chained .then() calls.
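The same idea with async/await; fetchUser here is a stand-in for any Promise-returning call (db query, HTTP request):

```javascript
function fetchUser(id) {
  return Promise.resolve({ id, name: 'Ada' });
}

async function main() {
  try {
    const user = await fetchUser(1); // pauses here until the Promise settles
    console.log(user.name);          // Ada
  } catch (err) {
    console.error('failed:', err.message); // rejections land here like thrown errors
  }
}

main();
```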
Q41:

What is the purpose of the cluster module in Node.js?

Junior

Answer

Cluster creates multiple Node processes to use all CPU cores, improving performance.
Quick Summary: The cluster module lets you create multiple Node.js worker processes that all share the same server port. Since Node.js is single-threaded, cluster spawns one worker per CPU core — each worker handles its own event loop. The master process distributes incoming connections across workers. This is how Node.js uses all CPU cores.
Q42:

What is rate limiting in Node.js?

Junior

Answer

Rate limiting restricts how many requests a client can make in a given time to prevent abuse and overload.
Quick Summary: Rate limiting restricts how many requests a client can make in a given time window. Without it, a single client can spam your API with thousands of requests per second — slowing or crashing it. Use express-rate-limit to add: 100 requests per 15 minutes per IP. Essential for preventing abuse, DDoS, and brute-force attacks.
Q43:

What are HTTP status codes and why are they important?

Junior

Answer

Status codes indicate if a request succeeded or failed and help clients understand the response.
Quick Summary: HTTP status codes tell the client what happened: 200 (OK), 201 (Created), 400 (Bad Request — client error), 401 (Unauthorized), 403 (Forbidden), 404 (Not Found), 500 (Internal Server Error). Returning the right status code helps clients handle responses correctly and makes APIs self-documenting and debuggable.
Q44:

What is streaming in Node.js?

Junior

Answer

Streaming processes data in chunks instead of loading it fully in memory. Useful for large files and real-time data.
Quick Summary: Streaming sends data in chunks as it's produced instead of buffering the entire response. For large files or real-time data, streaming means the client starts receiving data immediately — no waiting. Node.js streams pipe data from source to destination: readableStream.pipe(response) sends a file to the client without loading it all into memory.
Q45:

What is a template engine in Node.js?

Junior

Answer

A template engine creates dynamic HTML using variables. Examples: EJS, Pug, Handlebars.
Quick Summary: A template engine (EJS, Handlebars, Pug) renders HTML on the server by combining a template with data. Instead of sending JSON to a React frontend, you render HTML directly on the server and send it. Useful for traditional multi-page apps or email generation. Less common now that SPAs and APIs dominate.
Q46:

What is JSON Web Token (JWT)?

Junior

Answer

JWT is a compact token format for authentication, storing encoded user info verified using a secret or key.
Quick Summary: JWT is a compact, self-contained token for authentication. It contains three parts (header.payload.signature) — the payload carries user data (id, role), and the signature verifies it wasn't tampered with. Server issues a JWT on login; the client sends it in the Authorization header on subsequent requests. No session storage needed.
Q47:

What is the difference between global and local npm installation?

Junior

Answer

Global installs work everywhere; local installs stay inside a project folder.
Quick Summary: Global installation (npm install -g) installs the package for use as a CLI command anywhere on your machine — like installing nodemon globally. Local installation (npm install) installs into node_modules in the current project. For libraries you import in code, always install locally — global packages aren't part of your project's dependency tree.
Q48:

What is nodemailer and where is it used?

Junior

Answer

Nodemailer sends emails from Node apps, such as welcome emails or password resets.
Quick Summary: nodemailer is a Node.js library for sending emails from your app — sign-up confirmations, password resets, notifications. Configure it with your email provider (SMTP, Gmail, SendGrid) and call transporter.sendMail() with the recipient, subject, and body. It's the go-to email solution for Node.js backends.
Q49:

What is dotenv and why is it used?

Junior

Answer

dotenv loads environment variables from a .env file, keeping secrets out of source code.
Quick Summary: dotenv loads environment variables from a .env file into process.env. Instead of setting environment variables in your OS or CI pipeline for local development, you create a .env file with DATABASE_URL=xxx, add it to .gitignore (never commit it), and dotenv makes them available as process.env.DATABASE_URL when the app starts.
Q50:

What is body parsing in Express.js?

Junior

Answer

Body parsing converts request payloads to usable objects. express.json() handles JSON bodies.
Quick Summary: HTTP requests from clients arrive with bodies (JSON, form data, raw text). By default, Express doesn't parse them — you must add middleware. express.json() parses JSON bodies; express.urlencoded() parses form data. Without body parsing, req.body is undefined. These are included in Express 4.16+ — no need for the separate body-parser package.
Q51:

What is morgan in Node.js?

Junior

Answer

Morgan is a logging middleware that records request details.
Quick Summary: morgan is an HTTP request logger middleware for Express. It logs each incoming request — method, URL, status code, response time. Useful in development (see every request instantly) and production (audit trail). Multiple formats available: combined (Apache-style), dev (colored, concise), json. Pipe logs to a file or log aggregation service.
Q52:

How does Node.js handle database connections?

Junior

Answer

Connections use drivers or ORMs like MongoDB driver, Mongoose, or Sequelize with pooling for performance.
Quick Summary: Node.js database connections are typically managed with a connection pool — a set of pre-established connections shared across requests. Creating a new connection per request is slow; pools reuse existing ones. Libraries like pg (PostgreSQL), mysql2, and Mongoose manage this automatically. Configure pool size based on your database server's limits.
Q53:

What is the difference between SQL and NoSQL databases?

Junior

Answer

SQL uses tables and schemas; NoSQL uses flexible JSON-like documents or key-value stores.
Quick Summary: SQL databases (PostgreSQL, MySQL) store data in structured tables with strict schemas and support complex joins and transactions — great for relational data. NoSQL databases (MongoDB, Redis) store data in flexible formats (documents, key-value) without strict schemas — great for unstructured or rapidly evolving data and horizontal scaling.
Q54:

What is a RESTful API?

Junior

Answer

A RESTful API uses HTTP verbs and structured endpoints to perform operations.
Quick Summary: A RESTful API uses HTTP methods and URLs to represent resources and actions. GET /users retrieves users; POST /users creates one; PUT /users/1 updates user 1; DELETE /users/1 deletes it. REST APIs are stateless — each request is self-contained, not dependent on previous requests. They're the standard for web APIs.
Q55:

Why is logging important in Node apps?

Junior

Answer

Logging helps track errors, performance, and user actions. Winston and Pino are common tools.
Quick Summary: Logs are your visibility into what's happening in production. When something goes wrong, logs tell you when, what, and why. Log request errors, unexpected exceptions, slow queries, and key business events. In production, use a structured logger (Winston, Pino) and ship logs to a centralized system (Datadog, CloudWatch, ELK) for searching and alerting.
Q56:

What is the purpose of versioning an API?

Junior

Answer

Versioning prevents breaking old clients when adding new features.
Quick Summary: API versioning lets you make breaking changes without breaking existing clients. Include the version in the URL (/api/v1/users, /api/v2/users) or as a header. Old clients continue using v1; new clients use v2 with the new contract. Without versioning, every breaking change forces all clients to update simultaneously — risky and disruptive.
Q57:

What is the difference between require and import?

Junior

Answer

require is CommonJS; import is ES modules. Both load modules but with different syntax rules.
Quick Summary: require() is CommonJS — synchronous, loads at runtime, works everywhere in Node.js. import/export is ES Modules — asynchronous, static (evaluated at parse time), required for tree-shaking. In Node.js, ESM requires .mjs extension or "type": "module" in package.json. CommonJS is still more common; ESM is the future standard.
Q58:

What is Express error-handling middleware?

Junior

Answer

A special middleware with four parameters used to catch and manage errors in Express apps.
Quick Summary: Error-handling middleware in Express has four parameters (err, req, res, next) — Express knows it's an error handler by the arity. Define it last, after all routes. When you call next(err) anywhere, Express skips to this handler. It catches any error passed to next() and lets you format and return consistent error responses.
Q59:

What are static files in Node.js?

Junior

Answer

Static files like images, CSS, or JS are served as-is without processing.
Quick Summary: Static files are files served directly as-is — HTML, CSS, JavaScript, images, fonts. No server processing needed. In Express, use express.static('public') to serve everything in the public folder. The server just reads the file and sends it. This is typically handled by a CDN or reverse proxy (nginx) in production for better performance.
Q60:

What is the purpose of package-lock.json?

Junior

Answer

package-lock.json stores exact dependency versions for consistent installations.
Quick Summary: package-lock.json records the exact version of every installed package and all their sub-dependencies. While package.json might say "express: ^4.18.0" (any compatible version), package-lock.json pins it to the exact 4.18.2 that was installed. This ensures everyone on the team and CI/CD gets identical dependency trees.
Q61:

How does the Node.js event loop differ from traditional multi-threaded servers?

Mid

Answer

Traditional servers create threads per request. Node.js uses a single-threaded event loop and offloads heavy tasks to worker threads, making it efficient for I/O-heavy workloads.
Quick Summary: Traditional multi-threaded servers (Java, .NET) create a new thread per request — threads are expensive (1–2MB stack each) and context-switching adds overhead. Node.js uses one thread and the event loop — instead of waiting, it registers callbacks and moves on. For I/O-heavy workloads, this handles far more concurrent requests with far less memory.
Q62:

What are microtasks and macrotasks in the Node.js event loop?

Mid

Answer

Microtasks include promise callbacks and queueMicrotask. Macrotasks include timers, I/O callbacks, and setImmediate. Microtasks run before macrotasks within each event loop cycle.
Quick Summary: Microtasks (Promise callbacks, queueMicrotask) drain completely between each event loop phase, so they always run before the loop moves on. Macrotasks (setTimeout, setInterval, I/O callbacks, setImmediate) are processed in their respective event loop phases. Microtasks effectively have higher priority: a Promise chain that keeps scheduling more microtasks can delay macrotasks indefinitely.
Q63:

What is backpressure in Node.js streams?

Mid

Answer

Backpressure occurs when a data source produces data faster than the consumer can handle. Streams manage this using internal buffers and methods like pause() and resume().
Quick Summary: Backpressure happens when the consumer can't keep up with the producer. In Node streams, if you pipe a fast readable to a slow writable, the data buffers in memory. The readable should pause (readable.pause()) when the writable buffer fills up. Node streams handle this automatically when you use pipe() — otherwise you must manage it manually.
Q64:

How do worker threads improve performance in Node.js?

Mid

Answer

Worker threads run CPU-intensive tasks in parallel without blocking the main thread. They are ideal for cryptography, parsing, and other heavy computations.
Quick Summary: Worker threads run JavaScript in a separate V8 instance with its own event loop — in a real OS thread. Unlike cluster (separate processes), workers share memory (SharedArrayBuffer). Use them for CPU-heavy tasks (image processing, compression, heavy computation) that would block the event loop and hurt response times for all other requests.
Q65:

Why are synchronous functions discouraged in Node.js?

Mid

Answer

Synchronous functions block the event loop, delaying other requests and reducing server responsiveness.
Quick Summary: Synchronous functions block the event loop for their entire duration. While one request's sync operation runs (database query, file read, computation), all other incoming requests queue up and wait. In a web server handling hundreds of concurrent requests, a single 100ms sync operation can add 100ms latency to every queued request.
Q66:

What is middleware order in Express.js and why is it important?

Mid

Answer

Middleware runs in the order defined. Logging, authentication, and parsers must be placed correctly to ensure proper request handling.
Quick Summary: Express applies middleware in the exact order you call app.use(). If you define a route before the authentication middleware, unauthenticated requests reach that route. Body parser must run before routes that read req.body. Logger should be first. Error handler must be last. Order determines behavior — get it wrong and things break silently.
Q67:

What is a reverse proxy and why is Node.js used behind one?

Mid

Answer

Reverse proxies like Nginx handle SSL, caching, and load balancing. Node apps use them for better security and performance.
Quick Summary: A reverse proxy (nginx, Caddy) sits in front of Node.js and handles TLS termination, load balancing across multiple Node instances, serving static files, rate limiting, and HTTP/2. Node.js handles the application logic. This separation is standard practice — nginx is much better at serving static files and handling TLS than Node.js.
Q68:

What causes memory leaks in Node.js?

Mid

Answer

Memory leaks occur due to unused global variables, unremoved event listeners, unclosed timers, or improper caching.
Quick Summary: Memory leaks in Node.js happen when objects accumulate in memory and are never garbage collected because something still holds a reference. Common causes: closures capturing large objects, event listeners never removed (emitter.on without emitter.off), global caches growing without bounds, and circular references in certain situations.
Q69:

What are child processes in Node.js?

Mid

Answer

Child processes allow running OS-level processes. Useful for executing scripts, shell commands, or parallel computation.
Quick Summary: Child processes let you run external programs or additional Node.js scripts from your app — spawning a Python script, running a shell command, or offloading heavy computation. child_process.spawn() starts a process and streams its output; exec() runs a command and returns the full output as a string. Workers communicate via IPC messaging.
Q70:

What is the purpose of the crypto module in Node.js?

Mid

Answer

The crypto module provides hashing, encryption, decryption, signing, and random data generation for secure operations.
Quick Summary: The crypto module provides cryptographic primitives — hashing (SHA-256), HMAC signatures, encryption/decryption (AES), secure random bytes, and digital signatures. It also ships password-hashing functions (scrypt, pbkdf2); bcrypt comes from third-party packages. Use it for password hashing, JWT signature verification, generating secure tokens, and encrypting sensitive data. Never roll your own crypto — use these primitives.
Q71:

How does Express handle error propagation?

Mid

Answer

Calling next(error) skips normal middleware and invokes error-handling middleware for consistent error responses.
Quick Summary: In Express, if a route handler or middleware throws an error or calls next(err), Express automatically skips to the first error-handling middleware (one with 4 parameters: err, req, res, next). In Express 4, async handlers need try/catch and must pass errors to next() explicitly — rejected Promises aren't caught without a wrapper or asyncHandler utility. (Express 5 forwards rejected Promises to the error middleware automatically.)
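A minimal asyncHandler wrapper in the common Express 4 style (the route shown in the comment is hypothetical):

```javascript
// Wrap an async route handler so a rejected Promise is forwarded to
// next(err), where Express's error middleware can handle it.
const asyncHandler = (fn) => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);

// Hypothetical usage:
// app.get('/user/:id', asyncHandler(async (req, res) => {
//   const user = await db.findUser(req.params.id); // may reject
//   res.json(user);
// }));
```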
Q72:

What is an HTTP agent in Node.js?

Mid

Answer

An HTTP agent manages and reuses TCP connections, improving the performance of outgoing HTTP requests.
Quick Summary: An HTTP Agent manages connection pooling for outgoing HTTP requests — it maintains a pool of keep-alive TCP connections to the same server instead of creating a new TCP connection per request. Reusing connections avoids the TCP and TLS handshake overhead. In production, configure maxSockets to tune how many connections your app maintains per host.
Q73:

What is the difference between setImmediate and setTimeout?

Mid

Answer

setImmediate runs after the poll phase. setTimeout runs after a delay. Their order may depend on event loop timing.
Quick Summary: Both schedule a callback, but at different event loop phases. setImmediate fires in the check phase — after I/O callbacks in the current iteration. setTimeout(fn, 0) fires in the timers phase of the next iteration. Inside an I/O callback, setImmediate always fires first. Outside I/O, the order is non-deterministic and depends on process performance.
Q74:

How does Node.js clustering differ from worker threads?

Mid

Answer

Clustering creates multiple Node processes sharing one port. Worker threads share memory inside one process. Clustering boosts request throughput; workers boost compute tasks.
Quick Summary: Clustering forks separate OS processes — each process has its own memory, event loop, and V8 instance. Worker threads run in one process with shared memory and their own V8 instances. Clustering is for scaling network request handling across CPU cores; worker threads are for offloading CPU-intensive work to avoid blocking the event loop.
Q75:

What is ESM support in Node.js?

Mid

Answer

ESM enables import/export syntax, better static analysis, and improved compatibility with modern JavaScript modules.
Quick Summary: Node.js supports ES Modules (ESM) natively since v12+. Use .mjs extension or add "type": "module" to package.json to use import/export syntax. ESM is statically analyzable (enables tree-shaking), asynchronous by design, and the standard going forward. The main friction: mixing ESM and CommonJS packages still requires care.
Q76:

What is the purpose of the package exports field?

Mid

Answer

The exports field controls which files of a package are accessible, helping secure internal modules and define public APIs.
Quick Summary: The exports field in package.json defines your package's public API — which files are accessible when someone imports your package and from which paths. It lets you have multiple entry points (CJS, ESM, browser builds), block access to internal files, and provide conditional exports based on the environment (node vs browser). It replaces the old "main" field.
Q77:

How does Node.js handle large JSON parsing?

Mid

Answer

Large JSON parsing blocks the event loop. Solutions include streaming parsers, chunk processing, or using worker threads.
Quick Summary: JSON.parse() is synchronous and blocks the event loop. For large JSON payloads (multi-MB), this blocking can noticeably delay other requests. Alternatives: stream-json to parse incrementally without blocking, or worker threads to parse in parallel without tying up the main thread. Keep JSON responses small; paginate or filter large data.
Q78:

What is an ORM in Node.js and why use one?

Mid

Answer

ORMs like Sequelize or TypeORM map database tables to objects, simplifying queries and enforcing consistency.
Quick Summary: An ORM (Object-Relational Mapper) lets you interact with your database using JavaScript objects instead of raw SQL queries. Mongoose is an ODM (Object-Document Mapper) for MongoDB; Sequelize and Prisma are popular for SQL databases. Benefits: type safety, schema validation, simpler query syntax. Trade-off: can obscure performance issues from inefficient generated queries.
Q79:

How does Mongoose handle schema validation?

Mid

Answer

Mongoose validates data using schema rules such as types, required fields, enums, and custom validators.
Quick Summary: Mongoose schemas define what fields a document can have, their types, and validation rules (required, minLength, enum values). When you call save() or create(), Mongoose validates the data against the schema before inserting. Schema-level validators run in JavaScript — they don't replace database-level constraints but catch most common errors early.
Q80:

What is optimistic vs pessimistic locking?

Mid

Answer

Optimistic locking checks data versions and assumes few conflicts. Pessimistic locking blocks rows until transactions complete.
Quick Summary: Optimistic locking assumes conflicts are rare — reads happen without locks, and at write time you check if the record changed since you read it (via version field). If changed, retry. Pessimistic locking locks the record upfront, preventing others from modifying it until you're done. Optimistic is better for high-read, low-conflict workloads; pessimistic for frequent write conflicts.
Q81:

What is middleware composition?

Mid

Answer

Middleware composition chains multiple middleware functions into a reusable sequence for tasks like validation or authorization.
Quick Summary: Middleware composition is combining multiple middleware functions into a pipeline where each one handles a specific concern. Instead of one giant handler doing everything (auth + logging + validation + business logic), you compose small, reusable middleware functions. Each does one thing well and passes control to the next via next(). Cleaner and more testable.
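A toy composer showing the idea — Express's internal dispatch works essentially like this; all names here are illustrative:

```javascript
// Chain middleware functions: each receives (ctx, next) and decides
// whether to pass control to the next one.
function compose(middlewares) {
  return function run(ctx) {
    let i = 0;
    (function next() {
      const mw = middlewares[i++];
      if (mw) mw(ctx, next);
    })();
  };
}

const logger  = (ctx, next) => { ctx.log = ['logger']; next(); };
const auth    = (ctx, next) => { ctx.log.push('auth'); if (ctx.user) next(); };
const handler = (ctx)       => { ctx.log.push('handler'); };

const app = compose([logger, auth, handler]);

const ctx = { user: 'alice' };
app(ctx);
console.log(ctx.log); // ['logger', 'auth', 'handler']
```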
Q82:

What are environment-specific configurations?

Mid

Answer

Different environments use different settings such as DB URLs or logging levels, managed via env variables or configs.
Quick Summary: Environment-specific config means different settings for dev, test, staging, and production — different database URLs, log levels, debug flags, API keys. Use process.env to inject these at runtime. Libraries like config or convict help structure multiple environments. Never commit secrets or production configs — use environment variables and secrets managers.
Q83:

What is a health-check endpoint?

Mid

Answer

A health-check endpoint responds with a status like OK to help monitoring tools verify app uptime.
Quick Summary: A health-check endpoint (usually GET /health or /ping) returns a quick status indicating whether the app is running correctly. Load balancers, Kubernetes readiness probes, and monitoring tools ping it periodically. It should check DB connectivity, cache availability, and critical dependencies — returning 200 if healthy, 500+ if not.
Q84:

What is the difference between access tokens and refresh tokens?

Mid

Answer

Access tokens are short-lived and used for API calls. Refresh tokens generate new access tokens without re-login.
Quick Summary: Access tokens are short-lived (15 minutes) and used to authenticate API requests. Refresh tokens are long-lived (days/weeks) and used only to get new access tokens when the old ones expire. This way, if an access token leaks, it expires quickly. Refresh tokens are stored more securely (httpOnly cookie) and can be revoked on the server.
Q85:

What is input validation and why is it critical?

Mid

Answer

Input validation ensures request data meets expected formats and prevents injection attacks using libraries like Joi or Zod.
Quick Summary: Input validation ensures data from the user matches expected formats before you process it — preventing SQL injection, XSS, type errors, and business logic bugs. Always validate on the server, even if you validate on the frontend too. Libraries like Joi, Zod, or express-validator make it easy to define schemas and validate req.body automatically.
Q86:

What is rate limiting in Node.js?

Mid

Answer

Rate limiting restricts excessive requests per user to prevent abuse, brute force, and server overload.
Quick Summary: Rate limiting restricts each client to N requests per time window. Without it, one bad actor can send millions of requests, overloading your server or triggering excessive costs. express-rate-limit lets you set limits by IP. For distributed systems (multiple Node instances), use a shared Redis store so limits are enforced across all instances.
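A fixed-window sketch of what express-rate-limit does per key (in-memory only; multi-instance deployments keep this state in Redis instead):

```javascript
// key -> { count, windowStart }; allow() returns true while under the limit.
function createRateLimiter({ windowMs, max }) {
  const hits = new Map();
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now }); // fresh window
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}

const allow = createRateLimiter({ windowMs: 60_000, max: 3 });
console.log(allow('1.2.3.4')); // true
console.log(allow('1.2.3.4')); // true
console.log(allow('1.2.3.4')); // true
console.log(allow('1.2.3.4')); // false — over the limit
```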
Q87:

How do you manage logs across environments?

Mid

Answer

Structured logging formats like JSON are used with tools such as Winston or Pino for centralized log management.
Quick Summary: Use different log levels (debug, info, warn, error) and enable only the appropriate level per environment — verbose in dev, errors-only in production. Use structured JSON logging (Pino, Winston) for machine-readability. In production, ship logs to a centralized platform (Datadog, CloudWatch, Elastic) for search, alerting, and correlation across services.
Q88:

What is a caching layer and why is it important?

Mid

Answer

Caching stores frequently accessed data to reduce database load and improve response speed. Redis is commonly used.
Quick Summary: A caching layer stores frequently accessed data in fast storage (Redis, Memcached) instead of hitting the database every time. A user profile read 1000x/minute should come from Redis, not Postgres. This dramatically reduces database load, cuts response times from 100ms to 1ms, and lets your database handle write-heavy work it's designed for.
Q89:

What is graceful shutdown in Node.js?

Mid

Answer

Graceful shutdown stops accepting new requests, finishes in-flight work, and closes resources before exit.
Quick Summary: Graceful shutdown means when your Node.js process receives a SIGTERM (from a deployment or scaling event), it stops accepting new connections, finishes in-flight requests, closes database connections, and then exits cleanly. Without it, active requests are abruptly cut off. Use process.on('SIGTERM', () => server.close(done)) pattern with a timeout.
Q90:

How do you monitor Node.js apps in production?

Mid

Answer

Monitoring tools like PM2, New Relic, Datadog, or Prometheus track CPU, memory, errors, and endpoint performance.
Quick Summary: Monitor Node.js apps with: process metrics (CPU, memory heap usage via process.memoryUsage()), event loop lag, HTTP request rates and error rates, database query times. Use APM tools (Datadog, New Relic, Dynatrace) for automatic instrumentation, or Prometheus + Grafana for open-source monitoring. Set alerts on error rate spikes and latency percentiles.
Q91:

How does Node.js optimize performance under heavy concurrent load?

Senior

Answer

Node optimizes concurrency using the non-blocking event loop, internal thread pool, keep-alive agents, and efficient memory handling. Scaling further requires clustering, load balancers, event-driven design, and stream-based processing.
Quick Summary: Node.js handles concurrency through non-blocking async I/O, not threading. Keep I/O async (no sync calls), use connection pooling for databases, cache hot data in Redis, cluster across all CPU cores, and profile the event loop for blocking operations. For CPU-heavy work, offload to worker threads or external services.
Q92:

How do you design a scalable Node.js architecture for millions of requests?

Senior

Answer

Scale horizontally using clustering and multiple instances behind a reverse proxy. Use caching, message queues, sharding, CDN offloading, and stateless microservices to ensure modular and scalable design.
Quick Summary: Scalable Node.js architecture uses: horizontal scaling (multiple instances behind a load balancer), stateless design (sessions in Redis, not in-process), async I/O throughout, event-driven communication between services, caching at every layer, and a message queue (RabbitMQ, Kafka) for decoupled background processing.
Q93:

What is event loop starvation and how do you prevent it?

Senior

Answer

Starvation occurs when CPU-heavy tasks block the event loop. Detect via event loop delay, profiling, and async hooks. Prevent using worker threads or offloading computation to services.
Quick Summary: Event loop starvation happens when a CPU-intensive synchronous operation (heavy computation, large JSON parse, synchronous I/O) runs so long it prevents the event loop from processing other callbacks. The fix: break heavy CPU work into small chunks with setImmediate() between chunks, or move it entirely to a worker thread.
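A sketch of chunking CPU work so the loop keeps breathing between chunks (the chunk size is arbitrary):

```javascript
// Sum 0..n-1 in chunks, yielding to the event loop with setImmediate
// between chunks so timers and I/O callbacks can still run.
function sumRange(n, chunkSize = 1e5) {
  return new Promise((resolve) => {
    let i = 0, sum = 0;
    (function chunk() {
      const end = Math.min(i + chunkSize, n);
      for (; i < end; i++) sum += i;
      if (i < n) setImmediate(chunk); // yield, then continue
      else resolve(sum);
    })();
  });
}

sumRange(1e6).then((sum) => console.log('sum:', sum));
```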
Q94:

How does Node.js handle high-throughput TCP or WebSocket apps?

Senior

Answer

Node maintains persistent connections, processes messages as streams, and uses buffers efficiently. Its event-driven sockets allow tens of thousands of connections with minimal overhead.
Quick Summary: For high-throughput TCP/WebSocket: use Node.js clustering (one worker per CPU core), tune the OS TCP backlog and socket buffer sizes, use binary protocols over JSON where possible, disable Nagle's algorithm (socket.setNoDelay(true)) for low-latency, and stream data instead of buffering. Libraries like uWebSockets.js are highly optimized for this.
Q95:

How do you design a fault-tolerant Node.js system?

Senior

Answer

Use supervisors like PM2 or Kubernetes, implement circuit breakers, retries, graceful degradation, dead-letter queues, and distributed load balancing for self-healing architectures.
Quick Summary: Fault-tolerant Node.js: use a process manager (PM2) to restart crashed processes automatically, implement circuit breakers for downstream services (opossum), handle Promise rejections and uncaught exceptions, design APIs to be idempotent, use retries with exponential backoff, and deploy across multiple instances so one crash doesn't cause downtime.
Q96:

How does garbage collection impact Node.js performance?

Senior

Answer

GC pauses can affect latency. Optimize by reducing object churn, using pooling, limiting memory consumption, and tuning V8 GC flags for performance.
Quick Summary: V8's garbage collector (GC) pauses JS execution to collect unreachable objects. Short pauses (minor GC for the young generation) are nearly imperceptible. Full GC (major/compaction) can pause for tens of milliseconds. In production, avoid creating excessive short-lived objects, pre-allocate large buffers, and monitor heap size to catch leaks before they cause GC pressure.
Q97:

What are advanced techniques for debugging memory leaks?

Senior

Answer

Use heap snapshots, memory flame graphs, async hooks, and allocation tracking. Identify retained objects, growing listeners, open timers, or closures causing leaks.
Quick Summary: Debug memory leaks with: process.memoryUsage() to track heap growth, heap snapshots (Chrome DevTools for Node via --inspect) to compare before/after snapshots and find what grew, leak detectors like clinic.js (from NearForm), and checking for common culprits — global variables accumulating data, event listeners never removed, unbounded caches.
Q98:

How does Node.js provide request isolation in a single thread?

Senior

Answer

Node isolates requests via async boundaries and execution contexts. For stricter isolation, use worker threads, VM sandboxes, or containerization.
Quick Summary: Node.js isolates requests through closures and the call stack — each async operation has its own execution context via callbacks or async/await. AsyncLocalStorage (built on async_hooks, available since v12.17 and stable since v16) propagates request-scoped context (like a request ID or user info) through async operations without passing it as a parameter everywhere.
Q99:

What is the role of async_hooks in Node internals?

Senior

Answer

async_hooks track async operation lifecycles. Used for logging, request tracing, correlation IDs, and context propagation.
Quick Summary: async_hooks is a Node.js module that tracks async resource lifecycle — creation, before callback, after callback, destroy. It's the foundation for AsyncLocalStorage. Internally, each async operation gets an async ID; async_hooks fires events at each lifecycle stage. Used by APM tools to trace request context across async boundaries automatically.
Q100:

How do you manage distributed transactions in Node.js?

Senior

Answer

Use saga patterns, event-based compensation, idempotent operations, and message queues. Avoid strict ACID across services.
Quick Summary: True distributed transactions across multiple services are hard — there's no two-phase commit in microservices. Use the Saga pattern: a sequence of local transactions with compensating transactions for rollback. Or the Transactional Outbox pattern: write to the database and an outbox table atomically; a separate process publishes the event. Avoid distributed transactions if possible.
Q101:

How do you build a high-performance streaming pipeline?

Senior

Answer

Use backpressure-aware pipelines, transform streams, efficient chunking, and the pipeline() API to safely chain streams.
Quick Summary: High-performance streaming: use Node.js Transform streams to process data chunk by chunk, pipe between stages (read → transform → write) without loading everything into memory, apply backpressure automatically via pipe(), use streams.pipeline() for proper error handling, and avoid converting streams to strings unnecessarily.
Q102:

How do you mitigate Node.js supply chain risks?

Senior

Answer

Mitigate risks using npm audit, dependency pinning, lockfile integrity, and security scanners like Snyk and OSSAR.
Quick Summary: Supply chain risks: malicious npm packages, typosquatting (packages whose names are one typo away from popular ones, hoping you misspell express), dependency confusion attacks. Mitigate with: lockfiles (package-lock.json), npm audit regularly, npm audit signatures and package provenance, pin exact versions for critical deps, use private registries for internal packages, and review dependencies with socket.dev or Snyk.
Q103:

What is zero-downtime deployment in Node.js?

Senior

Answer

Achieved using blue-green deployments, rolling updates, graceful connection draining, and orchestration tools like Kubernetes.
Quick Summary: Zero-downtime deployment: deploy new instances first, run readiness checks, shift traffic gradually (rolling or blue-green), drain old instances before terminating. With PM2: pm2 reload does rolling restart. With Kubernetes: rolling update deployment strategy. The key is overlapping old and new versions briefly during transition.
Q104:

How does Node.js handle TLS/SSL and its performance cost?

Senior

Answer

TLS operations are CPU-intensive. Node relies on OpenSSL; performance depends on cipher suites, key sizes, and session reuse.
Quick Summary: Node.js TLS uses OpenSSL under the hood. TLS handshakes are CPU-intensive — the first connection requires key exchange and certificate verification. Subsequent connections on the same keep-alive TCP connection reuse the TLS session, avoiding re-handshake. In production, terminate TLS at nginx or a load balancer to offload crypto work from Node.js.
Q105:

How do you design a plugin-based Node.js architecture?

Senior

Answer

Use dependency injection, configuration-driven module loading, versioned modules, and isolated service layers for modular design.
Quick Summary: Plugin-based architecture: define a plugin interface (a function that receives app context and registers routes/services), load plugins at startup from a directory or config, use dependency injection so plugins don't import each other directly. Fastify has a native plugin system with encapsulation. This keeps the core small and features composable.
Q106:

What are Unix domain sockets and why use them?

Senior

Answer

Domain sockets provide fast interprocess communication with lower latency than TCP. Often used between Nginx and Node.
Quick Summary: Unix domain sockets are IPC (Inter-Process Communication) channels using the local filesystem instead of TCP — no network stack overhead, no port allocation, faster than localhost TCP. Use them when Node.js and a database/redis run on the same machine, or when communicating between nginx and Node.js — measurably lower latency and higher throughput.
Q107:

What are advanced caching patterns in Node.js?

Senior

Answer

Use multi-layer caching including in-memory, Redis, CDN, and memoization. Apply TTL rules, stale-while-revalidate, and invalidation strategies.
Quick Summary: Advanced caching patterns: Cache-aside (app checks cache → miss → fetch from DB → store in cache), Write-through (write to cache and DB together), Write-behind (write to cache, async write to DB), Read-through (cache fetches from DB on miss automatically), Cache stampede prevention (probabilistic early expiration or locking). Match pattern to your read/write ratio.
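A cache-aside sketch with an in-memory Map standing in for Redis and a stubbed DB (all names are illustrative; production code adds TTLs and a size bound):

```javascript
const cache = new Map(); // stand-in for Redis
let dbReads = 0;
const db = { // stubbed database
  get: async (id) => { dbReads += 1; return { id, name: `user${id}` }; },
};

// Cache-aside: check the cache; on a miss, fetch from the DB and store.
async function getUser(id) {
  if (cache.has(id)) return cache.get(id);
  const user = await db.get(id);
  cache.set(id, user); // production: set a TTL and cap the size (LRU)
  return user;
}

(async () => {
  await getUser(1);
  await getUser(1); // second read served from cache
  console.log('db reads:', dbReads); // 1
})();
```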
Q108:

How do you prevent race conditions in Node.js?

Senior

Answer

Use atomic DB operations, distributed locks, mutexes, or job queues. Avoid shared mutable state in the event loop.
Quick Summary: Node.js is single-threaded so JS execution is sequential — no true data races within JS. Race conditions still occur across async operations: two requests read the same record simultaneously, both decide to update, the second overwrites the first. Fix with optimistic locking (check version before write), database-level transactions, or Redis SETNX for distributed locks.
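An optimistic-locking sketch with a Map standing in for the database (illustrative names):

```javascript
const store = new Map([['acct', { balance: 100, version: 1 }]]);

// The write only applies if the version is unchanged since we read it —
// a compare-and-swap at the data layer. Losers must re-read and retry.
function updateIfUnchanged(key, expectedVersion, newBalance) {
  const row = store.get(key);
  if (row.version !== expectedVersion) return false; // someone wrote first
  store.set(key, { balance: newBalance, version: row.version + 1 });
  return true;
}

// Two "concurrent" writers both read version 1; only the first one wins.
console.log(updateIfUnchanged('acct', 1, 90)); // true
console.log(updateIfUnchanged('acct', 1, 80)); // false
```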
Q109:

What is vertical vs horizontal scaling in Node.js?

Senior

Answer

Vertical scaling increases machine power; horizontal scaling adds more instances. Node typically scales horizontally.
Quick Summary: Vertical scaling: add more CPU/RAM to the server — limited by hardware ceiling. Horizontal scaling: add more Node.js instances behind a load balancer — scales nearly linearly and is cheaper. Node.js is designed for horizontal scaling — keep instances stateless (sessions in Redis), use a load balancer, and scale out to handle traffic growth.
Q110:

How do you tune a Node.js app for low latency?

Senior

Answer

Avoid blocking operations, reduce GC pauses, use streams, optimize DB queries, enable keep-alive, and minimize middleware overhead.
Quick Summary: Low latency tuning: use keep-alive connections (avoid TCP reconnects), disable Nagle's algorithm (socket.setNoDelay) for low latency, use fast serialization (fast-json-stringify) instead of generic JSON.stringify on hot paths, avoid synchronous I/O, keep the event loop free of heavy work (worker threads for CPU tasks), pre-warm the JIT (run warmup requests on startup), and profile with clinic.js flamegraphs.
Q111:

What are advanced routing patterns in Node frameworks?

Senior

Answer

Patterns include route grouping, prefixing, controller-based routing, lazy routes, and middleware stacking for large applications.
Quick Summary: Advanced routing: hierarchical routers (Router.use with sub-routers), dynamic route loading from directory structure, middleware composition per route group, route-level caching, parametric routes with validation (Fastify schema validation), and versioned routing (/api/v1, /api/v2) with automatic version detection from headers or URL prefix.
Q112:

How do you manage large-scale API versioning?

Senior

Answer

Use URI, header, or content negotiation versioning. Maintain separate controllers and implement deprecation workflows.
Quick Summary: Large-scale API versioning: URL prefix (/v1/, /v2/), header-based versioning (API-Version: 2), deprecation headers with sunset dates, automated versioning tools, keeping old versions alive behind feature flags, and a clear deprecation policy (announce → warn → sunset → remove). Document each version's lifecycle in your API changelog.
Q113:

What is distributed cache stampede and prevention methods?

Senior

Answer

Prevent stampede using locks, request coalescing, stale responses, or probabilistic expirations.
Quick Summary: Cache stampede (thundering herd): when a cache key expires and many requests simultaneously go to the database. Prevention: probabilistic early expiration (start refreshing before it expires), soft expiration (serve stale while refreshing in background), cache locking (only one request fetches; others wait for the lock), or pre-computation (scheduled refresh before expiry).
Q114:

How do you optimize Node.js for high memory workloads?

Senior

Answer

Use streaming, chunking, worker threads, pagination, simpler object structures, and buffers over large objects.
Quick Summary: For high-memory workloads: use streams instead of loading entire datasets into memory, tune V8 heap size (--max-old-space-size), pre-allocate Buffers and reuse them instead of allocating per request, avoid storing large objects in global caches without a size limit (use LRU-cache with a max size), and use native addons for memory-efficient data structures.
Q115:

What is the circuit breaker pattern in Node microservices?

Senior

Answer

Circuit breakers block calls to unstable services and provide fallbacks, preventing cascading failures.
Quick Summary: Circuit breaker pattern: wrap calls to external services in a circuit. After N consecutive failures, the circuit "opens" — subsequent calls fail immediately without hitting the service (giving it time to recover). After a timeout, the circuit goes to "half-open" — one test request is allowed. Success closes the circuit; failure keeps it open. Prevents cascade failures.
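A minimal breaker sketch — libraries like opossum add proper half-open probes, metrics, and fallbacks on top of this idea:

```javascript
// After `threshold` consecutive failures the circuit opens and calls
// fail fast; after `resetMs` one half-open trial call is allowed.
function circuitBreaker(fn, { threshold = 3, resetMs = 5000 } = {}) {
  let failures = 0, openedAt = 0;
  return async function call(...args) {
    if (failures >= threshold) {
      if (Date.now() - openedAt < resetMs) throw new Error('circuit open');
      failures = threshold - 1; // half-open: allow one trial call
    }
    try {
      const result = await fn(...args);
      failures = 0; // success closes the circuit
      return result;
    } catch (err) {
      failures += 1;
      if (failures >= threshold) openedAt = Date.now();
      throw err;
    }
  };
}
```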
Q116:

How do you run background jobs reliably in Node.js?

Senior

Answer

Use queues like Bull or bee-queue, or worker threads. Implement retries, DLQs, scheduling, and monitoring.
Quick Summary: Reliable background jobs: use a queue (Bull, BullMQ with Redis) to persist jobs — if the server crashes, jobs aren't lost. Add retries with exponential backoff for transient failures. Use job deduplication to prevent duplicate work. Monitor job queue depth as a health metric. For critical jobs, use at-least-once delivery with idempotent job handlers.
Q117:

What are idempotent APIs and why are they needed?

Senior

Answer

Idempotent APIs produce the same result regardless of repeated calls, crucial for retries and reliability.
Quick Summary: Idempotent APIs produce the same result whether called once or multiple times with the same input. Critical for retry logic — if a payment request times out and the client retries, you must not charge twice. Implement via idempotency keys (client sends a unique ID per operation; server deduplicates using it). PUT and DELETE are naturally idempotent; POST is not.
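An idempotency-key sketch — an in-memory Map stands in for Redis with a TTL, and a production version must also handle two in-flight requests carrying the same key:

```javascript
const processed = new Map(); // idempotency key -> stored result

// The first request with a key executes; retries with the same key
// get the stored result back instead of running the operation again.
async function chargeOnce(idempotencyKey, doCharge) {
  if (processed.has(idempotencyKey)) return processed.get(idempotencyKey);
  const result = await doCharge();
  processed.set(idempotencyKey, result);
  return result;
}

let charges = 0;
const charge = async () => ({ chargeId: ++charges });

(async () => {
  await chargeOnce('key-1', charge);
  await chargeOnce('key-1', charge); // client retry: no double charge
  console.log('charges made:', charges); // 1
})();
```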
Q118:

How do you analyze CPU bottlenecks in Node.js?

Senior

Answer

Use CPU profiles, flame graphs, and V8 inspector to detect heavy loops, JSON parsing, regex usage, or encryption overheads.
Quick Summary: CPU bottleneck analysis: use clinic.js flamegraph to visualize where V8 spends time, or run with --prof and process the v8 log with node --prof-process. Common culprits: heavy serialization (JSON.stringify on huge objects), regex on large strings, synchronous crypto, and bcrypt hashing (use async version). Move CPU work to worker threads.
Q119:

How do you implement distributed tracing in Node.js?

Senior

Answer

Use correlation IDs, async_hooks-based context tracking, and tools like OpenTelemetry, Jaeger, or Zipkin.
Quick Summary: Distributed tracing tracks a request across multiple services. Use OpenTelemetry (OTel) — instrument Node.js with @opentelemetry/sdk-node, propagate trace context via HTTP headers (W3C Trace Context standard), and export spans to Jaeger, Zipkin, or Datadog. Each service adds a span to the trace — you see the full request path and timing across all services.
Q120:

What is graceful error recovery in Node.js?

Senior

Answer

Use retries, fallback logic, circuit breakers, timeouts, and structured error responses to maintain uptime during failures.
Quick Summary: Graceful error recovery: expect failures and handle them at every boundary. Use try/catch in async functions, domain-specific error classes for different failure types, circuit breakers for external service calls, retry with backoff for transient errors, and a global uncaughtException/unhandledRejection handler as a last resort (log, alert, restart cleanly).
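The retry-with-backoff piece of that toolkit can be sketched in a few lines. The attempt count, base delay, and jitter below are illustrative defaults, not prescribed values:

```javascript
// Sketch: retry with exponential backoff plus jitter for transient failures.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function retryWithBackoff(fn, { attempts = 3, baseMs = 100 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (i < attempts - 1) {
        // Delays grow 100ms, 200ms, 400ms, ... with random jitter added
        // so many clients retrying at once do not stampede the service.
        const delay = baseMs * 2 ** i + Math.random() * baseMs;
        await sleep(delay);
      }
    }
  }
  throw lastErr; // all attempts exhausted: surface the last error
}
```

In practice you would retry only errors classified as transient (timeouts, 503s), not validation or auth failures.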
Q121:

How would you redesign the Node.js event loop for ultra-low latency systems?

Expert

Answer

Minimize GC interruptions, offload CPU-heavy tasks to native modules, reduce promise allocations, use real-time thread scheduling, pinned threads, predictable memory allocation, and remove unnecessary async layers.
Quick Summary: For ultra-low latency, the event loop polling mechanism (libuv epoll/kqueue) would need tuning — using busy-wait polling instead of sleep-based waiting, eliminating GC pauses via object pooling, pre-JIT-compiling hot code paths at startup, and pinning the Node.js process to a specific CPU core to avoid context-switching overhead.
Q122:

How does Node.js track async context at runtime and what are its limitations?

Expert

Answer

Node uses async_hooks to track async boundaries. Limitations include overhead, incomplete coverage for some libraries, and microtask behavior that complicates context propagation.
Quick Summary: Node.js tracks async context via async_hooks — each async operation gets an async resource with an ID. When a callback runs, Node restores the async context of its parent. AsyncLocalStorage uses this to propagate key-value stores through async chains automatically. Limitations: performance overhead in high-throughput paths, and not all async sources support it perfectly.
Q123:

How do you build lock-free concurrent algorithms in Node.js?

Expert

Answer

Use SharedArrayBuffer and Atomics with CAS operations to avoid locks. Carefully design non-blocking structures to prevent race conditions and ensure progress without mutual exclusion.
Quick Summary: True lock-free algorithms in Node.js: since JS is single-threaded, simple reads and writes within one thread are inherently atomic. For cross-worker coordination, use SharedArrayBuffer with Atomics (compareExchange, add, wait) — these are true hardware-level atomic operations. Implement lock-free queues using Atomics.compareExchange for the head/tail indices.
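The CAS loop at the heart of such structures can be shown in isolation. This runs in a single thread for brevity; the pattern only pays off when the same SharedArrayBuffer is handed to worker threads:

```javascript
// Sketch: a compare-and-swap (CAS) increment on shared memory using Atomics.
const shared = new Int32Array(new SharedArrayBuffer(4)); // one 32-bit slot

function casIncrement(arr, index) {
  // Classic lock-free loop: read, compute, attempt to swap, and retry if
  // another thread changed the value between our read and our write.
  while (true) {
    const old = Atomics.load(arr, index);
    if (Atomics.compareExchange(arr, index, old, old + 1) === old) {
      return old + 1; // our swap won
    }
  }
}
```

Lock-free queue implementations apply the same loop to head and tail indices instead of a counter.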
Q124:

How do you detect and repair event loop stalls in real-time systems?

Expert

Answer

Monitor event loop delay, profile CPU usage, examine GC pauses, inspect slow promise chains, and offload heavy tasks to workers. Use flame graphs to isolate the offending code.
Quick Summary: Event loop stalls show up as high event loop lag (measured via perf_hooks.monitorEventLoopDelay). Detect in real-time by periodically sampling the event loop delay and alerting when it exceeds a threshold. Fix by identifying and moving blocking code to worker threads. Use clinic.js or 0x for flamegraph analysis to find the specific blocking call.
Q125:

What is a Node.js native addon and when is it needed?

Expert

Answer

Native addons are C/C++ modules compiled for Node. They are used for CPU-heavy tasks such as encryption, compression, image processing, or low-level system access.
Quick Summary: Native addons are C/C++ modules compiled to .node files that Node.js loads like a regular module. They're needed when: you need access to OS-level APIs, you want to use an existing C/C++ library, or you need absolute maximum CPU performance for a hot code path. Use node-addon-api (N-API) for a stable, version-independent C++ API.
Q126:

How do you optimize Node.js for strict memory predictability?

Expert

Answer

Avoid dynamic structures, preallocate buffers, reduce closures, use object pools, minimize large allocations, and tune V8 heap settings to prevent unpredictable GC behavior.
Quick Summary: For memory predictability: pre-allocate large buffers at startup and reuse them (Buffer Pool pattern), use typed arrays (Uint8Array, Float64Array) for fixed-size numeric data (avoids GC), set max heap size explicitly (--max-old-space-size), monitor heap usage and trigger controlled cleanup before GC pressure causes latency spikes.
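The Buffer Pool pattern mentioned above can be sketched as follows; sizes and the fallback policy are illustrative:

```javascript
// Sketch: a fixed-size buffer pool. All buffers are allocated once at
// startup and reused, so steady-state traffic causes no new allocations
// and therefore no GC pressure from per-request buffers.
class BufferPool {
  constructor(bufferSize, count) {
    this.bufferSize = bufferSize;
    this.free = [];
    for (let i = 0; i < count; i++) {
      this.free.push(Buffer.allocUnsafe(bufferSize)); // one-time allocation
    }
  }

  acquire() {
    // Pool exhausted? Fall back to a fresh allocation rather than blocking,
    // at the cost of momentary GC pressure.
    return this.free.pop() ?? Buffer.allocUnsafe(this.bufferSize);
  }

  release(buf) {
    buf.fill(0); // scrub before reuse so stale data never leaks between users
    this.free.push(buf);
  }
}
```

The trade-off is that callers must remember to `release()`; forgetting is a leak of pool capacity rather than memory, which is easier to monitor.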
Q127:

How does V8 handle hidden classes and why can misuse hurt performance?

Expert

Answer

V8 creates hidden classes for optimized object access. Changing object shape dynamically causes deoptimization, harming performance. Keep object structures consistent.
Quick Summary: V8 assigns a hidden class to each object based on its shape (property names and order). Objects with the same shape share a hidden class and get optimized machine code together. If you add properties in different orders or add properties after creation, V8 assigns a new hidden class — deoptimizing those objects. Always initialize all properties in the constructor and in the same order.
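Hidden classes cannot be observed from plain JavaScript, but the shape discipline they reward can be illustrated. The assertions here check only the observable property order, which is what determines the shape:

```javascript
// Good pattern: every Point initializes the same properties in the same
// order, so V8 can share one hidden class across all instances.
class Point {
  constructor(x, y) {
    this.x = x; // always first
    this.y = y; // always second: every Point gets the same shape
  }
}

// Anti-pattern: the same logical data, but the two branches produce
// objects whose properties arrive in different orders, so V8 assigns
// them different hidden classes and cannot optimize them together.
function makePointBad(flip) {
  const p = {};
  if (flip) { p.y = 2; p.x = 1; }
  else      { p.x = 1; p.y = 2; }
  return p;
}
```

The same advice applies to adding properties after construction: assigning `p.z = 3` later forces a hidden-class transition for that object.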
Q128:

How can you build a custom garbage collection strategy with Node + native code?

Expert

Answer

Keep large allocations in native memory and manage them manually. Native modules can expose custom allocators, bypassing the V8 GC and enabling predictable memory lifecycles.
Quick Summary: V8 doesn't expose GC control directly. For custom strategies: use weak references (WeakRef, FinalizationRegistry) to hold objects without preventing GC, implement object pooling in C++ native addons with explicit memory management, or use node --expose-gc (dev only) to call global.gc() at strategic points. In practice, tune GC via heap sizing flags rather than controlling it directly.
Q129:

What is a tick-based starvation pattern and how to prevent it?

Expert

Answer

Starvation occurs when microtasks continuously queue themselves. Prevent via batching, inserting setImmediate breaks, or rearchitecting promise recursion.
Quick Summary: Tick-based starvation: a loop that adds microtasks (Promise callbacks) faster than they run can prevent the event loop from ever reaching I/O or timer callbacks. Example: a Promise chain that resolves and immediately creates another Promise. Fix by breaking the loop with setImmediate() to yield to the event loop between iterations.
Q130:

How do you design a fully distributed Node.js system with no single points of failure?

Expert

Answer

Use multi-zone deployment, redundant brokers, replicated DBs, load balancing, leader election, and self-healing mechanisms across all services.
Quick Summary: Fully distributed Node.js: no single server, stateless services (all state in Redis/DB), message queues (Kafka) for async communication, service discovery (Consul), distributed config (etcd), circuit breakers between services, health checks and auto-restart at every level, multi-region deployment with geo-routing, and chaos engineering to verify fault tolerance.
Q131:

How do you implement a custom scheduler inside a Node.js application?

Expert

Answer

Use worker threads or child processes with priority queues. Employ cooperative multitasking using event loop checkpoints or manual yielding.
Quick Summary: A custom scheduler inside Node.js: maintain a priority queue of tasks, use setImmediate() to yield between batches, implement time-slicing (run each task for max N ms, then yield), track task deadlines and deprioritize overdue ones. This is useful for multi-tenant systems where you need fair task scheduling across tenants without blocking one tenant's work.
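A compact sketch of that idea, with an illustrative time slice and a sorted array standing in for a real priority queue:

```javascript
// Sketch: a cooperative, priority-based scheduler that time-slices work
// and yields to the event loop between batches via setImmediate.
class Scheduler {
  constructor(sliceMs = 10) {
    this.sliceMs = sliceMs;
    this.queue = []; // { priority, fn, resolve, reject }
    this.running = false;
  }

  schedule(fn, priority = 0) {
    return new Promise((resolve, reject) => {
      this.queue.push({ priority, fn, resolve, reject });
      this.queue.sort((a, b) => b.priority - a.priority); // highest first
      if (!this.running) {
        this.running = true;
        setImmediate(() => this.#drain()); // start on the next loop turn
      }
    });
  }

  #drain() {
    const started = Date.now();
    // Run tasks until the time slice is spent, then yield to the loop
    while (this.queue.length && Date.now() - started < this.sliceMs) {
      const task = this.queue.shift();
      try { task.resolve(task.fn()); } catch (e) { task.reject(e); }
    }
    if (this.queue.length) setImmediate(() => this.#drain());
    else this.running = false;
  }
}
```

A multi-tenant variant would key the queue by tenant and rotate between tenants' queues so one tenant's backlog cannot starve the others.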
Q132:

How does cluster.loadBalance work internally?

Expert

Answer

The master process distributes incoming connections to workers. Uses round-robin on Linux and OS-level scheduling on Windows.
Quick Summary: cluster.loadBalance isn't a built-in API — cluster module uses round-robin (default on Linux/Mac) or OS-level distribution (on Windows) to assign connections to workers. Internally, the master process receives all TCP connections and passes them to worker processes via IPC. Each worker is an independent process with its own event loop.
Q133:

How do you process millions of messages per second in Node.js?

Expert

Answer

Use zero-copy buffers, binary protocols, Kafka/Redis pipelines, cluster workers, streaming APIs, domain sockets, and horizontal scaling.
Quick Summary: Millions of messages per second in Node.js: use binary serialization (protobuf, msgpack) instead of JSON, use native bindings for the messaging layer (e.g., node-rdkafka for Kafka), batch messages before processing, use worker threads for decode/encode operations, keep the event loop free of heavy processing, and benchmark with realistic payloads.
Q134:

Difference between async resource vs async operation in Node internals?

Expert

Answer

Async resources track handles like sockets or timers. Async operations are actions performed by these resources. Distinguishing them aids context tracking.
Quick Summary: An async resource (in async_hooks terms) is the object that represents the context of an async operation — it's created when you initiate an async task and tracks its lifecycle (init, before, after, destroy). An async operation is the actual work happening asynchronously (I/O read, timer, Promise resolution). The resource wraps the operation for tracking.
Q135:

How do you build a custom transport protocol in Node.js?

Expert

Answer

Use raw TCP/UDP sockets, define packet frames, implement chunking, ACK logic, retries, and error correction. Avoid HTTP overhead in ultra-low latency scenarios.
Quick Summary: Build a custom transport protocol in Node.js: use net.createServer() for raw TCP, define a frame format (length-prefixed or delimiter-based), implement a Transform stream to parse frames from the raw byte stream, handle partial frames and buffering, add a connection state machine for handshake/session/teardown. Test with high message rates and lossy network simulation.
Q136:

Difference between cooperative and preemptive scheduling in Node workers?

Expert

Answer

Within a single thread, Node relies on cooperative scheduling: code must yield manually for fairness. Preemptive scheduling, where the OS interrupts a task at any point, applies between worker threads but never within one thread's JavaScript execution.
Quick Summary: Node.js uses cooperative scheduling — JS code must explicitly yield (await, callback) for other work to run. Worker threads use OS preemptive scheduling — the OS can context-switch threads at any time. In the main thread, long-running JS can starve everything. Worker threads can't starve the main thread, but they compete for CPU via OS scheduling.
Q137:

How do you debug memory leaks in distributed Node.js environments?

Expert

Answer

Capture remote heap snapshots, compare over time, inspect retained objects, trace async refs, and correlate memory growth with workload patterns.
Quick Summary: Distributed memory leak debugging: use consistent heap snapshot collection across all instances (triggered by a management endpoint), ship heap stats to a central monitoring system, compare heap growth rates across instances to isolate if the leak is instance-specific or global, and use clinic.js heapprofiler with production traffic sampling.
Q138:

What is the role of libuv and how does it work with V8?

Expert

Answer

libuv manages I/O, timers, and thread pools. V8 executes JS and handles memory. Both coordinate via event loop phases to run async operations.
Quick Summary: libuv is the C library that provides Node.js's event loop, async I/O, thread pool, and timer management. It handles file system, DNS, networking, and child processes asynchronously across all platforms. V8 executes JavaScript; libuv handles everything outside V8. When you await a file read, V8 suspends the async function and libuv's thread pool performs the actual disk I/O.
Q139:

How do you ensure strong consistency in distributed Node.js systems?

Expert

Answer

Use quorum reads/writes, distributed locks, idempotent updates, ordered logs, and consensus protocols such as Raft.
Quick Summary: Strong consistency in distributed Node.js: use a single-leader database (PostgreSQL) for writes with read replicas, implement optimistic concurrency (version checks before writes), use distributed transactions carefully via sagas, leverage Redis SET with NX+EX for distributed locks, and design idempotent operations so retries are safe.
Q140:

How do you design multi-region failover for Node.js APIs?

Expert

Answer

Use global load balancers, DNS routing, replicated DBs, stateless JWT sessions, and latency-based region switching with automated failover.
Quick Summary: Multi-region failover: deploy Node.js instances in multiple regions, use global DNS routing (Route 53 latency-based or health-check failover), keep databases replicated cross-region (Aurora Global Database, CockroachDB), use a CDN for static content, and ensure sessions/state are stored in a globally accessible store (Redis Enterprise, DynamoDB global tables).
Q141:

How do you optimize Node.js for massive file ingestion?

Expert

Answer

Use streams, chunk processing, backpressure handling, zero-copy transfers, and minimal memory buffering for GB/TB-scale ingestion.
Quick Summary: Massive file ingestion optimization: stream files directly without buffering (fs.createReadStream + pipeline), process in chunks, use worker threads for CPU-intensive parsing, limit concurrency to avoid overwhelming I/O and memory, use back-pressure to control ingestion rate, and write to the database in batches rather than per-record inserts.
Q142:

Explain nested microtask queue behavior during promise chains.

Expert

Answer

Nested promises can monopolize the microtask queue, blocking I/O. Break chains with setImmediate or timers to restore fairness.
Quick Summary: Each Promise.then() creates a microtask. When that microtask runs, if it returns another Promise, the resolution of that inner Promise schedules another microtask — before any macrotask (I/O, setTimeout) runs. Deep promise chains with synchronous inner Promises can fill the microtask queue deeply, delaying I/O callbacks noticeably.
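The ordering is easy to demonstrate: a zero-delay timer scheduled first still runs after an entire `.then()` chain queued afterwards.

```javascript
// Demonstration: all queued microtasks run before the next macrotask,
// so a .then() chain delays an already-due timer.
const order = [];

setTimeout(() => order.push('timeout'), 0); // macrotask, scheduled first

Promise.resolve()
  .then(() => order.push('then-1'))
  .then(() => order.push('then-2'))
  .then(() => order.push('then-3')); // three microtasks, queued in turn
```

After the current tick, `order` is `['then-1', 'then-2', 'then-3', 'timeout']`: the microtask queue drains completely before the event loop reaches the timers phase.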
Q143:

How do you implement distributed rate limiting in Node.js?

Expert

Answer

Use Redis or a distributed store for counters. Implement token bucket, sliding window, or leaky bucket algorithms across nodes.
Quick Summary: Distributed rate limiting across multiple Node.js instances requires a shared counter store. Use Redis with atomic increment (INCR + EXPIRE) or Redis sorted sets for sliding window rate limiting. All Node.js instances read/write the same Redis counter, so limits are enforced consistently regardless of which instance handles the request.
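To make the sliding-window algorithm concrete, here is a single-process sketch. A plain Map stands in for the shared store; in the distributed version the timestamps would live in a Redis sorted set (ZADD to record, ZREMRANGEBYSCORE to expire, ZCARD to count) so every instance enforces the same limit:

```javascript
// Local sketch of sliding-window rate limiting. The Map stands in for a
// Redis sorted set shared by all instances.
const windows = new Map(); // clientId -> array of request timestamps (ms)

function allowRequest(clientId, limit, windowMs, now = Date.now()) {
  const cutoff = now - windowMs;
  // Drop timestamps that have slid out of the window
  const timestamps = (windows.get(clientId) ?? []).filter((t) => t > cutoff);
  if (timestamps.length >= limit) {
    windows.set(clientId, timestamps);
    return false; // over the limit within the current window
  }
  timestamps.push(now);
  windows.set(clientId, timestamps);
  return true;
}
```

The `now` parameter exists only to make the sketch testable; real callers would omit it.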
Q144:

What are Node.js limitations in CPU-bound workloads?

Expert

Answer

Worker threads still carry message-passing overhead, shared-memory constraints, and GC pauses. Compiled languages outperform Node for heavy CPU tasks.
Quick Summary: Node.js is single-threaded — CPU-intensive operations (image processing, ML inference, complex computation) block the event loop and freeze all concurrent requests. Worker threads help but don't change the fundamental single-process architecture. For heavy CPU workloads, Node.js should delegate to specialized services (Python, Go, Rust) or use native addons.
Q145:

How do you implement the transactional outbox pattern in Node.js?

Expert

Answer

Write outgoing events inside the same DB transaction, then a background worker reads and publishes them reliably to avoid lost messages.
Quick Summary: Transactional outbox: within a database transaction, write both the business data and an outbox message record atomically. A separate relay process polls the outbox table and publishes messages to Kafka/RabbitMQ, then marks them as sent. This guarantees at-least-once delivery aligned with database writes: no messages are lost even on a crash. Pair it with idempotent consumers to get effectively-once processing, since the relay may republish a message it crashed before marking as sent.
Q146:

What is TCP head-of-line blocking and how do you fix it?

Expert

Answer

Head-of-line blocking occurs when a lost TCP packet halts delivery of all bytes queued behind it on that connection. Mitigate by spreading traffic across connections or moving to QUIC/HTTP/3, which runs over UDP.
Quick Summary: TCP delivers bytes strictly in order, so a single lost packet stalls everything behind it until retransmission; this affects even HTTP/2, because all multiplexed streams share one TCP connection. (HTTP/2 does eliminate the separate application-level head-of-line blocking of HTTP/1.1, where a slow response blocks subsequent requests on the same connection.) The transport-level fix is HTTP/3/QUIC, whose streams are delivered independently over UDP, so loss on one stream does not stall the others. For internal services, gRPC over HTTP/2 removes the HTTP/1.1 ordering problem but still inherits TCP-level blocking.
Q147:

How do you design CQRS architecture with Node.js?

Expert

Answer

Split read/write models, use event sourcing for writes, maintain async read models, and use message brokers for decoupling services.
Quick Summary: CQRS (Command Query Responsibility Segregation): separate write (command) and read (query) models. Commands go to write services that update the database; they publish events (Kafka, RabbitMQ). Read services consume events and maintain denormalized read models (Redis, Elasticsearch) optimized for queries. Node.js works well for both sides with async event processing.
Q148:

How do you isolate workloads in multi-tenant Node.js systems?

Expert

Answer

Use worker pools, sandboxed VM contexts, strict DB row-level separation, per-tenant caching, and request rate caps.
Quick Summary: Multi-tenant workload isolation in Node.js: use AsyncLocalStorage to propagate tenant context through async operations, enforce per-tenant rate limits and resource quotas, isolate tenant data at the database level (row-level security or separate schemas), use worker threads for CPU-intensive per-tenant work, and monitor per-tenant latency and error rates separately.
Q149:

How do you execute canary deployments for Node.js apps?

Expert

Answer

Release new versions to a small traffic subset, monitor metrics, then expand rollout. Rollback instantly if issues arise.
Quick Summary: Canary deployment: route a small percentage (1-5%) of traffic to the new Node.js version using weighted load balancing (nginx, ALB, or a service mesh). Monitor error rates, latency, and business metrics for the canary. Gradually increase traffic if stable; roll back immediately if metrics degrade. Canary reduces blast radius of bad deployments.
Q150:

Difference between backpressure and head-pressure in distributed systems?

Expert

Answer

Backpressure is reactive flow control: a downstream consumer signals producers to slow down. Head-pressure describes congestion building on the producer side when downstream stages are overloaded. Mitigate via queues, throttling, and load shedding.
Quick Summary: Backpressure is when a downstream consumer signals upstream to slow down ("stop sending, I'm full"), as Node.js streams do when write() returns false and the producer waits for 'drain'. Head-pressure (a less standard term) is the proactive counterpart: a producer limits its send rate up front, for example via flow-control window sizes, rather than reacting to a signal. In distributed systems, combine both: reactive backpressure at each hop plus proactive rate limits and load shedding at the edges.


Ready to level up? Start Practice