Node.js Event Loop & Scaling Interview Questions & Answers (2025)
35 questions available
Mid
Answer
Node.js uses the event loop + libuv threadpool.
The main thread handles:
Event loop orchestration
Callback scheduling
Microtasks and timers
I/O is delegated to:
OS kernel (epoll/kqueue/IOCP)
libuv threadpool (4 threads by default)
This non-blocking architecture allows massive concurrency without thread explosion.
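A minimal sketch of the non-blocking model (the file name is illustrative): the synchronous log runs before the read callback, because the I/O is delegated while the main thread keeps going.

const fs = require('fs');

// The read is handed off to libuv/the OS; the main thread is not blocked.
fs.readFile('./big-file.txt', (err, data) => {
  if (err) throw err;
  console.log('read complete:', data.length, 'bytes');
});

console.log('this prints first; the event loop keeps running');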
Mid
Answer
The event loop cycles through 6 phases:
Timers (setTimeout, setInterval)
Pending Callbacks (system-level callbacks)
Idle/Prepare (internal use)
Poll (I/O events)
Check (setImmediate)
Close Callbacks
Microtasks (the process.nextTick queue first, then the Promise queue) are drained after each callback completes (since Node 11; previously only between phases), before control returns to the loop.
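A minimal ordering sketch:

setTimeout(() => console.log('timeout'), 0);           // timers phase
setImmediate(() => console.log('immediate'));          // check phase
Promise.resolve().then(() => console.log('promise'));  // microtask
process.nextTick(() => console.log('nextTick'));       // drains before promises

// Typical output: nextTick, promise, timeout, immediate.
// (timeout vs immediate order is only guaranteed inside an I/O callback.)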
Mid
Answer
nextTick runs before I/O and before Promises.
If you recursively schedule process.nextTick(), its queue never empties, starving:
I/O
Timers
The rest of the event loop
The sketch below demonstrates the effect.
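function spin() {
  process.nextTick(spin); // the nextTick queue never drains
}
spin();

// This timer never fires: the loop never leaves the microtask checkpoint.
setTimeout(() => console.log('you will never see this'), 100);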
Mid
Answer
Microtasks:
Promises
process.nextTick()
Queue drained after each callback completes (after each phase in older Node versions)
Macrotasks:
Timers
setImmediate
I/O callbacks
Processed one phase at a time.
Mid
Answer
When tasks require CPU work or blocking system calls:
File system calls (fs.readFile and friends)
crypto (pbkdf2, scrypt, randomBytes)
Compression (zlib)
DNS resolution (dns.lookup)
Threadpool size:
4 by default; configurable via UV_THREADPOOL_SIZE (e.g. UV_THREADPOOL_SIZE=8), up to 1024 in modern libuv (128 in older versions).
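A sketch of threadpool saturation, assuming the default pool size of 4: the first four pbkdf2 calls run in parallel and the fifth waits for a free thread.

const crypto = require('crypto');

const start = Date.now();
for (let i = 1; i <= 5; i++) {
  crypto.pbkdf2('secret', 'salt', 500000, 64, 'sha512', () => {
    console.log(`task ${i} done at ${Date.now() - start}ms`);
  });
}
// Tasks 1-4 finish at roughly the same time; task 5 finishes noticeably
// later because it had to wait for a threadpool slot.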
Mid
Answer
Backpressure occurs when a writable stream cannot consume data as fast as the readable side produces it.
Example:
Reading a 5 GB file → sending it to an HTTP client.
If the client is slow, write() returns false and the readable stream must pause until 'drain' fires.
Handled automatically by:
pipe()
stream.pipeline()
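A sketch using pipeline(), which wires up backpressure and error handling for the whole chain (file names are illustrative):

const fs = require('fs');
const zlib = require('zlib');
const { pipeline } = require('stream');

pipeline(
  fs.createReadStream('./huge-input.log'),     // fast producer
  zlib.createGzip(),                           // transform
  fs.createWriteStream('./huge-input.log.gz'), // possibly slow consumer
  (err) => {
    if (err) console.error('pipeline failed:', err);
    else console.log('pipeline succeeded');
  }
);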
Mid
Answer
Only the JavaScript execution is single-threaded.
But Node internally uses:
Event loop thread
I/O worker threads
Cluster (multi-process)
Worker threads (multi-core JS execution)
Mid
Answer
Cluster:
Multi-process
Heavy isolation
Best for HTTP servers
Each process has its own event loop
Worker Threads:
Same process
Shared memory
Best for CPU-intensive tasks
Avoids IPC/serialization overhead (can share memory)
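A minimal cluster sketch (each forked process runs its own event loop and shares the listening port):

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) {
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died; forking a replacement`);
    cluster.fork();
  });
} else {
  http
    .createServer((req, res) => res.end(`handled by pid ${process.pid}\n`))
    .listen(3000);
}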
Mid
Answer
Node 20+ introduces:
Faster promise execution (V8 TurboFan improvements)
Low-overhead async context tracking
Web standard fetch implementation
Removes many process.nextTick hacks internally.
Together, these changes meaningfully improve async server performance.
Mid
Answer
JSON.parse() is CPU-bound.
It runs on the main event-loop thread, blocking:
Request handling
Timers
Socket reads
Solution:
Worker threads
Streaming JSON parser
Binary formats (BSON, MessagePack)
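A sketch of offloading the parse to a worker thread (the payload is illustrative):

const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  const hugeJson = JSON.stringify({ rows: Array.from({ length: 1e6 }, (_, i) => i) });
  const worker = new Worker(__filename, { workerData: hugeJson });
  worker.on('message', (count) => console.log('parsed rows:', count));
  worker.on('error', console.error);
  // The event loop stays free to serve requests while the worker parses.
} else {
  const parsed = JSON.parse(workerData); // CPU-bound work, off the main thread
  parentPort.postMessage(parsed.rows.length);
}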
Mid
Answer
Common leak sources:
Global variables
Event listeners not removed
Caches without TTL
Closure capturing large objects
Timers/Intervals not cleared
Use:
node --inspect
Chrome DevTools Heap Snapshot
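A sketch of one classic source, listeners that are added per request and never removed:

const { EventEmitter } = require('events');
const bus = new EventEmitter();

function handleRequest(requestState) {
  const onUpdate = () => console.log('update for', requestState.id);
  bus.on('update', onUpdate); // LEAK: one closure retained per request
  // Fix: remove it when the request ends, e.g.
  // res.on('close', () => bus.off('update', onUpdate));
}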
Mid
Answer
V8 uses generational GC:
New Space → Scavenge (fast copying collector)
Old Space → Mark-Sweep + Mark-Compact
Incremental GC (breaks marking work into slices)
Large heaps cause longer GC pauses → latency spikes.
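The generations can be observed at runtime with the built-in v8 module:

const v8 = require('v8');

for (const space of v8.getHeapSpaceStatistics()) {
  console.log(
    space.space_name,
    (space.space_used_size / 1024 / 1024).toFixed(1),
    'MB used'
  );
}
// Prints entries such as new_space and old_space, the generations above.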
Mid
Answer
spawn:
Streams stdout/stderr
Good for large output
exec:
Buffers the entire output in memory → memory-heavy
fork:
Spawns a new Node.js process
IPC channel enabled by default
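A sketch contrasting spawn and exec (the command is illustrative):

const { spawn, exec } = require('child_process');

// spawn streams output chunk by chunk; safe for large data.
const ls = spawn('ls', ['-lh', '/usr']);
ls.stdout.on('data', (chunk) => process.stdout.write(chunk));

// exec buffers the entire output into one string; small outputs only.
exec('ls -lh /usr', (err, stdout) => {
  if (err) throw err;
  console.log(stdout.length, 'bytes buffered in memory');
});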
Mid
Answer
Because it serializes independent operations:
for (const item of items) {
  await processItem(item); // each iteration waits for the previous one
}
Better, when the operations are independent ('items' and 'processItem' are placeholders):
await Promise.all(items.map(processItem));
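A runnable comparison using a fake async task:

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function main() {
  let t = Date.now();
  for (const _ of [1, 2, 3]) await delay(100); // serialized: ~300ms
  console.log('sequential:', Date.now() - t, 'ms');

  t = Date.now();
  await Promise.all([delay(100), delay(100), delay(100)]); // concurrent: ~100ms
  console.log('parallel:', Date.now() - t, 'ms');
}
main();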
Mid
Answer
Before Node 15:
Printed an UnhandledPromiseRejectionWarning (plus a deprecation warning) and kept running
Since Node 15:
The default is --unhandled-rejections=throw: the rejection becomes an uncaught exception and crashes the process
A process.on('unhandledRejection') handler overrides the default
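Registering a handler opts into logging instead of the default crash:

process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
});

Promise.reject(new Error('boom')); // logged; the process keeps running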
Mid
Answer
Crypto operations
Heavy sorting/filtering
PDF/image generation
Regex catastrophic backtracking
Large loops
JSON serialization/parsing
Mid
Answer
Uses OpenSSL bindings:
Key exchange, encryption and decryption are handled in C++
Socket I/O stays non-blocking on the event loop
The handshake's CPU work runs synchronously in OpenSSL on the main thread; only explicit crypto APIs (pbkdf2, scrypt, randomBytes) use the libuv threadpool.
Mid
Answer
setTimeout(fn,0) runs in timers phase
setImmediate(fn) runs in check phase
Inside an I/O callback, setImmediate always fires first; in the main module the order is non-deterministic.
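Inside an I/O callback the ordering is deterministic:

const fs = require('fs');

fs.readFile(__filename, () => {
  // We are in the poll phase; the check phase (setImmediate) comes
  // before the next timers phase, so 'immediate' always logs first here.
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
});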
Mid
Answer
If microtasks (Promises), timers, or process.nextTick callbacks flood each cycle.
The poll phase never gets to run I/O callbacks → the server appears frozen.
Mid
Answer
Worker threads
Cluster mode
Load balancing
Redis queue + background workers
Move CPU tasks to Rust/WASM native modules
Mid
Answer
Uses event loop + non-blocking socket I/O
Each socket registers event listeners
Heartbeat (ping/pong) handled by library (ws/socket.io)
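A minimal echo-server sketch, assuming the third-party ws package (Node core has no built-in WebSocket server):

// npm install ws
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({ port: 8080 });

wss.on('connection', (socket) => {
  // Each connection is just listeners on the single event loop;
  // there is no thread per socket.
  socket.on('message', (msg) => socket.send(`echo: ${msg}`));
});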
Mid
Answer
fs.readFile loads the entire file into RAM before the callback fires.
Instead use streaming:
fs.createReadStream()
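A sketch serving a large file with flat memory usage (the file name is illustrative):

const fs = require('fs');
const http = require('http');

http
  .createServer((req, res) => {
    // Streams the file in chunks; pipe() applies backpressure, so memory
    // stays flat even for multi-GB files.
    fs.createReadStream('./big-video.mp4').pipe(res);
  })
  .listen(3000);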
Mid
Answer
Node inserts a checkpoint after each callback (between phases in older versions) to:
Flush the process.nextTick queue
Flush the Promise microtask queue
This ensures microtasks always run before the next macrotask callback.
Mid
Answer
JavaScript code cannot create locks or mutexes by default, so the event-loop architecture inherently avoids classic deadlocks, unless:
A blocking loop freezes the main thread
A native addon locks up the libuv threadpool
Mid
Answer
Because fetch, streams, and long-running requests may need cancellation.
AbortController has been a global since Node 15, and the built-in fetch (Node 18+) accepts its signal.
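A cancellation sketch (the URL is illustrative):

const controller = new AbortController();
setTimeout(() => controller.abort(), 2000); // give up after 2s

fetch('https://example.com/slow', { signal: controller.signal })
  .then((res) => console.log('status:', res.status))
  .catch((err) => {
    if (err.name === 'AbortError') console.log('request cancelled');
    else throw err;
  });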
Mid
Answer
To offset Node's single-process limitations:
Shared in-memory caching across processes
Session management
Pub/Sub events
Queue processing
Redis fits Node's real-time strengths.
Mid
Answer
Global:
Runs before every route (registered with app.use)
Route-level:
Runs only for the specific route
Middleware ordering directly affects performance; keep global middleware lean.
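A sketch in Express (requireAuth is an illustrative name):

const express = require('express');
const app = express();

// Global: runs for every request, so keep it cheap.
app.use((req, res, next) => {
  console.log(req.method, req.url);
  next();
});

// Route-level: runs only where it is attached.
const requireAuth = (req, res, next) =>
  req.headers.authorization ? next() : res.status(401).end();

app.get('/admin', requireAuth, (req, res) => res.send('secret'));
app.listen(3000);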
Mid
Answer
Express:
Heavier middleware chain
Regex-based router
No built-in schema validation
Fastify:
Async-first design
JSON schema compilation (fast validation and serialization)
Faster radix-trie router
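A minimal Fastify sketch showing the schema compilation behind its fast serialization (assuming the fastify package):

// npm install fastify
const fastify = require('fastify')();

fastify.get('/user/:id', {
  schema: {
    response: {
      200: {
        type: 'object',
        properties: { id: { type: 'string' }, name: { type: 'string' } },
      },
    },
  },
}, async (req) => ({ id: req.params.id, name: 'Ada' })); // payload is illustrative

fastify.listen({ port: 3000 });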
Mid
Answer
Not closing file handles
Too many concurrent fs operations
HTTP keep-alive connections
ulimit too small
Fix:
ulimit -n 65535
Mid
Answer
Historical C-style API design:
callback(err, data)
It forces every call site to handle the error explicitly, since a try/catch around the original call cannot catch errors thrown later inside the async operation.
Still used in legacy code and core callback APIs.
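The convention in practice (the file name is illustrative):

const fs = require('fs');

// Error-first: the first callback argument is the (possibly null) error.
fs.readFile('./config.json', 'utf8', (err, data) => {
  if (err) {
    console.error('read failed:', err.message);
    return; // always return after handling the error
  }
  console.log('config length:', data.length);
});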
Mid
Answer
Use:
blocked-at module
clinic doctor
process.hrtime.bigint() delta
PM2 event-loop delay metric
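Node also ships a built-in event-loop delay histogram in perf_hooks:

const { monitorEventLoopDelay } = require('perf_hooks');

const histogram = monitorEventLoopDelay({ resolution: 20 }); // sample every 20ms
histogram.enable();

setInterval(() => {
  // Histogram values are reported in nanoseconds.
  console.log('p99 loop delay:', (histogram.percentile(99) / 1e6).toFixed(1), 'ms');
  histogram.reset();
}, 5000);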
Mid
Answer
Native Addons:
Tied to V8 version
Break on updates
N-API:
ABI-stable layer
Works across Node versions
Best for long-term modules
Mid
Answer
Used for:
Fast communication between worker threads
Avoiding slow JSON.stringify-based IPC
Zero-copy coordination via SharedArrayBuffer + Atomics
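A zero-copy signalling sketch between the main thread and a worker:

const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
  const shared = new Int32Array(new SharedArrayBuffer(4));
  new Worker(__filename, { workerData: shared }); // shared, not copied
  setTimeout(() => {
    Atomics.store(shared, 0, 42); // write without serialization
    Atomics.notify(shared, 0);    // wake the waiting worker
  }, 100);
} else {
  Atomics.wait(workerData, 0, 0); // blocks this worker, not the main thread
  console.log('worker saw:', Atomics.load(workerData, 0)); // 42
}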
Mid
Answer
Because:
It pauses module loading
Blocks entire dependency tree
Delays server startup
Mid
Answer
Heavy JSON workloads
CPU-bound encryption
Massive microtask queues
Blocking libraries
Slow regex patterns
High memory churn
Unoptimized event emitters