SQL Server Interview Questions: Deadlocks & Indexing (2025)
30 questions available
Mid
Answer
Blocking = one session waiting on a resource.
Deadlock = circular wait (A waits for B, B waits for A).
SQL Server detects deadlocks using the lock monitor and kills the victim.
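One common way to inspect recent deadlocks is to pull the deadlock graphs captured by the built-in system_health Extended Events session; a sketch, assuming the default ring_buffer target is still enabled:

```sql
-- Extract xml_deadlock_report events from the system_health session.
SELECT XEvent.query('(event/data/value/deadlock)[1]') AS DeadlockGraph
FROM (
    SELECT CAST(st.target_data AS XML) AS TargetData
    FROM sys.dm_xe_session_targets st
    JOIN sys.dm_xe_sessions s ON s.address = st.event_session_address
    WHERE s.name = 'system_health'
      AND st.target_name = 'ring_buffer'
) AS Data
CROSS APPLY TargetData.nodes(
    'RingBufferTarget/event[@name="xml_deadlock_report"]') AS XEventData(XEvent);
```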
Mid
Answer
A key (bookmark) lookup occurs when a nonclustered index seek must fetch columns the index does not contain.
Eliminate it by creating a covering index with INCLUDE columns.
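A minimal sketch with a hypothetical Orders table: if a query seeks on CustomerID but also selects OrderDate and TotalDue, INCLUDE puts those columns in the index leaf so no lookup is needed.

```sql
-- Covers: SELECT OrderDate, TotalDue FROM dbo.Orders WHERE CustomerID = @id
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
ON dbo.Orders (CustomerID)          -- seek key
INCLUDE (OrderDate, TotalDue);      -- leaf-only columns: eliminates the lookup
```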
Mid
Answer
Because SQL Server must lock the index row and the base table row.
Different access paths cause circular lock dependencies → deadlocks.
Mid
Answer
An update (U) lock is acquired first and converted to exclusive (X) only when the row is actually modified.
Only one session can hold a U lock on a resource at a time, so the classic shared-to-exclusive conversion deadlock is avoided.
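The same protection can be requested explicitly for a read-then-write pattern; a sketch with a hypothetical Stock table:

```sql
BEGIN TRAN;

-- UPDLOCK makes the read take a U lock, so two sessions running this
-- batch concurrently serialize here instead of deadlocking on conversion.
SELECT Quantity
FROM dbo.Stock WITH (UPDLOCK)
WHERE ProductID = 42;

UPDATE dbo.Stock
SET Quantity = Quantity - 1
WHERE ProductID = 42;

COMMIT;
```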
Mid
Answer
NOLOCK scans do not acquire shared row locks, but they still take a schema stability (Sch-S) lock, so they are blocked by (and block) schema modification (Sch-M) locks.
They also permit dirty reads, so results can include uncommitted or inconsistent data.
Mid
Answer
Lock = logical protection for rows/pages.
Latch = physical protection for in-memory structures like buffer pages.
Latch contention = internal engine bottleneck.
Mid
Answer
Readers use the version store in tempdb instead of locking → no shared locks → fewer deadlocks.
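Both versioning modes are switched on at the database level; a sketch (the database name is illustrative):

```sql
-- READ COMMITTED statements read row versions instead of taking shared locks:
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;

-- Additionally allow SET TRANSACTION ISOLATION LEVEL SNAPSHOT per session:
ALTER DATABASE SalesDb SET ALLOW_SNAPSHOT_ISOLATION ON;
```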
Mid
Answer
It stores row versions in tempdb.
High write workloads → tempdb I/O pressure → version cleanup delays.
Mid
Answer
Deleted rows become ghosted and cleaned asynchronously.
Slow cleanup causes index bloat, fragmentation, and lock waits.
Mid
Answer
Seek = targeted lookup
Scan = entire range/table
SARGable = predicate can be used for seeks.
Non-SARGable examples: WHERE CONVERT(...) or WHERE SUBSTRING(...).
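The usual fix is to move the function off the column so the predicate becomes a seekable range; a sketch against a hypothetical Orders table:

```sql
-- Non-SARGable: CONVERT() on the column hides it from the index → scan.
SELECT OrderID FROM dbo.Orders
WHERE CONVERT(date, OrderDate) = '2025-01-15';

-- SARGable rewrite: a half-open range on the bare column → index seek.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '2025-01-15'
  AND OrderDate <  '2025-01-16';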
Mid
Answer
Page splits occur when inserting into a full page in a non-sequential index.
Fix:
Lower fill factor
Use sequential keys (IDENTITY / NEWSEQUENTIALID)
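A sketch of the fill factor fix, assuming a hypothetical non-sequential index on dbo.Orders:

```sql
-- Leave 20% free space in each leaf page so random inserts land in
-- existing gaps instead of forcing page splits.
ALTER INDEX IX_Orders_OrderNumber ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);
```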
Mid
Answer
Random insert pattern → fragmentation → page splits → hot latch contention.
Mid
Answer
SQL Server caches a plan based on the first parameter values it compiles with.
Fixes:
OPTION (RECOMPILE)
Optimize for hint
Use local variables
Query Store automatic tuning
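The two hint-based fixes look like this in a parameterized query (table and parameter names are illustrative):

```sql
-- Recompile for the actual parameter value on every execution:
SELECT OrderID FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (RECOMPILE);

-- Or keep one cached plan, built for average-density statistics
-- rather than the first sniffed value:
SELECT OrderID FROM dbo.Orders
WHERE CustomerID = @CustomerID
OPTION (OPTIMIZE FOR (@CustomerID UNKNOWN));
```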
Mid
Answer
It compares CPU/Duration of plans and forces the better plan automatically.
Mid
Answer
SQL Server writes log records to the transaction log before modifying data pages.
Ensures durability and crash recovery.
Mid
Answer
Before SQL Server 2019, a scalar UDF forced row-by-row execution.
SQL Server 2019+ can inline the UDF into the calling query for set-based execution (scalar UDF inlining).
Mid
Answer
Occurs when the memory grant is insufficient → spill to tempdb → heavy I/O → slow query.
Mid
Answer
Forwarded record = a heap update that moves the row → a forwarding pointer stays at the original location.
Row overflow = variable-length columns stored off-row when the row exceeds the 8 KB page limit.
Mid
Answer
Heaps suffer from:
forwarded records
random access
no guaranteed order
Clustered indexes provide stable structure.
Mid
Answer
When SQL Server exceeds the lock threshold (~5,000 locks held by one statement on one object) → escalates row/page locks to a table lock → blocking.
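Escalation behavior can be tuned per table; a sketch with a hypothetical Orders table:

```sql
-- Never escalate to a table lock (locks stay at row/page granularity):
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = DISABLE);

-- Default: escalate to the partition level on partitioned tables,
-- otherwise to the table level:
ALTER TABLE dbo.Orders SET (LOCK_ESCALATION = AUTO);
```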
Mid
Answer
Smaller index size → fewer pages → fewer locks → fewer conflicting write paths.
Filtered indexes also allow highly selective optimizations for partial data sets.
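A minimal filtered-index sketch, assuming a hypothetical Orders table where only a small fraction of rows are still open:

```sql
-- Indexes only the 'Open' subset: smaller, cheaper to maintain,
-- and untouched by writes to closed orders.
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (CustomerID, OrderDate)
WHERE Status = 'Open';
```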
Mid
Answer
Each index requires:
extra write
extra latch
extra log record.
More NC indexes = slower writes and higher contention.
Mid
Answer
SQL Server picks the session with the lowest rollback cost.
Configurable using:
SET DEADLOCK_PRIORITY LOW | NORMAL | HIGH.
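For example, a low-value batch job can volunteer itself as the victim so interactive sessions survive:

```sql
SET DEADLOCK_PRIORITY LOW;   -- this session loses ties in deadlock victim selection
-- ... long-running maintenance work ...
SET DEADLOCK_PRIORITY NORMAL;
```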
Mid
Answer
Writers still take locks under RCSI; readers follow version chains instead of blocking.
High version store pressure → tempdb allocation contention → indirect blocking and deadlocks.
Mid
Answer
RID = row in heap
Key Lock = row in index
Page Lock = entire 8KB page
HoBT = index partition or B-tree substructure.
Mid
Answer
A read via a bookmark lookup locks the nonclustered index row first, then the base table row; a concurrent update takes locks on the same rows in the opposite order.
The two access paths can form a circular wait → deadlock.
Mid
Answer
MERGE accesses source and target tables in unpredictable order.
Competing locks → higher deadlock probability vs. separate INSERT/UPDATE/DELETE operations.
Mid
Answer
Operations target individual partitions.
Reduces hot spots, lock contention, and access collisions.
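A minimal monthly partitioning sketch (all object names are illustrative):

```sql
-- Boundary values split rows into monthly partitions:
CREATE PARTITION FUNCTION pf_OrderMonth (date)
AS RANGE RIGHT FOR VALUES ('2025-01-01', '2025-02-01', '2025-03-01');

-- Map every partition to a filegroup (all to PRIMARY here for simplicity):
CREATE PARTITION SCHEME ps_OrderMonth
AS PARTITION pf_OrderMonth ALL TO ([PRIMARY]);

-- The table is placed on the scheme, partitioned by OrderDate:
CREATE TABLE dbo.OrdersPartitioned (
    OrderID   int  NOT NULL,
    OrderDate date NOT NULL
) ON ps_OrderMonth (OrderDate);
```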
Mid
Answer
Optimizer suggests missing index for a single query.
Adding blindly causes:
duplicate indexes
high write overhead
plan regression.
Always evaluate before creating.
Mid
Answer
A Bw-tree supports lock-free/latch-free concurrency by applying delta records with compare-and-swap instead of latching pages.
It is used in SQL Server Hekaton (In-Memory OLTP) range indexes for extremely high throughput.