
Senior MS SQL Interview Questions

Curated senior-level MS SQL interview questions for developers targeting senior roles. 40 questions available.

MS SQL Interview Questions & Answers

Welcome to our comprehensive collection of MS SQL interview questions and answers. This page contains expertly curated interview questions covering all aspects of MS SQL, from fundamental concepts to advanced topics. Whether you're preparing for an entry-level position or a senior role, you'll find questions tailored to your experience level.

Our MS SQL interview questions are designed to help you:

  • Understand core concepts and best practices in MS SQL
  • Prepare for technical interviews at all experience levels
  • Master both theoretical knowledge and practical application
  • Build confidence for your next MS SQL interview

Each question includes detailed answers and explanations to help you understand not just what the answer is, but why it's correct. We cover topics ranging from basic MS SQL concepts to advanced scenarios that you might encounter in senior-level interviews.

Use the filters below to find questions by difficulty level (Entry, Junior, Mid, Senior, Expert) or focus specifically on code challenges. Each question is carefully crafted to reflect real-world interview scenarios you'll encounter at top tech companies, startups, and MNCs.

Questions

Q1:

What are Stored Procedures and why are they preferred over sending raw SQL from applications?

Senior

Answer

Stored procedures are precompiled program units that live inside SQL Server. Instead of sending raw SQL text for every request, the application sends only the procedure name and parameters. SQL Server then executes pre-validated logic using a cached execution plan.

Key advantages over raw ad-hoc SQL:

  • Reduced network traffic: Only parameter values are sent, not large query strings.
  • Plan reuse: SQL Server can cache and reuse execution plans for procedures, reducing compilation overhead and stabilizing performance.
  • Centralized business logic: Data-related rules (validation, transformations, audit operations) are centralized and versioned at the database level, simplifying deployments and multi-application integration.
  • Security and least-privilege: Applications can be granted EXECUTE rights on procedures instead of direct table access, improving security boundaries and auditability.
  • Reduced SQL injection risk: The SQL logic is fixed inside the procedure; parameters are bound, greatly decreasing injection surfaces compared to concatenated SQL strings.
Quick Summary: Stored procedures send a single call over the network instead of many SQL strings — fewer round trips. Plans are compiled and cached on first run — subsequent calls skip parsing and optimization. Security is cleaner: grant EXECUTE on the procedure, not SELECT/INSERT on the tables. Logic stays in the database layer.
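The points above can be sketched with a minimal procedure. Table, column, and role names (dbo.Orders, CustomerId, AppRole) are illustrative, not from the source:

```sql
-- Hypothetical schema: dbo.Orders(OrderId, CustomerId, OrderDate, Amount)
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT,
    @FromDate   DATE
AS
BEGIN
    SET NOCOUNT ON;  -- suppress row-count chatter over the wire

    SELECT OrderId, OrderDate, Amount
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
      AND OrderDate >= @FromDate;  -- parameters are bound, never concatenated
END;
GO

-- Least-privilege: the app role needs only EXECUTE, not SELECT on dbo.Orders
GRANT EXECUTE ON dbo.GetOrdersByCustomer TO AppRole;
```

The application then issues one short call, e.g. `EXEC dbo.GetOrdersByCustomer @CustomerId = 42, @FromDate = '2024-01-01';`, instead of shipping a full query string per request.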
Q2:

Explain the difference between Stored Procedures and Functions in SQL Server.

Senior

Answer

Stored procedures and functions both encapsulate reusable logic, but they serve different purposes and behave differently inside SQL Server.

Stored procedures:

  • Primarily designed to perform actions: DML operations, transaction control, administrative tasks.
  • Can return multiple result sets and output parameters.
  • Cannot be directly used in SELECT, WHERE, or JOIN clauses.
  • Commonly used as API endpoints from the application layer.

Functions:

  • Intended for computation and value-returning logic.
  • Scalar functions return a single value; table-valued functions return a rowset.
  • Can be used inside SELECT, WHERE, and JOIN clauses like expressions or tables.
  • Have tighter restrictions: no explicit transaction control; most cannot perform data-modifying side effects.

In short: procedures orchestrate operations, while functions compute values that integrate with queries.

Quick Summary: Stored procedures can modify data, use transactions, output parameters, and don't have to return a result set. Functions must return a value, can't modify data (except table variables), and can be called inline in a SELECT. Inline TVFs act like parameterized views — they compose well; multi-statement TVFs often don't.
Q3:

Explain Scalar Functions vs Inline Table-Valued Functions vs Multi-Statement Table-Valued Functions.

Senior

Answer

SQL Server supports several function types, each with distinct performance and usage characteristics.

Scalar functions:

  • Return a single scalar value (e.g., INT, VARCHAR).
  • Evaluated per row when used in queries, often resulting in RBAR (Row-By-Agonizing-Row) execution.
  • Act as black boxes to the optimizer, frequently preventing parallelism and leading to severe performance issues.

Inline table-valued functions (iTVFs):

  • Return a table defined by a single RETURN (SELECT ...) statement.
  • Logically similar to parameterized views.
  • Fully inlined into the calling query, enabling the optimizer to generate efficient set-based plans with good cardinality estimates.
  • Usually the best-performing function type for set-based logic.

Multi-statement table-valued functions (mTVFs):

  • Declare an internal table variable and populate it across multiple statements.
  • By default, SQL Server assumes a fixed row estimate (historically 1 row), often leading to poor plans.
  • Limited statistics and cardinality information, frequently causing spills and suboptimal joins.

For performance-sensitive code, prefer inline TVFs over scalar or multi-statement TVFs whenever possible.

Quick Summary: Scalar: returns one value, called per row — can severely degrade performance in queries. Inline TVF: returns a table, SQL Server can inline it into the calling query and optimize it like a view — very efficient. Multi-statement TVF: builds a table row by row — opaque to the optimizer, often slow on large data.
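A minimal sketch of the three function shapes, using assumed dbo.Orders and dbo.OrderLines tables:

```sql
-- Scalar UDF: returns one value, evaluated per row when used in a query
CREATE FUNCTION dbo.fn_OrderTotal (@OrderId INT)
RETURNS DECIMAL(18,2)
AS
BEGIN
    RETURN (SELECT SUM(Quantity * UnitPrice)
            FROM dbo.OrderLines
            WHERE OrderId = @OrderId);
END;
GO

-- Inline TVF: a single RETURN (SELECT ...); inlined like a parameterized view
CREATE FUNCTION dbo.fn_OrdersSince (@FromDate DATE)
RETURNS TABLE
AS
RETURN (SELECT OrderId, CustomerId, OrderDate
        FROM dbo.Orders
        WHERE OrderDate >= @FromDate);
GO

-- Multi-statement TVF: fills an internal table variable; opaque to the optimizer
CREATE FUNCTION dbo.fn_TopCustomers (@MinTotal DECIMAL(18,2))
RETURNS @Result TABLE (CustomerId INT, Total DECIMAL(18,2))
AS
BEGIN
    INSERT INTO @Result (CustomerId, Total)
    SELECT CustomerId, SUM(Amount)
    FROM dbo.Orders
    GROUP BY CustomerId
    HAVING SUM(Amount) >= @MinTotal;
    RETURN;
END;
```

Only the inline TVF body is merged into the calling query's plan; the other two are executed as opaque units.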
Q4:

Why can Scalar Functions severely degrade performance?

Senior

Answer

Scalar functions often look clean and reusable but can be hidden performance killers in production workloads.

Reasons they degrade performance:

  • They are executed once per row in a result set, turning set-based operations into row-by-row computation.
  • The optimizer cannot see inside the function body, treating it as a black box. This prevents many optimizations and accurate cardinality estimates.
  • In many SQL Server versions, the presence of scalar functions in the SELECT or WHERE list can disable parallelism, forcing single-threaded plans for otherwise parallelizable queries.
  • They frequently increase CPU usage and query duration dramatically on large datasets.

Mitigations include:

  • Refactoring to inline TVFs or pure set-based SQL.
  • Replacing scalar functions with computed columns (possibly indexed) when appropriate.
  • Inlining logic into the main query where feasible.
Quick Summary: A scalar UDF called in a WHERE or SELECT clause runs once per row — on a million-row table, that's a million individual function calls. The optimizer often can't see inside and treats it as a black box, blocking parallelism and estimates. Replace with inline TVFs or inlined expressions whenever possible for big tables.
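The computed-column mitigation mentioned above can look like this; the table, column, and 20% rate are assumed for illustration:

```sql
-- Instead of calling a scalar UDF per row to derive a value,
-- persist it as a computed column and index it
ALTER TABLE dbo.Orders
    ADD TotalWithTax AS (Amount * 1.20) PERSISTED;  -- hypothetical tax rate

CREATE INDEX IX_Orders_TotalWithTax
    ON dbo.Orders (TotalWithTax);

-- The predicate can now seek the index instead of invoking a function per row
SELECT OrderId
FROM dbo.Orders
WHERE TotalWithTax > 1000;
```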
Q5:

What are Views? Explain their benefits and limitations.

Senior

Answer

A view is a named, virtual table defined by a SELECT query. It does not store data itself (unless indexed) but presents a reusable query abstraction.

Benefits:

  • Abstraction and simplification: Hides complex joins and expressions behind a simple interface.
  • Security: Exposes only selected columns/rows while hiding underlying schemas and tables.
  • Reuse: Centralizes business logic or filter logic so that many queries can benefit from a single definition.
  • Schema evolution: Application code can query the view even if underlying tables change, as long as the view's contract is preserved.

Limitations:

  • Views can become stacked (views on views), leading to overly complex, hard-to-tune execution plans.
  • Not all views are updatable; complex joins, aggregates, and DISTINCT can prevent direct DML.
  • They do not inherently improve performance unless combined with indexed views or used to encapsulate optimal query patterns.
Quick Summary: Views simplify complex queries by saving them as named objects. They enforce consistent access patterns and can restrict column-level access. Limitations: non-indexed views don't store data (they run on every access), can't use ORDER BY without TOP, and add a layer of indirection that occasionally confuses the optimizer.
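A small sketch of the abstraction benefit, with assumed dbo.Customers and dbo.Orders tables:

```sql
-- The join and the IsActive filter are hidden behind one named object
CREATE VIEW dbo.vw_ActiveCustomerOrders
AS
SELECT c.CustomerId, c.Name, o.OrderId, o.OrderDate, o.Amount
FROM dbo.Customers AS c
JOIN dbo.Orders    AS o ON o.CustomerId = c.CustomerId
WHERE c.IsActive = 1;
GO

-- Callers query a simple "table" and inherit the consistent access pattern
SELECT Name, SUM(Amount) AS Total
FROM dbo.vw_ActiveCustomerOrders
GROUP BY Name;
```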
Q6:

What are Indexed Views and when should you use them?

Senior

Answer

Indexed views materialize the result of a view and maintain it on disk like a physical table. They are backed by a clustered index and optionally additional nonclustered indexes.

When to use:

  • Heavy, repetitive aggregations or joins over large datasets (e.g., reporting queries over transactional tables).
  • Scenarios where read performance is critical and data changes are relatively moderate.
  • To precompute expensive expressions and reduce CPU usage under analytic workloads.

Trade-offs:

  • Every insert/update/delete on the underlying tables must update the indexed view, increasing DML overhead.
  • Strict requirements apply (e.g., schema binding, deterministic expressions).
  • Can complicate troubleshooting if developers are unaware of their presence.

They are powerful for read-intensive workloads but should be used selectively and measured carefully in write-heavy systems.

Quick Summary: Indexed views (materialized views) physically store the query result on disk with an index. Reads are instant — no recomputation. But writes to the base tables must maintain the view, adding overhead. Best for aggregation queries that run frequently on slowly-changing data. Requires SCHEMABINDING and strict query form.
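The strict query form mentioned above can be sketched as follows (object names are assumptions). An aggregated indexed view must use SCHEMABINDING, two-part names, and COUNT_BIG(*):

```sql
CREATE VIEW dbo.vw_SalesByCustomer
WITH SCHEMABINDING
AS
SELECT CustomerId,
       SUM(Amount)  AS TotalSales,
       COUNT_BIG(*) AS RowCnt      -- mandatory when the view aggregates
FROM dbo.Orders
GROUP BY CustomerId;
GO

-- The unique clustered index is what materializes the view on disk
CREATE UNIQUE CLUSTERED INDEX IX_vw_SalesByCustomer
    ON dbo.vw_SalesByCustomer (CustomerId);
```

From this point every write to dbo.Orders also maintains the view's index, which is the DML overhead the trade-off list warns about.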
Q7:

What are Triggers? Explain AFTER vs INSTEAD OF Triggers.

Senior

Answer

Triggers are special stored procedures that automatically execute in response to DML events (INSERT, UPDATE, DELETE) or certain DDL operations.

AFTER triggers:

  • Fire after the base operation logically succeeds but before the transaction commits.
  • Often used for auditing, logging, or enforcing complex constraints that span multiple tables.
  • Run within the transaction, so failures can cause rollbacks and increase latency.

INSTEAD OF triggers:

  • Fire instead of the original DML operation.
  • Commonly used on views to simulate complex update logic or route changes to multiple underlying tables.
  • Give full control over how changes are applied.

Both types must be designed carefully to avoid recursion, hidden performance issues, and unexpected side effects.

Quick Summary: AFTER trigger fires after the DML completes — used for auditing, cascading changes. INSTEAD OF trigger replaces the operation entirely — used on views that can't be directly modified, or to intercept and transform inserts/updates. Both have access to inserted and deleted virtual tables showing what changed.
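A minimal AFTER trigger for the auditing use case, joining the inserted and deleted virtual tables; the audit table and columns are hypothetical:

```sql
CREATE TRIGGER trg_Orders_Audit
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Set-based: one INSERT covers multi-row updates correctly
    INSERT INTO dbo.OrdersAudit (OrderId, OldAmount, NewAmount, ChangedAt)
    SELECT d.OrderId, d.Amount, i.Amount, SYSUTCDATETIME()
    FROM deleted  AS d
    JOIN inserted AS i ON i.OrderId = d.OrderId
    WHERE i.Amount <> d.Amount;   -- audit only rows whose amount changed
END;
```

Note the join on the key rather than assuming a single row: triggers fire once per statement, not once per row.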
Q8:

Why are Triggers generally not preferred in large systems?

Senior

Answer

Triggers are powerful but can introduce hidden complexity and performance problems in large systems.

Key concerns:

  • Hidden behavior: Business logic executes implicitly on DML, making the system harder to reason about. Developers may not realize why certain actions occur.
  • Chained effects: Triggers can call other triggers, causing cascading side effects that are difficult to debug and test.
  • Performance impact: Trigger logic runs inside the transaction. Long-running triggers increase lock durations and contention.
  • Maintainability: Logic spread across triggers and procedures leads to fragmented business rules and operational risk.

For these reasons, many teams prefer stored procedures, constraints, and explicit application-level logic over heavy trigger usage, reserving triggers for focused, unavoidable use cases (e.g., auditing).

Quick Summary: Triggers are invisible — they fire silently and can cause unexpected side effects. They slow down every DML operation on the table, even when you don't need the trigger behavior. They complicate debugging and make data changes non-obvious. Modern systems prefer explicit application logic or outbox patterns instead.
Q9:

Explain Window Functions and why they are essential in modern SQL.

Senior

Answer

Window functions (e.g., ROW_NUMBER(), RANK(), SUM() OVER (...)) allow calculations across sets of rows without collapsing them into a single result like GROUP BY does.

Why they are essential:

  • Enable ranking, running totals, moving averages, percentiles, and gap/density analysis in a single pass.
  • Reduce the need for self-joins and correlated subqueries, often resulting in cleaner and faster plans.
  • Can be combined with PARTITION BY and ORDER BY to support rich analytical queries directly in OLTP or reporting databases.
  • Help keep logic set-based and push computation into the database layer where it is highly optimized.
Quick Summary: Window functions operate across a set of rows related to the current row without collapsing them into groups. ROW_NUMBER, RANK, DENSE_RANK for ranking. LAG/LEAD for accessing neighboring rows. SUM/AVG OVER() for running totals. They replace complex self-joins and correlated subqueries with clean, readable SQL.
Q10:

Explain PARTITION BY and ORDER BY within Window Functions.

Senior

Answer

Window functions operate over a logical window of rows, defined by PARTITION BY and ORDER BY clauses in the OVER() expression.

PARTITION BY:

  • Divides the result set into groups (partitions) for independent calculations, similar to grouping in analytics.
  • Example: SUM(SalesAmount) OVER (PARTITION BY CustomerId) gives total sales per customer.

ORDER BY:

  • Defines the sequence of rows within each partition.
  • Required for ranking and running calculations like ROW_NUMBER(), cumulative sums, and moving averages.

In summary, PARTITION BY defines the scope and ORDER BY defines the sequence of the window.

Quick Summary: PARTITION BY divides rows into groups for the window function calculation — like GROUP BY but without collapsing rows. ORDER BY inside the window defines the order of rows within each partition for ranking or running calculations. Together: ROW_NUMBER() OVER (PARTITION BY Dept ORDER BY Salary) ranks employees within each department.
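Both clauses together, in one query over an assumed dbo.Employees table:

```sql
SELECT
    Dept,
    EmployeeName,
    Salary,
    -- PARTITION BY scopes the window to one department;
    -- ORDER BY sequences rows inside it
    ROW_NUMBER() OVER (PARTITION BY Dept
                       ORDER BY Salary DESC) AS RankInDept,
    -- Running total of salaries within each department
    SUM(Salary)  OVER (PARTITION BY Dept
                       ORDER BY Salary DESC
                       ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM dbo.Employees;
```

No GROUP BY is involved, so every employee row survives, each carrying its per-department rank and running total.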
Q11:

What is Table Partitioning? How does SQL Server implement it?

Senior

Answer

Table partitioning horizontally splits a large table into smaller, manageable partitions based on a partition key (often date or range-based IDs). Logically it remains a single table, but physically data is separated.

SQL Server implementation:

  • Partition function: Defines boundary points for key ranges.
  • Partition scheme: Maps partitions to one or more filegroups.
  • The table or index is created ON the partition scheme, distributing rows across partitions based on the key.

Benefits:

  • Maintenance operations (index rebuilds, statistics, archiving) can be targeted at specific partitions.
  • Supports sliding window load/archive patterns via partition SWITCH operations.
  • Can improve query performance via partition elimination, especially for time-sliced workloads.

Partitioning is not automatically faster; it must align with query predicates and maintenance strategies to be effective.

Quick Summary: Table partitioning divides a large table into multiple physical filegroups based on a partition function (range of values) and partition scheme (maps ranges to filegroups). SQL Server can eliminate partitions that don't match the query filter. Makes archiving simple: switch old partitions out without touching other data.
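The three-step implementation described above, sketched with illustrative names and a single PRIMARY filegroup:

```sql
-- 1) Partition function: monthly boundary points on a DATE key
CREATE PARTITION FUNCTION pf_OrdersByMonth (DATE)
AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

-- 2) Partition scheme: map every partition to a filegroup
CREATE PARTITION SCHEME ps_OrdersByMonth
AS PARTITION pf_OrdersByMonth ALL TO ([PRIMARY]);

-- 3) Create the table ON the scheme; the key must include the partition column
CREATE TABLE dbo.Orders
(
    OrderId   INT           NOT NULL,
    OrderDate DATE          NOT NULL,
    Amount    DECIMAL(18,2) NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderId, OrderDate)
) ON ps_OrdersByMonth (OrderDate);
```

With RANGE RIGHT, each boundary value belongs to the partition on its right, so '2024-01-15' lands in the January partition.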
Q12:

Explain Partition Elimination with an example scenario.

Senior

Answer

Partition elimination occurs when SQL Server restricts I/O to only those partitions that might contain relevant rows, instead of scanning all partitions.

Example: A table partitioned by OrderDate by month. A query filtered on OrderDate BETWEEN '2024-01-01' AND '2024-01-31' can read only the January partition if:

  • The filter is directly on the partition key.
  • There are no non-SARGable expressions on the partition column.
  • The data types and collation match exactly.

If functions like CONVERT() or different data types are used on the partition column, partition elimination may fail and all partitions may be scanned, losing the performance benefit.

Quick Summary: Partition elimination is when SQL Server skips partitions that can't contain matching rows. A query filtering WHERE OrderDate BETWEEN '2024-01-01' AND '2024-03-31' on a table partitioned by month only scans Q1 partitions and ignores the rest. This dramatically reduces I/O on large, date-partitioned tables.
Q13:

What are the typical causes of poor execution plan performance?

Senior

Answer

Poor execution plans are usually symptoms of deeper issues in schema design, statistics, or query patterns.

Common causes:

  • Missing or inappropriate indexes: Forcing table scans or expensive lookups.
  • Stale or missing statistics: Leading to incorrect row estimates and wrong join strategies.
  • Parameter sniffing: Plan optimized for one parameter value, reused for others with different data distributions.
  • Scalar functions and multi-statement TVFs: Preventing optimization and parallelism.
  • Complex views over views: Obscure actual data access and create bloated plans.
  • Implicit conversions: Causing non-SARGable predicates or index misses.
  • RBAR patterns (cursors, loops): Neglecting set-based approaches.

Effective tuning often involves query simplification, better indexing, and statistics maintenance rather than just tweaking server settings.

Quick Summary: Common causes of poor plans: stale statistics (wrong row estimates), parameter sniffing (plan built for atypical values), missing indexes (forcing table scans), implicit conversions (can't use indexes), SARGability issues (function on column), and lock escalation or blocking distorting plan behavior.
Q14:

What is Parameter Sniffing and how do you handle it?

Senior

Answer

Parameter sniffing occurs when SQL Server compiles a plan using the initial parameter values it sees, then reuses that plan for subsequent executions. If data distribution is skewed, one plan may not fit all parameter scenarios.

Symptoms: Some calls are lightning fast, others very slow, using the same procedure and query shape.

Handling strategies:

  • Use OPTION (RECOMPILE) for highly skewed queries where compilation cost is acceptable.
  • Use OPTIMIZE FOR UNKNOWN or OPTIMIZE FOR (@param = ...) hints to choose more robust plans.
  • Capture parameters in local variables inside the procedure to discourage sniffing, so plans are built from average density statistics rather than one specific sniffed value.
  • Split logic into separate procedures for different parameter ranges if patterns are distinct.
  • Use Query Store to force stable plans when appropriate.
Quick Summary: Parameter sniffing caches a plan based on first-call parameters. If subsequent calls use different values, the cached plan may be inefficient. Fixes: OPTION(RECOMPILE) forces fresh compilation each time; OPTIMIZE FOR UNKNOWN makes the optimizer ignore sniffed values; local variables break plan reuse entirely.
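The two hint-based fixes look like this in a query; the table and parameter are illustrative:

```sql
DECLARE @CustomerId INT = 42;  -- hypothetical skewed parameter

-- Recompile per call: always optimizes for the actual value (costs CPU)
SELECT OrderId, Amount
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

-- Ignore the sniffed value: build one plan from average density
SELECT OrderId, Amount
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR UNKNOWN);
```

RECOMPILE trades compilation cost for plan accuracy; OPTIMIZE FOR UNKNOWN trades peak performance for stability across skewed values.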
Q15:

Explain Execution Plan Caching and Reuse.

Senior

Answer

SQL Server uses a plan cache to store compiled execution plans so that subsequent executions of the same (or parameterized) queries can skip the compilation phase.

Benefits:

  • Reduces CPU overhead from repeated compilations.
  • Improves response time for frequently executed queries and stored procedures.

Challenges:

  • Poorly parameterized or ad-hoc queries can cause plan cache bloat, with many single-use plans.
  • Parameter sniffing issues stem from plan reuse across different parameter values.
  • Schema changes or statistics updates can invalidate plans, causing recompilation spikes.

Best practice is to use parameterized queries, monitor plan cache size and reuse patterns, and leverage Query Store to manage problematic plans.

Quick Summary: SQL Server caches compiled execution plans in the plan cache keyed by the query text hash. Matching queries reuse the plan without reparsing and reoptimizing. Plan reuse saves CPU but can backfire with parameter sniffing. sp_executesql with parameters enables safe plan reuse better than string concatenation.
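The sp_executesql pattern mentioned above, with an assumed dbo.Orders table:

```sql
-- Parameterized execution: one cached plan, reused across values
DECLARE @sql NVARCHAR(200) =
    N'SELECT OrderId, Amount FROM dbo.Orders WHERE CustomerId = @Cust;';

EXEC sp_executesql @sql, N'@Cust INT', @Cust = 42;
EXEC sp_executesql @sql, N'@Cust INT', @Cust = 99;  -- same plan reused

-- Contrast: concatenating literals produces a new single-use plan per value,
-- bloating the plan cache:
-- EXEC (N'SELECT OrderId, Amount FROM dbo.Orders WHERE CustomerId = 42;');
```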
Q16:

What is Schema Binding and why is it important?

Senior

Answer

Schema binding associates a view or function tightly with the underlying schema so that the referenced objects cannot be modified in incompatible ways without first changing the bound object.

Importance:

  • Required for indexed views and some computed columns, ensuring structural stability.
  • Prevents accidental changes (e.g., dropping or altering columns) that would silently break dependent logic.
  • Helps enforce contracts between application code, views, and functions.

In high-governance environments, schema binding is a tool to ensure that refactoring is deliberate and fully impact-assessed.

Quick Summary: SCHEMABINDING ties a function or view to the objects it references — you can't drop or alter those objects without first modifying the dependent object. Required for indexed views. It also enables the optimizer to inline simple functions more aggressively and prevents accidental schema changes from breaking dependencies.
Q17:

Explain the concept of set-based operations vs row-by-row operations.

Senior

Answer

SQL is fundamentally a set-based language: it is designed to operate on collections of rows at once, not one row at a time.

Set-based operations:

  • Leverage the optimizer to choose efficient algorithms (hash joins, merges, batch operations).
  • Use fewer logical and physical reads for bulk operations.
  • Scale more gracefully as data volume grows.

Row-by-row (RBAR) operations:

  • Implement logic via cursors or loops, processing one row at a time.
  • Usually lead to excessive context switching, locking overhead, and long runtimes.
  • Only justified for very complex, inherently procedural business rules.

Senior-level SQL design focuses on transforming requirements into set-based patterns whenever possible, often with window functions, joins, and properly designed queries.

Quick Summary: Set-based operations (UPDATE, INSERT, DELETE on sets of rows) leverage SQL Server's ability to process many rows in one pass using optimized algorithms and parallelism. Row-by-row processing (cursors, WHILE loops) forces sequential iteration — slow, can't parallelize, and often orders of magnitude slower on large data.
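The contrast can be made concrete with one hypothetical status update, written both ways:

```sql
-- RBAR: a cursor applies the change one row at a time
DECLARE @Id INT;
DECLARE cur CURSOR FOR
    SELECT OrderId FROM dbo.Orders WHERE Status = 'Pending';
OPEN cur;
FETCH NEXT FROM cur INTO @Id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderId = @Id;
    FETCH NEXT FROM cur INTO @Id;
END;
CLOSE cur;
DEALLOCATE cur;

-- Set-based: the same logical result in one optimized pass
UPDATE dbo.Orders
SET Status = 'Processed'
WHERE Status = 'Pending';
```

The single UPDATE lets the optimizer pick an access path once and apply it to the whole set, instead of paying per-row statement overhead.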
Q18:

What are Cursors and why are they usually discouraged?

Senior

Answer

Cursors are database objects that allow row-by-row traversal of a result set, similar to iterators in procedural languages.

Reasons they are discouraged:

  • They process data one row at a time, leading to poor performance on large sets.
  • They often hold locks for a long time, reducing concurrency.
  • They introduce complex, procedural logic that is harder to maintain and test.
  • They typically have higher memory and tempdb overhead compared to set-based alternatives.

Cursors should be a last resort, used only when set-based solutions are impractical or impossible. In many cases, window functions, MERGE statements, or carefully written set-based updates can replace cursor logic.

Quick Summary: Cursors iterate over result sets one row at a time. For large datasets this is extremely slow — it bypasses SQL Server's set-based engine. Most cursor logic can be rewritten with set-based updates, window functions, or CTEs. Use cursors only when row-by-row processing is genuinely unavoidable, like calling a stored procedure per row.
Q19:

Explain the difference between Physical Reads, Logical Reads, and Page Life Expectancy.

Senior

Answer

These metrics help diagnose I/O and memory performance in SQL Server.

Logical reads: The number of 8KB pages read from the buffer cache (memory). High logical reads can indicate inefficient queries or missing indexes.

Physical reads: Pages read from disk because they were not found in the buffer cache. Physical I/O is orders of magnitude slower than memory access.

Page Life Expectancy (PLE): An indicator of how long pages stay in the buffer cache before being evicted. A consistently low PLE suggests memory pressure or inefficient queries repeatedly flushing the cache.

Senior engineers use these metrics together to determine whether to focus on query tuning, indexing, or adding memory/storage capacity.

Quick Summary: Physical Read: SQL Server fetched the page from disk (slow). Logical Read: page was found in the buffer pool (fast, in-memory). Logical reads measure how much work a query did in memory — lower is better. Page Life Expectancy (PLE) shows how long pages stay in the buffer pool — low PLE means memory pressure and frequent disk reads.
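Logical and physical reads for a specific query can be observed per session; the query and the numbers in the comment are purely illustrative:

```sql
SET STATISTICS IO ON;

SELECT CustomerId, SUM(Amount) AS Total
FROM dbo.Orders
GROUP BY CustomerId;

-- The Messages tab then reports per-table I/O along the lines of:
-- Table 'Orders'. Scan count 1, logical reads 1243, physical reads 12, ...
-- Tuning that drops logical reads usually drops duration and CPU with it.
```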
Q20:

What is SARGability and why is it critical for performance?

Senior

Answer

SARGability (Search ARGument-ability) describes whether a predicate can efficiently use an index to seek rows instead of scanning.

SARGable predicates:

  • Simple comparisons like Column = @Value, Column >= @Start AND Column <= @End.
  • Allow the optimizer to perform index seeks and range scans.

Non-SARGable patterns:

  • Applying functions to columns: WHERE LOWER(Col) = 'abc'.
  • Expressions on the left side: WHERE Col + 1 = 10.
  • Implicit conversions that change the column's data type.
  • Leading wildcards: LIKE '%abc'.

Ensuring predicates are SARGable is one of the most impactful techniques in query tuning: it allows indexes to be used effectively, minimizing reads and dramatically improving performance.

Quick Summary: SARGable (Search ARGument ABLE) expressions let SQL Server use an index to seek matching rows. Non-SARGable expressions wrap the column in a function or implicit conversion — forcing a scan. SARGable: WHERE LastName = 'Smith'. Non-SARGable: WHERE UPPER(LastName) = 'SMITH' or WHERE CAST(Id AS varchar) = '123'.
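A common rewrite from a non-SARGable predicate to a SARGable range, assuming an index on OrderDate:

```sql
-- Non-SARGable: the function wraps the column, forcing a scan
SELECT OrderId
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2024;

-- SARGable rewrite: bare column against a half-open range -> index seek
SELECT OrderId
FROM dbo.Orders
WHERE OrderDate >= '2024-01-01'
  AND OrderDate <  '2025-01-01';
```

The half-open range (`>= start`, `< next start`) is preferable to BETWEEN here because it also handles time components on the upper bound correctly.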
Q21:

What Is TempDB and Why Is It Critical for SQL Server Performance?

Senior

Answer

TempDB is SQL Server’s global workspace used by all users and internal operations. It is essential because it handles:

  • Sorting operations (ORDER BY, GROUP BY)
  • Hash joins and aggregations
  • Row versioning for snapshot isolation
  • Temporary tables and table variables
  • Intermediate spill data during execution
  • Cursors and internal worktables

If TempDB becomes slow or suffers contention, the entire SQL Server instance slows down. Proper sizing, fast storage, and multiple data files are critical for performance.

Quick Summary: TempDB is a shared system database recreated on SQL Server restart. It holds temp tables, table variables, sort and hash spill buffers, row version store for snapshot isolation, and DBCC intermediate data. All user sessions share it — heavy concurrent use creates allocation contention on specific system pages.
Q22:

What Is TempDB Contention and What Causes It?

Senior

Answer

TempDB contention occurs when multiple threads compete for the same allocation pages (PFS, GAM, SGAM) or metadata structures.

Typical causes include:

  • Too few TempDB data files
  • Heavy use of temp tables/table variables
  • Large sorting or hashing operations
  • High row versioning pressure

Fixes: increase file count, equal-size files, optimize workload, reduce spills.

Quick Summary: TempDB contention typically hits PFS, GAM, and SGAM pages — special allocation pages that every new temp object allocation touches. In high-concurrency workloads, many sessions compete for the same pages. Fix: add multiple equally sized TempDB data files (one per CPU core, up to 8); on older versions enable trace flag 1118, whose uniform-extent behavior is the default from SQL Server 2016 onward.
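Adding an extra, equally sized data file looks like this; the file path and sizes are placeholders for your environment:

```sql
-- Spread allocation contention across more files (keep them the same size
-- so SQL Server's proportional-fill algorithm uses them evenly)
ALTER DATABASE tempdb
ADD FILE
(
    NAME = tempdev2,
    FILENAME = 'T:\TempDB\tempdev2.ndf',  -- illustrative path
    SIZE = 8GB,
    FILEGROWTH = 512MB
);
-- Repeat up to one file per core (commonly capped at 8 files)
```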
Q23:

What Are Data Pages and Index Pages in SQL Server?

Senior

Answer

SQL Server stores all data in 8 KB pages.

  • Data pages store full table rows.
  • Index pages store B-tree navigation structures and pointers.

Understanding pages explains index fragmentation, logical reads, and physical I/O behavior.

Quick Summary: SQL Server stores data in 8KB pages. Data pages hold actual table rows. Index pages hold B-tree index entries at each level — leaf pages point to actual data rows (or contain them for clustered indexes). Each page belongs to one object and is managed by the buffer pool for caching.
Q24:

What Is the Buffer Pool and How Does SQL Server Use It?

Senior

Answer

The buffer pool is SQL Server’s memory area used for caching data and index pages.

  • Logical reads come from memory.
  • Physical reads occur only when data is missing in cache.
  • High buffer reuse improves performance.

If the buffer pool is too small: more disk reads, lower PLE, and slower query execution.

Quick Summary: The buffer pool is SQL Server's main memory cache. When data is needed, SQL Server reads the page from disk into the buffer pool. Future reads hit memory (logical read) instead of disk (physical read). SQL Server uses all available memory for the buffer pool — giving it more RAM directly improves read performance.
Q25:

What Is Page Life Expectancy (PLE) and What Does It Indicate?

Senior

Answer

Page Life Expectancy (PLE) measures how long pages remain in the buffer pool before eviction.

High PLE = good memory health.
Low PLE = memory pressure, excessive physical reads.

Common causes of low PLE: bad queries, missing indexes, large scans, spills.

Quick Summary: PLE measures how long (in seconds) a data page stays in the buffer pool before being pushed out by new pages. A healthy PLE is typically 300+ seconds (5 minutes). Dropping PLE means SQL Server is constantly evicting and re-reading pages — a sign of memory pressure or queries doing large scans.
Q26:

How Does the SQL Server Transaction Log Work Internally?

Senior

Answer

The transaction log is an append-only structure that records all modifications. SQL Server uses Write-Ahead Logging (WAL):

  • Log records are written to disk first
  • Data pages update later

This ensures atomicity and durability. Log truncation depends on checkpoints and active transactions.

Quick Summary: Every change is written to the transaction log before hitting data pages (Write-Ahead Logging). The log is sequential — fast to write. At CHECKPOINT, dirty pages flush to disk. On crash, SQL Server reads the log: redo commits that weren't in data files, undo uncommitted transactions. This sequence ensures full recoverability.
Q27:

What Is the Purpose of Checkpoints in SQL Server?

Senior

Answer

Checkpoints flush dirty pages (modified pages) from memory to disk.

Benefits:

  • Shorter crash recovery time
  • Enables transaction log truncation
  • Reduces buffer pool pressure
Quick Summary: CHECKPOINT writes all dirty pages (modified in buffer pool but not yet on disk) to data files. This shortens crash recovery — SQL Server only needs to replay log records after the last checkpoint. Without checkpoints, recovery would require replaying the entire log from the beginning, which could take hours.
Q28:

What Are Dirty Pages and Clean Pages in SQL Server?

Senior

Answer

Dirty pages: modified in buffer pool but not yet persisted to disk.
Clean pages: identical to the disk version.

Checkpoints convert dirty pages to clean pages. Too many dirty pages increase recovery time and degrade performance.

Quick Summary: Dirty pages are buffer pool pages that have been modified but not yet written to disk. Clean pages are in sync with disk. CHECKPOINT flushes dirty pages. Tracking dirty vs. clean pages is how SQL Server knows exactly what needs to be written during a checkpoint and what needs to be redone after a crash.
Q29:

What Are Latches and How Do They Differ From Locks?

Senior

Answer

Latches protect internal memory structures and are non-transactional.

Locks protect logical data consistency and last for transaction duration.

Latch waits = engine pressure.
Lock waits = concurrency or blocking issues.

Quick Summary: Latches are lightweight, short-lived synchronization objects for physical consistency of in-memory pages — they don't appear in deadlock graphs and aren't tracked like locks. Locks are for logical, transaction-level concurrency — they persist for the transaction duration and show up in blocking queries and deadlocks.
Q30:

What Is Lock Escalation and How Does It Impact Performance?

Senior

Answer

Lock escalation converts many row/page locks into a single table lock to reduce overhead.

Impact:

  • Fewer locks = lower memory use
  • But higher blocking risk
Quick Summary: When individual row or page locks hit a threshold (~5,000), SQL Server escalates to a single table lock to reduce lock manager overhead. This improves efficiency but completely blocks other transactions from accessing the table. Problematic in high-concurrency workloads — can cause cascading blocking chains.
Q31:

What Is the Cardinality Estimator and Why Is It Important?

Senior

Answer

The Cardinality Estimator (CE) predicts row counts used to choose join types, memory grants, and access paths.

Poor estimates → poor execution plans.

CE depends on statistics, data distribution, and query structure.

Quick Summary: The cardinality estimator predicts how many rows each plan step will produce based on statistics histograms. Wrong predictions lead to wrong join types, bad memory grants, and poor index choices. SQL Server 2014 introduced a new CE (model 120); older compatibility levels use the legacy model. Statistics quality is the root input.
Q32:

What Are Statistics in SQL Server and How Do They Affect Performance?

Senior

Answer

Statistics describe data distribution that SQL Server uses to estimate row counts.

Bad or outdated stats → misestimation → poor plans → slow queries.

Quick Summary: Statistics are metadata objects containing histograms of column value distribution. SQL Server uses them to estimate selectivity — how many rows a filter returns. Good statistics → accurate estimates → efficient plans. Stale statistics → wrong estimates → scans instead of seeks, wrong joins, wasted memory grants.
Q33:

What Are Memory Grants and Why Do They Matter?

Senior

Answer

SQL Server allocates memory for sorting, hashing, and aggregations.

Incorrect grants:

  • Too small → spills to TempDB
  • Too large → resource starvation
Quick Summary: Memory grants are pre-allocated memory for sort and hash operations in a query. SQL Server estimates how much memory is needed based on cardinality estimates. Grant too small → spill to TempDB (slow). Grant too large → other queries starved of memory. Fix by updating statistics for accurate row estimates.
Q34:

What Is a Spill to TempDB and Why Does It Occur?

Senior

Answer

A spill happens when the memory grant is too small and SQL Server must offload intermediate data to TempDB.

Common causes: inaccurate cardinality estimation, heavy sorts, hash joins, window functions.

Quick Summary: A spill to TempDB happens when SQL Server allocated less memory than a sort or hash operation actually needed. Overflow data is written to TempDB disk, which is orders of magnitude slower than memory. Caused by underestimated row counts (bad statistics). Look for sort warnings and hash warnings in execution plans.
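Cached plans that spilled can be found from query stats. A sketch — the `total_spills` column was added in SQL Server 2017, so this assumes that version or later:

```sql
-- Cached queries that spilled to TempDB (total_spills: SQL Server 2017+)
SELECT TOP (10)
       qs.total_spills,
       qs.execution_count,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.total_spills > 0
ORDER BY qs.total_spills DESC;
```

On older versions, the `sort_warning` and `hash_warning` Extended Events, or the warning icons in actual execution plans, serve the same purpose.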
Q35:

What Is the SQL Server Query Processor?

Senior

Answer

The Query Processor:

  • Parses T-SQL
  • Binds objects
  • Optimizes using cost-based rules
  • Generates physical plans
Quick Summary: The query processor is the component that takes your SQL statement and executes it. It includes the parser (syntax check), algebrizer (binding to objects), query optimizer (plan selection), and execution engine (running the plan). The query processor is why well-written SQL runs fast — and why poorly-written SQL does not.
Q36:

What Are Logical and Physical Operators in Execution Plans?

Senior

Answer

Logical operators represent conceptual operations (join, project, filter).

Physical operators are real engine implementations (hash join, nested loop, merge join).

Quick Summary: Logical operators are the high-level operations in the plan: Filter, Join, Aggregate, Sort. Physical operators are the specific algorithms chosen: Nested Loops (for join), Hash Match (for join or aggregate), Sort (for ordering). One logical operation can be implemented by several candidate physical operators — the optimizer picks one based on cost. Understanding both levels helps you read execution plans.
Q37:

How Does SQL Server Choose Join Algorithms?

Senior

Answer

Join selection depends on:

  • Row count estimates
  • Sort order
  • Indexes
  • Estimated CPU and I/O cost
Quick Summary: SQL Server picks join algorithms based on input size and available indexes. Small outer table with indexed inner → Nested Loops. Both inputs sorted by join key → Merge Join (fast, no extra memory). Large unsorted inputs → Hash Match (memory-heavy, may spill). Missing indexes push everything toward Hash Match.
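The three algorithms can be compared side by side by forcing each one with a join hint. A sketch, assuming hypothetical `dbo.Orders` and `dbo.Customers` tables:

```sql
-- Force Nested Loops: good when the outer input is small
-- and the inner side has an index on CustomerID
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER LOOP JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

-- Force Hash Match: the optimizer's fallback for large, unindexed inputs
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER HASH JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;
```

Comparing the estimated costs and actual I/O of the forced plans shows why the optimizer chose what it did; hints like these belong in diagnosis, rarely in production code.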
Q38:

What Is a Hash Match and Why Is It Heavy on TempDB?

Senior

Answer

A hash match creates an in-memory hash table for joins and aggregates.

If memory is insufficient → spills to TempDB → huge slowdown.

Quick Summary: Hash Match builds a hash table from the smaller input (build phase), then scans the larger input and probes the hash table (probe phase). Build phase requires memory — if underestimated, it spills to TempDB. Heavy on both memory and TempDB I/O. Common when indexes are missing and inputs are large.
Q39:

What Are Log Buffers and How Do Log Flushes Work?

Senior

Answer

Changes go to an in-memory log buffer. A log flush occurs:

  • On transaction commit
  • When buffer fills
  • On checkpoint

Slow log storage = slow commits.

Quick Summary: Log records are written to in-memory log buffers first, then flushed to disk on transaction commit, when the 60KB log block fills, or at CHECKPOINT. COMMIT can't return until the log record is hardened to disk — that's what makes transactions durable. Log flushes are sequential writes — far faster than random data page I/O.
Q40:

What Is the Difference Between Row Store and Column Store in SQL Server?

Senior

Answer

Row store: row-by-row storage; best for OLTP.

Column store: column-by-column storage; best for analytics, high compression, parallel execution.

Quick Summary: Row store organizes data by rows — all columns for a row are stored together. Good for OLTP (point lookups, row modifications). Column store organizes data by column — ideal for analytics where queries read a few columns across millions of rows. Column store also compresses heavily and enables batch-mode processing.
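Both storage layouts are created with ordinary index DDL. A sketch, assuming a hypothetical `dbo.Sales` fact table:

```sql
-- Clustered columnstore: converts the entire table to column storage
-- (the typical choice for data-warehouse fact tables)
CREATE CLUSTERED COLUMNSTORE INDEX CCI_Sales ON dbo.Sales;

-- Nonclustered columnstore: keeps the rowstore for OLTP workloads
-- and adds a column-oriented copy of selected columns for analytics
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_Sales
ON dbo.Sales (SaleDate, ProductID, Quantity, Amount);
```

The nonclustered variant is the basis of "real-time operational analytics": transactional writes hit the rowstore while analytic scans read the columnstore copy.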
