Top MS SQL Interview Questions

Curated MS SQL interview questions and answers across difficulty levels.

MS SQL Interview Questions & Answers

Welcome to our comprehensive collection of MS SQL interview questions and answers. This page contains expertly curated interview questions covering all aspects of MS SQL, from fundamental concepts to advanced topics. Whether you're preparing for an entry-level position or a senior role, you'll find questions tailored to your experience level.

Our MS SQL interview questions are designed to help you:

  • Understand core concepts and best practices in MS SQL
  • Prepare for technical interviews at all experience levels
  • Master both theoretical knowledge and practical application
  • Build confidence for your next MS SQL interview

Each question includes detailed answers and explanations to help you understand not just what the answer is, but why it's correct. We cover topics ranging from basic MS SQL concepts to advanced scenarios that you might encounter in senior-level interviews.

Use the filters below to find questions by difficulty level (Entry, Junior, Mid, Senior, Expert) or focus specifically on code challenges. Each question is carefully crafted to reflect real-world interview scenarios you'll encounter at top tech companies, startups, and MNCs.

Questions

160 questions
Q1:

What is MS SQL Server and where is it commonly used?

Entry

Answer

MS SQL Server is Microsoft’s enterprise relational database management system (RDBMS), widely used in finance, e-commerce, SaaS, ERPs, CRMs, and large web applications. It provides secure, consistent, and high-performance data storage.

Q2:

Explain Database, Table, Row, and Column in relational terms.

Entry

Answer

A database stores all data objects. A table contains structured data. A row represents one record. A column represents a specific data field with a defined type.

Q3:

What is a Primary Key and why is it important?

Entry

Answer

A primary key uniquely identifies each row. It ensures uniqueness, improves indexing, and maintains reliable data integrity.
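
A minimal T-SQL sketch of declaring a primary key (the dbo.Customers table and its columns are illustrative):

```sql
-- PRIMARY KEY enforces uniqueness and NOT NULL; by default it
-- also creates a clustered index on the key column.
CREATE TABLE dbo.Customers
(
    CustomerId INT IDENTITY(1,1) CONSTRAINT PK_Customers PRIMARY KEY,
    Email      NVARCHAR(256) NOT NULL
);
```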

Q4:

What is a Foreign Key and what problem does it solve?

Entry

Answer

A foreign key creates a relationship between tables and prevents orphaned or invalid data by enforcing referential integrity.
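
A minimal sketch (dbo.Orders and the referenced dbo.Customers table are illustrative):

```sql
-- The FOREIGN KEY rejects any CustomerId that does not exist in
-- dbo.Customers, preventing orphaned order rows.
CREATE TABLE dbo.Orders
(
    OrderId    INT IDENTITY(1,1) PRIMARY KEY,
    CustomerId INT NOT NULL
        CONSTRAINT FK_Orders_Customers
        REFERENCES dbo.Customers (CustomerId),
    OrderDate  DATETIME2 NOT NULL DEFAULT SYSDATETIME()
);
```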

Q5:

What are Constraints in MS SQL?

Entry

Answer

Constraints like primary key, foreign key, unique, check, not null, and default rules ensure accurate, valid, and reliable data.

Q6:

What is Normalization and why do we use it?

Entry

Answer

Normalization organizes data to eliminate redundancy, improve consistency, and reduce anomalies during insert, update, or delete operations.

Q7:

Explain 1NF, 2NF, and 3NF briefly.

Entry

Answer

1NF: No repeating groups; atomic values.
2NF: Remove partial dependencies.
3NF: Remove transitive dependencies.

Q8:

What is Denormalization and when is it used?

Entry

Answer

Denormalization introduces controlled redundancy to improve read performance, especially in reporting and analytics systems.

Q9:

What is an Index and why is it important?

Entry

Answer

An index speeds up data searches by avoiding full table scans. Too many indexes slow down inserts/updates due to maintenance.

Q10:

What is the difference between Clustered and Non-Clustered Index?

Entry

Answer

Clustered index: Defines physical row order.
Non-clustered index: A separate structure pointing to actual rows.
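
For example (index and table names are illustrative):

```sql
-- The clustered index defines the physical order of dbo.Orders rows;
-- a table can have only one.
CREATE CLUSTERED INDEX IX_Orders_OrderId ON dbo.Orders (OrderId);

-- A non-clustered index is a separate B-Tree whose leaf entries
-- point back to the clustered key.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate);
```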

Q11:

What is a View and why is it used?

Entry

Answer

A view is a virtual table created from a query. Used for simplicity, security, reusability, and hiding complex queries.
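
A small illustrative example (table and column names are assumptions):

```sql
-- The view hides the join; callers query it like a table and never
-- see the underlying schema.
CREATE VIEW dbo.vw_CustomerOrders
AS
SELECT c.CustomerId, c.Email, o.OrderId, o.OrderDate
FROM dbo.Customers AS c
JOIN dbo.Orders    AS o ON o.CustomerId = c.CustomerId;
```

Usage: `SELECT * FROM dbo.vw_CustomerOrders WHERE CustomerId = 42;`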

Q12:

What is a Stored Procedure and what is it used for?

Entry

Answer

A stored procedure is a precompiled SQL code block used for business logic, improving performance and reducing network overhead.
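
A minimal sketch (procedure and table names are illustrative):

```sql
CREATE PROCEDURE dbo.usp_GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;  -- suppress row-count messages, reducing network chatter
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END;
GO

-- The application sends only the procedure name and parameter:
EXEC dbo.usp_GetOrdersByCustomer @CustomerId = 42;
```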

Q13:

What are User-Defined Functions (UDFs)?

Entry

Answer

UDFs return calculated values based on parameters. Used for reusable logic, validation, and transformations.

Q14:

What is a Transaction and why is it important?

Entry

Answer

A transaction ensures a group of SQL operations is treated as a single unit—either all succeed or all fail (ACID compliance).
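
A classic transfer sketch (the dbo.Accounts table is hypothetical):

```sql
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountId = 2;

    COMMIT TRANSACTION;        -- both updates become durable together
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;  -- on any error, neither update is applied
    THROW;
END CATCH;
```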

Q15:

Explain ACID properties in SQL.

Entry

Answer

Atomicity: All-or-nothing.
Consistency: Valid states only.
Isolation: No interference between transactions.
Durability: Data persists after commit.

Q16:

What is a Deadlock in SQL Server?

Entry

Answer

A deadlock occurs when transactions block each other while waiting for resources. SQL Server resolves it by killing one transaction.

Q17:

What is Locking in MS SQL Server?

Entry

Answer

Locking controls concurrent access using shared (read), exclusive (write), and update locks to maintain data consistency.

Q18:

What is a Schema in SQL?

Entry

Answer

A schema is a logical container grouping tables, views, and procedures for better organization and security.

Q19:

Difference between DELETE and TRUNCATE?

Entry

Answer

DELETE: Removes rows one at a time, fully logged, supports a WHERE clause, and fires DELETE triggers.
TRUNCATE: Deallocates pages, minimally logged, very fast, resets identity values, and cannot use a WHERE clause.
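
For example (table names are illustrative):

```sql
-- DELETE can target specific rows and logs each removal:
DELETE FROM dbo.Orders WHERE OrderDate < '2020-01-01';

-- TRUNCATE removes all rows by deallocating pages:
TRUNCATE TABLE dbo.StagingOrders;
```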

Q20:

What is an Execution Plan and why is it important?

Entry

Answer

An execution plan shows how SQL Server executes a query—joins, indexes, scans, seeks. It is vital for tuning slow queries.

Q21:

What is the main purpose of indexing in SQL Server?

Entry

Answer

Indexing improves data retrieval speed by avoiding full table scans.

Indexes act like optimized search maps that help SQL Server locate rows faster.

However, too many indexes slow down write performance due to maintenance overhead.

Q22:

How does SQL Server decide between a Table Scan, Index Scan, and Index Seek?

Entry

Answer

SQL Server chooses access methods based on filters, available indexes, and cost estimates.

Table Scan: Reads all rows when no useful index exists.

Index Scan: Reads all index entries for broad filters.

Index Seek: Most efficient; jumps directly to matching rows.

Q23:

What is a Composite Index and when is it used?

Entry

Answer

A composite index includes multiple columns, used when queries filter or sort using those column combinations.

The order of columns is important because SQL Server can only efficiently use leading key columns.

Q24:

What is Bookmark Lookup and why can it cause performance problems?

Entry

Answer

Bookmark lookup occurs when SQL uses a non-clustered index but must fetch extra columns from the base table.

This becomes slow when many rows match, increasing I/O and reducing performance.

Q25:

What is a Covering Index and why is it powerful?

Entry

Answer

A covering index contains all columns needed for a query.

SQL Server can return results from the index alone without touching the table.

This greatly improves performance for read-heavy queries.
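
A sketch of a covering index (names are illustrative):

```sql
-- INCLUDE adds non-key columns to the leaf level, so the query below
-- is answered from the index alone, with no key lookup.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Cover
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate, TotalAmount);

SELECT OrderDate, TotalAmount
FROM dbo.Orders
WHERE CustomerId = 42;
```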

Q26:

What are SQL Server Statistics and why are they essential?

Entry

Answer

Statistics describe data distribution in columns.

They help the optimizer estimate row counts and choose efficient execution plans.

Q27:

How do outdated or missing statistics affect query performance?

Entry

Answer

Outdated statistics produce inaccurate row estimates.

SQL Server may choose inefficient join types or scans, causing slow performance.

Regular statistic updates maintain optimal plans.

Q28:

What is Parameter Sniffing and why does it occur?

Entry

Answer

SQL Server caches execution plans for reuse.

Parameter sniffing occurs when a cached plan works well for one parameter but poorly for another due to data skew.

Q29:

What is an Execution Plan and how does SQL Server generate it?

Entry

Answer

An execution plan is SQL Server’s strategy for running a query.

The optimizer evaluates multiple plan options and picks the lowest-cost one.

Q30:

What is the difference between Estimated and Actual Execution Plans?

Entry

Answer

Estimated plans show predicted row counts before execution.

Actual plans include real runtime row counts and performance metrics.

Q31:

What is Cardinality Estimation and why is it important?

Entry

Answer

Cardinality estimation predicts how many rows a query will process.

Accurate estimates are critical for choosing efficient join types and memory usage.

Q32:

What are Index Fragmentation and Fill Factor?

Entry

Answer

Fragmentation occurs when index pages become disordered, slowing scans and seeks.

Fill factor controls how much free space to leave during index creation to reduce fragmentation.

Q33:

What is the Query Optimizer and how does it work?

Entry

Answer

The optimizer evaluates many query strategies using statistics and metadata.

It selects the lowest estimated cost plan.

Q34:

What is a Filtered Index and when is it beneficial?

Entry

Answer

A filtered index stores only rows matching a condition.

It reduces index size and improves seek performance for selective queries.
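
For example (table, column, and status values are assumptions):

```sql
-- Indexes only the open orders, typically a small slice of the table.
CREATE NONCLUSTERED INDEX IX_Orders_Open
    ON dbo.Orders (CustomerId)
    WHERE Status = 'Open';
```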

Q35:

What is a Hint and when should it be used?

Entry

Answer

Hints override optimizer decisions by forcing specific behavior.

They should be used only when the optimizer consistently chooses suboptimal plans.

Q36:

What is a Hotspot in indexing terms?

Entry

Answer

A hotspot occurs when many concurrent operations target the same physical index location.

This causes locking and contention, reducing performance.

Q37:

What is the Role of the Query Store in SQL Server?

Entry

Answer

Query Store tracks query history, execution plans, and performance trends.

It helps diagnose regressions and enforce stable plans.

Q38:

What is an Index Seek Predicate vs Residual Predicate?

Entry

Answer

Seek predicates allow precise navigation through an index.

Residual predicates apply filters after the seek when the index does not fully cover the query.

Q39:

Why does SQL Server choose a Hash Match instead of Nested Loop or Merge Join?

Entry

Answer

SQL Server uses hash match for large, unsorted datasets where hashing is cheaper than repeated seeks.

Poor row estimates may also cause SQL to choose hashing, sometimes leading to spills.

Q40:

Explain the internal structure of a SQL Server index (Clustered vs Non-Clustered). How does it affect performance?

Junior

Answer

Clustered Index: Defines the physical order of rows in the table. Implemented as a B-Tree with root, intermediate, and leaf nodes. The leaf level contains actual data rows.

Non-Clustered Index: Also a B-Tree, but leaf nodes store index keys and row locators (RID or clustered key).

Performance Impact: Clustered indexes help range queries, while non-clustered indexes help quick lookups. Poor clustered key choice can make indexes large and slow.

Q41:

What is an Index Seek vs Index Scan vs Table Scan? When is each used?

Junior

Answer

Index Seek: SQL jumps directly to matched rows. Happens with selective filters and proper indexes.

Index Scan: SQL reads entire index. Happens with broad filters or missing indexes.

Table Scan: Reads whole table. Used when no index exists or table is small.

Q42:

How does SQL Server choose an Execution Plan?

Junior

Answer

SQL Server uses a cost-based optimizer that parses the query, evaluates different plan options, estimates CPU/I/O cost, and chooses the lowest-cost plan.

Q43:

What are Statistics? Why do bad statistics slow queries?

Junior

Answer

Statistics contain value distribution info. SQL uses them to estimate row counts. Bad or outdated statistics cause wrong estimations, leading to slow plans.

Q44:

What is a Covering Index and when should you create one?

Junior

Answer

A covering index contains all columns needed for a query. It removes key lookups and improves performance in read-heavy workloads.

Q45:

What is a Key Lookup? Why is it expensive?

Junior

Answer

Key lookup happens when a non-clustered index finds a row but SQL needs extra columns. It causes many random I/O operations and slows queries.

Q46:

Explain Lock Escalation in SQL Server.

Junior

Answer

When too many row/page locks exist, SQL upgrades to a table lock. This reduces overhead but lowers concurrency.

Q47:

What is Page Splitting? How does Fill Factor help?

Junior

Answer

Page splitting happens when SQL inserts rows into a full page. It increases fragmentation. Fill Factor leaves free space to reduce page splits.

Q48:

Explain SQL Server Memory Architecture (Buffer Pool, Plan Cache).

Junior

Answer

Buffer Pool: Stores data and index pages.

Plan Cache: Stores compiled query plans for reuse.

Q49:

What is Parameter Sniffing and how do you fix it?

Junior

Answer

SQL optimizes a plan based on first-used parameter value. Fix options include using RECOMPILE, OPTIMIZE FOR UNKNOWN, or rewriting queries.
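
Two common fixes, sketched on an illustrative parameterized query (as it might appear inside a procedure):

```sql
-- Option 1: compile a fresh plan on every execution.
SELECT OrderId FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (RECOMPILE);

-- Option 2: optimize for average density instead of the sniffed value.
SELECT OrderId FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR UNKNOWN);
```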

Q50:

When should you use a Filtered Index?

Junior

Answer

Use when queries target specific subsets (e.g., Active records). Improves performance and reduces index size.

Q51:

What is a Heap? Should you use it?

Junior

Answer

A heap is a table without a clustered index. Heaps suit fast bulk loads but are slow for lookups; most tables should have a clustered index.

Q52:

What is Partitioning in SQL Server?

Junior

Answer

Partitioning splits a table into smaller pieces, improving maintenance and performance for large datasets.

Q53:

What is Query Store and why is it useful?

Junior

Answer

Query Store keeps execution history, performance metrics, and helps detect regressions. It allows forcing stable plans.

Q54:

What is Cardinality Estimation in SQL Server?

Junior

Answer

It predicts row counts for query operations. Accurate estimates help SQL choose the best join and operator strategy.

Q55:

Explain Memory Grants in SQL Server.

Junior

Answer

SQL Server requests memory for sort and hash operations. Over-grants waste memory and make other queries wait for grants; under-grants cause spills to TempDB.

Q56:

What are Hot Spots in indexing?

Junior

Answer

Hotspots occur when many inserts hit the same index page (e.g., identity keys). They cause blocking and contention.

Q57:

Pessimistic vs Optimistic Concurrency in SQL Server.

Junior

Answer

Pessimistic uses locks to avoid conflicts. Optimistic uses row versioning to detect conflicts at commit without blocking readers.

Q58:

What are Wait Types in SQL Server?

Junior

Answer

Wait types show where SQL is spending time (CPU, I/O, locks). Examples: CXPACKET, PAGEIOLATCH, LCK_M_X.

Q59:

Explain TempDB internals and why it affects performance.

Junior

Answer

TempDB stores temp tables, spills, versioning, and intermediate results. Best practices include multiple equal-sized data files and SSD storage.

Q60:

Explain in detail how SQL Server processes different types of JOINs internally.

Mid

Answer

SQL Server evaluates JOINs using physical operators such as Nested Loops, Merge Join, and Hash Join. INNER, LEFT, RIGHT, and FULL JOINs all use these strategies depending on data size, indexes, and sorting.

INNER JOIN: Returns matching rows only.

LEFT/RIGHT JOIN: Returns matching rows plus NULL for non-matching.

FULL JOIN: Returns all matches + unmatched from both sides.

CROSS JOIN: Cartesian product; typically expensive.

Proper indexing affects which join operator SQL Server chooses.

Q61:

Explain Nested Loops, Merge Join, and Hash Join with when each is chosen.

Mid

Answer

Nested Loops: Best for small outer input and indexed inner table. Great for OLTP random lookups.

Merge Join: Requires sorted inputs. Very fast for large, sorted datasets.

Hash Join: Best for large, unsorted sets. Spills to TempDB if memory is insufficient.

Q62:

What is a Transaction? Explain ACID with real SQL Server implications.

Mid

Answer

A transaction is a unit of work ensured by ACID properties:

  • Atomicity: All-or-nothing behavior.
  • Consistency: Constraints must remain valid.
  • Isolation: Controls concurrency.
  • Durability: Committed data survives crashes via logging.

Q63:

Explain SQL Server Transaction Isolation Levels with practical use-cases.

Mid

Answer

Isolation levels include:

READ UNCOMMITTED: Allows dirty reads; used in analytics.

READ COMMITTED: Default; prevents dirty reads.

RCSI: Uses version store; avoids blocking.

REPEATABLE READ: Prevents non-repeatable reads.

SERIALIZABLE: Highest isolation; heavy locking.

SNAPSHOT: Uses row versioning; avoids shared locks.
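
For example (the SalesDb database name is illustrative):

```sql
-- Session-level: affects subsequent transactions on this connection.
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;

-- Database-level: turn READ COMMITTED into RCSI using row versioning.
ALTER DATABASE SalesDb SET READ_COMMITTED_SNAPSHOT ON;
```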

Q64:

Explain Pessimistic vs Optimistic Concurrency in SQL Server.

Mid

Answer

Pessimistic: Uses locks to avoid conflicts; good for heavy-write systems.

Optimistic: Uses row versioning; detects conflicts at commit.

Q65:

What is Deadlock? How does SQL Server detect and resolve it?

Mid

Answer

A deadlock occurs when sessions wait on each other indefinitely. SQL Server detects deadlocks every 5 seconds and selects a victim to kill.

Prevention includes using consistent resource order, short transactions, and proper indexes.

Q66:

Explain Lock Types (Shared, Update, Exclusive, Intent).

Mid

Answer

Shared (S): For reading.

Exclusive (X): For writing.

Update (U): Prevents deadlocks during read-to-write transitions.

Intent Locks: Used to manage lock hierarchy efficiently.

Q67:

What are Lock Waits and Blocking Chains? How do you debug them?

Mid

Answer

Blocking occurs when a session holds a lock required by another. Debugging tools include:

  • sp_whoisactive
  • sys.dm_exec_requests
  • Extended events
  • Activity Monitor

Q68:

Explain TempDB usage in detail.

Mid

Answer

TempDB stores temp tables, table variables, hash join work tables, version store, spills, and cursor data. Best practices include multiple equal-sized files and SSD storage.

Q69:

Explain the Write-Ahead Logging (WAL) mechanism.

Mid

Answer

Before modifying a data page, SQL writes the log record first. This ensures durability and crash recovery.

Q70:

Describe Transaction Log Architecture. Why does the log grow?

Mid

Answer

The log consists of Virtual Log Files (VLFs). Log grows due to long-running transactions, no log backups, replication delays, or index rebuilds inside transactions.

Q71:

Explain CHECKPOINT and why it is important.

Mid

Answer

CHECKPOINT flushes dirty pages to disk to reduce crash recovery time and manage buffer pressure.

Q72:

What is the difference between Full, Differential, and Log Backups?

Mid

Answer

Full: Entire database.

Differential: Changes since last full.

Log: Captures all log records since last log backup.
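
For example (database name and paths are illustrative):

```sql
BACKUP DATABASE SalesDb TO DISK = N'D:\Backups\SalesDb_full.bak';

BACKUP DATABASE SalesDb TO DISK = N'D:\Backups\SalesDb_diff.bak'
    WITH DIFFERENTIAL;

BACKUP LOG SalesDb TO DISK = N'D:\Backups\SalesDb_log.trn';
```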

Q73:

Explain Recovery Models (Simple, Full, Bulk-Logged).

Mid

Answer

Simple: Auto-truncates the log; no point-in-time recovery.

Full: Requires log backups; supports point-in-time recovery.

Bulk-Logged: Minimally logs bulk operations (e.g., BULK INSERT, index rebuilds).

Q74:

What is Tail Log Backup?

Mid

Answer

A backup taken before restoring a damaged DB to prevent data loss. Captures last active log records.

Q75:

Explain the Restore Sequence in SQL Server.

Mid

Answer

Restore order: Full → Differential → Logs. Use NORECOVERY until final restore, then RECOVERY.
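
A sketch of the sequence (names and paths are illustrative):

```sql
RESTORE DATABASE SalesDb FROM DISK = N'D:\Backups\SalesDb_full.bak'
    WITH NORECOVERY;
RESTORE DATABASE SalesDb FROM DISK = N'D:\Backups\SalesDb_diff.bak'
    WITH NORECOVERY;
RESTORE LOG SalesDb FROM DISK = N'D:\Backups\SalesDb_log.trn'
    WITH RECOVERY;  -- final step brings the database online
```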

Q76:

What is AlwaysOn Availability Groups? Explain components.

Mid

Answer

AG consists of primary replica, secondary replicas, listener, and modes (sync/async). Provides HA, DR, and readable secondaries.

Q77:

Explain Log Shipping.

Mid

Answer

Log Shipping copies log backups to a secondary server for restore. Simple, reliable, but no automatic failover.

Q78:

Explain Replication Types (Snapshot, Transactional, Merge).

Mid

Answer

Snapshot: Full copy; simple.

Transactional: Near real-time; best for reporting.

Merge: Bidirectional sync for disconnected systems.

Q79:

What is Database Mirroring? Why is it deprecated?

Mid

Answer

Mirroring uses principal + mirror + witness for automatic failover. Deprecated in favor of Availability Groups.

Q80:

What are Stored Procedures and why are they preferred over sending raw SQL from applications?

Senior

Answer

Stored procedures are precompiled program units that live inside SQL Server. Instead of sending raw SQL text for every request, the application sends only the procedure name and parameters. SQL Server then executes pre-validated logic using a cached execution plan.

Key advantages over raw ad-hoc SQL:

  • Reduced network traffic: Only parameter values are sent, not large query strings.
  • Plan reuse: SQL Server can cache and reuse execution plans for procedures, reducing compilation overhead and stabilizing performance.
  • Centralized business logic: Data-related rules (validation, transformations, audit operations) are centralized and versioned at the database level, simplifying deployments and multi-application integration.
  • Security and least-privilege: Applications can be granted EXECUTE rights on procedures instead of direct table access, improving security boundaries and auditability.
  • Reduced SQL injection risk: The SQL logic is fixed inside the procedure; parameters are bound, greatly decreasing injection surfaces compared to concatenated SQL strings.

Q81:

Explain the difference between Stored Procedures and Functions in SQL Server.

Senior

Answer

Stored procedures and functions both encapsulate reusable logic, but they serve different purposes and behave differently inside SQL Server.

Stored procedures:

  • Primarily designed to perform actions: DML operations, transaction control, administrative tasks.
  • Can return multiple result sets and output parameters.
  • Cannot be directly used in SELECT, WHERE, or JOIN clauses.
  • Commonly used as API endpoints from the application layer.

Functions:

  • Intended for computation and value-returning logic.
  • Scalar functions return a single value; table-valued functions return a rowset.
  • Can be used inside SELECT, WHERE, and JOIN clauses like expressions or tables.
  • Have tighter restrictions: no explicit transaction control; most cannot perform data-modifying side effects.

In short: procedures orchestrate operations, while functions compute values that integrate with queries.

Q82:

Explain Scalar Functions vs Inline Table-Valued Functions vs Multi-Statement Table-Valued Functions.

Senior

Answer

SQL Server supports several function types, each with distinct performance and usage characteristics.

Scalar functions:

  • Return a single scalar value (e.g., INT, VARCHAR).
  • Evaluated per row when used in queries, often resulting in RBAR (Row-By-Agonizing-Row) execution.
  • Act as black boxes to the optimizer, frequently preventing parallelism and leading to severe performance issues.

Inline table-valued functions (iTVFs):

  • Return a table defined by a single RETURN (SELECT ...) statement.
  • Logically similar to parameterized views.
  • Fully inlined into the calling query, enabling the optimizer to generate efficient set-based plans with good cardinality estimates.
  • Usually the best-performing function type for set-based logic.

Multi-statement table-valued functions (mTVFs):

  • Declare an internal table variable and populate it across multiple statements.
  • By default, SQL Server assumes a fixed row estimate (historically 1 row), often leading to poor plans.
  • Limited statistics and cardinality information, frequently causing spills and suboptimal joins.

For performance-sensitive code, prefer inline TVFs over scalar or multi-statement TVFs whenever possible.
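
A minimal inline TVF sketch (function and table names are illustrative):

```sql
-- A single RETURN (SELECT ...): the optimizer inlines this into the
-- calling query like a parameterized view.
CREATE FUNCTION dbo.fn_OrdersForCustomer (@CustomerId INT)
RETURNS TABLE
AS
RETURN
(
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
);
GO

SELECT OrderId FROM dbo.fn_OrdersForCustomer(42);
```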

Q83:

Why can Scalar Functions severely degrade performance?

Senior

Answer

Scalar functions often look clean and reusable but can be hidden performance killers in production workloads.

Reasons they degrade performance:

  • They are executed once per row in a result set, turning set-based operations into row-by-row computation.
  • The optimizer cannot see inside the function body, treating it as a black box. This prevents many optimizations and accurate cardinality estimates.
  • In many SQL Server versions, the presence of scalar functions in the SELECT or WHERE list can disable parallelism, forcing single-threaded plans for otherwise parallelizable queries.
  • They frequently increase CPU usage and query duration dramatically on large datasets.

Mitigations include:

  • Refactoring to inline TVFs or pure set-based SQL.
  • Replacing scalar functions with computed columns (possibly indexed) when appropriate.
  • Inlining logic into the main query where feasible.

Q84:

What are Views? Explain their benefits and limitations.

Senior

Answer

A view is a named, virtual table defined by a SELECT query. It does not store data itself (unless indexed) but presents a reusable query abstraction.

Benefits:

  • Abstraction and simplification: Hides complex joins and expressions behind a simple interface.
  • Security: Exposes only selected columns/rows while hiding underlying schemas and tables.
  • Reuse: Centralizes business logic or filter logic so that many queries can benefit from a single definition.
  • Schema evolution: Application code can query the view even if underlying tables change, as long as the view's contract is preserved.

Limitations:

  • Views can become stacked (views on views), leading to overly complex, hard-to-tune execution plans.
  • Not all views are updatable; complex joins, aggregates, and DISTINCT can prevent direct DML.
  • They do not inherently improve performance unless combined with indexed views or used to encapsulate optimal query patterns.
Q85:

What are Indexed Views and when should you use them?

Senior

Answer

Indexed views materialize the result of a view and maintain it on disk like a physical table. They are backed by a clustered index and optionally additional nonclustered indexes.

When to use:

  • Heavy, repetitive aggregations or joins over large datasets (e.g., reporting queries over transactional tables).
  • Scenarios where read performance is critical and data changes are relatively moderate.
  • To precompute expensive expressions and reduce CPU usage under analytic workloads.

Trade-offs:

  • Every insert/update/delete on the underlying tables must update the indexed view, increasing DML overhead.
  • Strict requirements apply (e.g., schema binding, deterministic expressions).
  • Can complicate troubleshooting if developers are unaware of their presence.

They are powerful for read-intensive workloads but should be used selectively and measured carefully in write-heavy systems.

Q86:

What are Triggers? Explain AFTER vs INSTEAD OF Triggers.

Senior

Answer

Triggers are special stored procedures that automatically execute in response to DML events (INSERT, UPDATE, DELETE) or certain DDL operations.

AFTER triggers:

  • Fire after the base operation logically succeeds but before the transaction commits.
  • Often used for auditing, logging, or enforcing complex constraints that span multiple tables.
  • Run within the transaction, so failures can cause rollbacks and increase latency.

INSTEAD OF triggers:

  • Fire instead of the original DML operation.
  • Commonly used on views to simulate complex update logic or route changes to multiple underlying tables.
  • Give full control over how changes are applied.

Both types must be designed carefully to avoid recursion, hidden performance issues, and unexpected side effects.

Q87:

Why are Triggers generally not preferred in large systems?

Senior

Answer

Triggers are powerful but can introduce hidden complexity and performance problems in large systems.

Key concerns:

  • Hidden behavior: Business logic executes implicitly on DML, making the system harder to reason about. Developers may not realize why certain actions occur.
  • Chained effects: Triggers can call other triggers, causing cascading side effects that are difficult to debug and test.
  • Performance impact: Trigger logic runs inside the transaction. Long-running triggers increase lock durations and contention.
  • Maintainability: Logic spread across triggers and procedures leads to fragmented business rules and operational risk.

For these reasons, many teams prefer stored procedures, constraints, and explicit application-level logic over heavy trigger usage, reserving triggers for focused, unavoidable use cases (e.g., auditing).

Q88:

Explain Window Functions and why they are essential in modern SQL.

Senior

Answer

Window functions (e.g., ROW_NUMBER(), RANK(), SUM() OVER (...)) allow calculations across sets of rows without collapsing them into a single result like GROUP BY does.

Why they are essential:

  • Enable ranking, running totals, moving averages, percentiles, and gap/density analysis in a single pass.
  • Reduce the need for self-joins and correlated subqueries, often resulting in cleaner and faster plans.
  • Can be combined with PARTITION BY and ORDER BY to support rich analytical queries directly in OLTP or reporting databases.
  • Help keep logic set-based and push computation into the database layer where it is highly optimized.

Q89:

Explain PARTITION BY and ORDER BY within Window Functions.

Senior

Answer

Window functions operate over a logical window of rows, defined by PARTITION BY and ORDER BY clauses in the OVER() expression.

PARTITION BY:

  • Divides the result set into groups (partitions) for independent calculations, similar to grouping in analytics.
  • Example: SUM(SalesAmount) OVER (PARTITION BY CustomerId) gives total sales per customer.

ORDER BY:

  • Defines the sequence of rows within each partition.
  • Required for ranking and running calculations like ROW_NUMBER(), cumulative sums, and moving averages.

In summary, PARTITION BY defines the scope and ORDER BY defines the sequence of the window.
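
For example (table and column names are illustrative):

```sql
SELECT
    CustomerId,
    OrderId,
    -- scope: one partition per customer
    SUM(TotalAmount) OVER (PARTITION BY CustomerId) AS CustomerTotal,
    -- sequence: newest order ranked 1 within each customer
    ROW_NUMBER() OVER (PARTITION BY CustomerId
                       ORDER BY OrderDate DESC)     AS RecencyRank
FROM dbo.Orders;
```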

Q90:

What is Table Partitioning? How does SQL Server implement it?

Senior

Answer

Table partitioning horizontally splits a large table into smaller, manageable partitions based on a partition key (often date or range-based IDs). Logically it remains a single table, but physically data is separated.

SQL Server implementation:

  • Partition function: Defines boundary points for key ranges.
  • Partition scheme: Maps partitions to one or more filegroups.
  • The table or index is created ON the partition scheme, distributing rows across partitions based on the key.

Benefits:

  • Maintenance operations (index rebuilds, statistics, archiving) can be targeted at specific partitions.
  • Supports sliding window load/archive patterns via partition SWITCH operations.
  • Can improve query performance via partition elimination, especially for time-sliced workloads.

Partitioning is not automatically faster; it must align with query predicates and maintenance strategies to be effective.
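
A sketch of the three pieces (names and boundary dates are illustrative):

```sql
-- 1. Partition function: monthly boundaries; RANGE RIGHT places each
--    boundary value in the partition to its right.
CREATE PARTITION FUNCTION pf_OrderDate (DATE)
    AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

-- 2. Partition scheme: maps every partition to a filegroup.
CREATE PARTITION SCHEME ps_OrderDate
    AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);

-- 3. Create the table ON the scheme, keyed by the partition column.
CREATE TABLE dbo.OrdersPartitioned
(
    OrderId   INT  NOT NULL,
    OrderDate DATE NOT NULL
) ON ps_OrderDate (OrderDate);
```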

Q91:

Explain Partition Elimination with an example scenario.

Senior

Answer

Partition elimination occurs when SQL Server restricts I/O to only those partitions that might contain relevant rows, instead of scanning all partitions.

Example: A table partitioned by OrderDate by month. A query filtered on OrderDate BETWEEN '2024-01-01' AND '2024-01-31' can read only the January partition if:

  • The filter is directly on the partition key.
  • There are no non-SARGable expressions on the partition column.
  • The data types and collation match exactly.

If functions like CONVERT() or different data types are used on the partition column, partition elimination may fail and all partitions may be scanned, losing the performance benefit.

Q92:

What are the typical causes of poor execution plan performance?

Senior

Answer

Poor execution plans are usually symptoms of deeper issues in schema design, statistics, or query patterns.

Common causes:

  • Missing or inappropriate indexes: Forcing table scans or expensive lookups.
  • Stale or missing statistics: Leading to incorrect row estimates and wrong join strategies.
  • Parameter sniffing: Plan optimized for one parameter value, reused for others with different data distributions.
  • Scalar functions and multi-statement TVFs: Preventing optimization and parallelism.
  • Complex views over views: Obscure actual data access and create bloated plans.
  • Implicit conversions: Causing non-SARGable predicates or index misses.
  • RBAR patterns (cursors, loops): Neglecting set-based approaches.

Effective tuning often involves query simplification, better indexing, and statistics maintenance rather than just tweaking server settings.

Q93:

What is Parameter Sniffing and how do you handle it?

Senior

Answer

Parameter sniffing occurs when SQL Server compiles a plan using the initial parameter values it sees, then reuses that plan for subsequent executions. If data distribution is skewed, one plan may not fit all parameter scenarios.

Symptoms: some executions are lightning fast while others are very slow, even though the procedure and query shape are identical.

Handling strategies:

  • Use OPTION (RECOMPILE) for highly skewed queries where compilation cost is acceptable.
  • Use OPTIMIZE FOR UNKNOWN or OPTIMIZE FOR (@param = ...) hints to choose more robust plans.
  • Assign parameters to local variables inside the procedure, which bypasses sniffing and produces plans based on average column statistics.
  • Split logic into separate procedures for different parameter ranges if patterns are distinct.
  • Use Query Store to force stable plans when appropriate.

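A minimal sketch of the first two strategies, assuming a hypothetical dbo.Orders table and @Status parameter (the names are illustrative, not from a real schema):

```sql
-- Hypothetical procedure showing two common mitigations; in practice
-- you would pick one, not both.
CREATE OR ALTER PROCEDURE dbo.GetOrdersByStatus @Status varchar(20)
AS
BEGIN
    -- Option 1: recompile on every execution, so each parameter value
    -- gets its own plan (accept the compilation cost).
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE Status = @Status
    OPTION (RECOMPILE);

    -- Option 2: ignore the sniffed value and optimize for the
    -- average data distribution instead.
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE Status = @Status
    OPTION (OPTIMIZE FOR UNKNOWN);
END;
```
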
Q94:

Explain Execution Plan Caching and Reuse.

Senior

Answer

SQL Server uses a plan cache to store compiled execution plans so that subsequent executions of the same (or parameterized) queries can skip the compilation phase.

Benefits:

  • Reduces CPU overhead from repeated compilations.
  • Improves response time for frequently executed queries and stored procedures.

Challenges:

  • Poorly parameterized or ad-hoc queries can cause plan cache bloat, with many single-use plans.
  • Parameter sniffing issues stem from plan reuse across different parameter values.
  • Schema changes or statistics updates can invalidate plans, causing recompilation spikes.

Best practice is to use parameterized queries, monitor plan cache size and reuse patterns, and leverage Query Store to manage problematic plans.

Q95:

What is Schema Binding and why is it important?

Senior

Answer

Schema binding associates a view or function tightly with the underlying schema so that the referenced objects cannot be modified in incompatible ways without first changing the bound object.

Importance:

  • Required for indexed views and some computed columns, ensuring structural stability.
  • Prevents accidental changes (e.g., dropping or altering columns) that would silently break dependent logic.
  • Helps enforce contracts between application code, views, and functions.

In high-governance environments, schema binding is a tool to ensure that refactoring is deliberate and fully impact-assessed.
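A hedged example, using a hypothetical dbo.Orders table, of the syntax and the constraints schema binding imposes:

```sql
-- Hypothetical schema-bound view: while it exists, the referenced
-- columns of dbo.Orders cannot be altered or dropped.
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerID,
       COUNT_BIG(*)     AS OrderCount,
       SUM(TotalAmount) AS TotalSpent
FROM dbo.Orders
GROUP BY CustomerID;
```

Note that SCHEMABINDING requires two-part object names, and COUNT_BIG(*) (rather than COUNT(*)) is required if the view is later materialized with a unique clustered index.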

Q96:

Explain the concept of set-based operations vs row-by-row operations.

Senior

Answer

SQL is fundamentally a set-based language: it is designed to operate on collections of rows at once, not one row at a time.

Set-based operations:

  • Leverage the optimizer to choose efficient algorithms (hash joins, merges, batch operations).
  • Use fewer logical and physical reads for bulk operations.
  • Scale more gracefully as data volume grows.

Row-by-row (RBAR) operations:

  • Implement logic via cursors or loops, processing one row at a time.
  • Usually lead to excessive context switching, locking overhead, and long runtimes.
  • Only justified for very complex, inherently procedural business rules.

Senior-level SQL design focuses on transforming requirements into set-based patterns whenever possible, often with window functions, joins, and properly designed queries.

Q97:

What are Cursors and why are they usually discouraged?

Senior

Answer

Cursors are database objects that allow row-by-row traversal of a result set, similar to iterators in procedural languages.

Reasons they are discouraged:

  • They process data one row at a time, leading to poor performance on large sets.
  • They often hold locks for a long time, reducing concurrency.
  • They introduce complex, procedural logic that is harder to maintain and test.
  • They typically have higher memory and tempdb overhead compared to set-based alternatives.

Cursors should be a last resort, used only when set-based solutions are impractical or impossible. In many cases, window functions, MERGE statements, or carefully written set-based updates can replace cursor logic.
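To make the contrast concrete, here is a sketch (hypothetical dbo.Orders table and Status values) of the same update done both ways:

```sql
-- RBAR pattern (discouraged): one round trip per row via a cursor.
DECLARE @Id int;
DECLARE c CURSOR LOCAL FAST_FORWARD FOR
    SELECT OrderID FROM dbo.Orders WHERE Status = 'Pending';
OPEN c;
FETCH NEXT FROM c INTO @Id;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Orders SET Status = 'Processed' WHERE OrderID = @Id;
    FETCH NEXT FROM c INTO @Id;
END;
CLOSE c; DEALLOCATE c;

-- Set-based equivalent: one statement, one optimized plan.
UPDATE dbo.Orders SET Status = 'Processed' WHERE Status = 'Pending';
```
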

Q98:

Explain the difference between Physical Reads, Logical Reads, and Page Life Expectancy.

Senior

Answer

These metrics help diagnose I/O and memory performance in SQL Server.

Logical reads: The number of 8KB pages read from the buffer cache (memory). High logical reads can indicate inefficient queries or missing indexes.

Physical reads: Pages read from disk because they were not found in the buffer cache. Physical I/O is orders of magnitude slower than memory access.

Page Life Expectancy (PLE): An indicator of how long pages stay in the buffer cache before being evicted. A consistently low PLE suggests memory pressure or inefficient queries repeatedly flushing the cache.

Senior engineers use these metrics together to determine whether to focus on query tuning, indexing, or adding memory/storage capacity.

Q99:

What is SARGability and why is it critical for performance?

Senior

Answer

SARGability (Search ARGument-ability) describes whether a predicate can efficiently use an index to seek rows instead of scanning.

SARGable predicates:

  • Simple comparisons like Column = @Value, Column >= @Start AND Column <= @End.
  • Allow the optimizer to perform index seeks and range scans.

Non-SARGable patterns:

  • Applying functions to columns: WHERE LOWER(Col) = 'abc'.
  • Expressions on the left side: WHERE Col + 1 = 10.
  • Implicit conversions that change the column's data type.
  • Leading wildcards: LIKE '%abc'.

Ensuring predicates are SARGable is one of the most impactful techniques in query tuning: it allows indexes to be used effectively, minimizing reads and dramatically improving performance.
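A small sketch (hypothetical dbo.Customers table with an index on CreatedDate) of rewriting a non-SARGable predicate into a SARGable one:

```sql
-- Non-SARGable: the function on the column prevents an index seek.
SELECT CustomerID
FROM dbo.Customers
WHERE YEAR(CreatedDate) = 2024;

-- SARGable rewrite: same logic expressed as a range on the bare
-- column, allowing a seek on an index keyed on CreatedDate.
SELECT CustomerID
FROM dbo.Customers
WHERE CreatedDate >= '2024-01-01' AND CreatedDate < '2025-01-01';
```
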

Q100:

What Is TempDB and Why Is It Critical for SQL Server Performance?

Senior

Answer

TempDB is SQL Server’s global workspace used by all users and internal operations. It is essential because it handles:

  • Sorting operations (ORDER BY, GROUP BY)
  • Hash joins and aggregations
  • Row versioning for snapshot isolation
  • Temporary tables and table variables
  • Intermediate spill data during execution
  • Cursors and internal worktables

If TempDB becomes slow or suffers contention, the entire SQL Server instance slows down. Proper sizing, fast storage, and multiple data files are critical for performance.

Q101:

What Is TempDB Contention and What Causes It?

Senior

Answer

TempDB contention occurs when multiple threads compete for the same allocation pages (PFS, GAM, SGAM) or metadata structures.

Typical causes include:

  • Too few TempDB data files
  • Heavy use of temp tables/table variables
  • Large sorting or hashing operations
  • High row versioning pressure

Fixes: increase the number of data files, keep the files equally sized, optimize the workload, and reduce spills.

Q102:

What Are Data Pages and Index Pages in SQL Server?

Senior

Answer

SQL Server stores all data in 8 KB pages.

  • Data pages store full table rows.
  • Index pages store B-tree navigation structures and pointers.

Understanding pages explains index fragmentation, logical reads, and physical I/O behavior.

Q103:

What Is the Buffer Pool and How Does SQL Server Use It?

Senior

Answer

The buffer pool is SQL Server’s memory area used for caching data and index pages.

  • Logical reads come from memory.
  • Physical reads occur only when data is missing in cache.
  • High buffer reuse improves performance.

If the buffer pool is too small: more disk reads, lower PLE, and slower query execution.

Q104:

What Is Page Life Expectancy (PLE) and What Does It Indicate?

Senior

Answer

Page Life Expectancy (PLE) measures how long pages remain in the buffer pool before eviction.

High PLE = good memory health.
Low PLE = memory pressure, excessive physical reads.

Common causes of low PLE: bad queries, missing indexes, large scans, spills.

Q105:

How Does the SQL Server Transaction Log Work Internally?

Senior

Answer

The transaction log is an append-only structure that records all modifications. SQL Server uses Write-Ahead Logging (WAL):

  • Log records are written to disk first
  • Data pages update later

This ensures atomicity and durability. Log truncation depends on checkpoints and active transactions.

Q106:

What Is the Purpose of Checkpoints in SQL Server?

Senior

Answer

Checkpoints flush dirty pages (modified pages) from memory to disk.

Benefits:

  • Shorter crash recovery time
  • Enables transaction log truncation
  • Reduces buffer pool pressure

Q107:

What Are Dirty Pages and Clean Pages in SQL Server?

Senior

Answer

Dirty pages: modified in buffer pool but not yet persisted to disk.
Clean pages: identical to the disk version.

Checkpoints convert dirty pages to clean pages. Too many dirty pages increase recovery time and degrade performance.

Q108:

What Are Latches and How Do They Differ From Locks?

Senior

Answer

Latches protect internal memory structures and are non-transactional.

Locks protect logical data consistency and last for transaction duration.

Latch waits = engine pressure.
Lock waits = concurrency or blocking issues.

Q109:

What Is Lock Escalation and How Does It Impact Performance?

Senior

Answer

Lock escalation converts many row/page locks into a single table lock to reduce overhead.

Impact:

  • Fewer locks = lower memory use
  • But higher blocking risk

Q110:

What Is the Cardinality Estimator and Why Is It Important?

Senior

Answer

The Cardinality Estimator (CE) predicts row counts used to choose join types, memory grants, and access paths.

Poor estimates → poor execution plans.

CE depends on statistics, data distribution, and query structure.

Q111:

What Are Statistics in SQL Server and How Do They Affect Performance?

Senior

Answer

Statistics describe data distribution that SQL Server uses to estimate row counts.

Bad or outdated stats → misestimation → poor plans → slow queries.

Q112:

What Are Memory Grants and Why Do They Matter?

Senior

Answer

SQL Server allocates memory for sorting, hashing, and aggregations.

Incorrect grants:

  • Too small → spills to TempDB
  • Too large → resource starvation

Q113:

What Is a Spill to TempDB and Why Does It Occur?

Senior

Answer

A spill happens when the memory grant is too small and SQL Server must offload intermediate data to TempDB.

Common causes: inaccurate cardinality estimation, heavy sorts, hash joins, window functions.

Q114:

What Is the SQL Server Query Processor?

Senior

Answer

The Query Processor:

  • Parses T-SQL
  • Binds objects
  • Optimizes using cost-based rules
  • Generates physical plans

Q115:

What Are Logical and Physical Operators in Execution Plans?

Senior

Answer

Logical operators represent conceptual operations (join, project, filter).

Physical operators are real engine implementations (hash join, nested loop, merge join).

Q116:

How Does SQL Server Choose Join Algorithms?

Senior

Answer

Join selection depends on:

  • Row count estimates
  • Sort order
  • Indexes
  • Estimated CPU and I/O cost

Q117:

What Is a Hash Match and Why Is It Heavy on TempDB?

Senior

Answer

A hash match creates an in-memory hash table for joins and aggregates.

If memory is insufficient → spills to TempDB → huge slowdown.

Q118:

What Are Log Buffers and How Do Log Flushes Work?

Senior

Answer

Changes go to an in-memory log buffer. A log flush occurs:

  • On transaction commit
  • When buffer fills
  • On checkpoint

Slow log storage = slow commits.

Q119:

What Is the Difference Between Row Store and Column Store in SQL Server?

Senior

Answer

Row store: row-by-row storage; best for OLTP.

Column store: column-by-column storage; best for analytics, high compression, parallel execution.

Q120:

What Is Query Tuning and Why Is It Necessary in SQL Server?

Expert

Answer

Query tuning is the process of optimizing SQL statements so they consume fewer CPU cycles, memory, I/O, and locking resources. Poorly written queries slow down the entire SQL Server instance by overusing limited system resources.

The goal is predictable, scalable performance by reducing logical reads, avoiding unnecessary work, using optimal joins, minimizing spills, and improving concurrency.

Q121:

What is Query Tuning, and why is it necessary in SQL Server?

Expert

Answer

Query tuning is the process of analyzing how SQL Server executes a query and optimizing it to minimize resource usage. Because SQL Server has finite CPU, memory, I/O, and concurrency capacity, an inefficient query can slow down the entire system. Tuning ensures optimal execution paths, fewer logical reads, reduced data movement, faster joins, and short lock durations. The goal is predictable, scalable performance.

Q122:

What is the difference between Estimated and Actual Execution Plans?

Expert

Answer

The estimated plan shows SQL Server's predicted execution strategy based on statistics before running the query. The actual plan shows what really happened: row counts, spills, memory usage, and operator execution. Estimated plans are safe for production; actual plans reveal real bottlenecks like bad estimates, scans, or sorts. Both are essential for diagnosing performance issues.

Q123:

Why do row estimation errors cause performance issues?

Expert

Answer

SQL Server uses estimated row counts to choose join types, memory grants, and index strategies. If estimates are inaccurate, SQL Server may choose poor plans. Overestimation causes excessive memory allocation, while underestimation causes spills, nested loops, and excessive lookups. Accurate cardinality estimation is fundamental to stable performance.

Q124:

What happens when a query spills to TempDB?

Expert

Answer

A query spills when SQL Server lacks sufficient memory for operations like sort or hash. SQL offloads intermediate results to TempDB, causing heavy disk I/O and slow execution. Frequent spills indicate bad estimates, missing indexes, or insufficient memory. Repeated spills degrade overall SQL Server performance.

Q125:

What is Parameter Sniffing, and how does it affect performance?

Expert

Answer

Parameter sniffing allows SQL Server to reuse cached execution plans based on the first parameter values supplied. If those values are atypical, the plan may be inefficient for later executions. This leads to performance instability. Solutions include OPTION(RECOMPILE), OPTIMIZE FOR UNKNOWN, local variables, or plan forcing when appropriate.

Q126:

Why is choosing the correct JOIN type critical for performance?

Expert

Answer

Join type determines how SQL Server matches rows. Nested loops are ideal for small sets, merge joins require sorted input, and hash joins work on large, unsorted sets. Incorrect join choices cause excessive I/O, CPU load, and slowdowns. SQL chooses join types based on row estimates and indexes, making accurate statistics essential.

Q127:

What causes Hash Match operations, and why can they become bottlenecks?

Expert

Answer

Hash matches occur when SQL Server must build a memory-based hash table for joins or aggregates. They appear when inputs are unsorted or not indexed. They become bottlenecks when memory is insufficient, causing spills to TempDB. Hash operations can be CPU-intensive and degrade concurrency.

Q128:

How do SARGable expressions influence performance?

Expert

Answer

A SARGable expression allows SQL Server to use indexes efficiently. Non-SARGable predicates (functions on columns, mismatched types) force scans, increasing logical reads and slowing queries. SARGability is foundational for scaling queries on large datasets.

Q129:

Why is reducing logical reads more important than reducing elapsed time?

Expert

Answer

Elapsed time varies with workload and server load, but logical reads measure actual data touched. Reducing logical reads consistently reduces CPU, I/O, and cache pressure. Logical reads are the primary, stable metric for performance tuning.

Q130:

What is the importance of covering indexes in performance tuning?

Expert

Answer

A covering index includes all columns a query needs. This eliminates key lookups and reduces I/O. Covering indexes provide dramatic performance improvements in OLTP systems by minimizing data access and improving plan efficiency.
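A sketch of the pattern (hypothetical dbo.Orders table): the filter column goes in the key, the selected columns go in INCLUDE, and the query never has to touch the base table.

```sql
-- Hypothetical covering index for a hot query on CustomerID.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalAmount);

-- Fully covered: satisfied by an index seek with no key lookups.
SELECT OrderDate, TotalAmount
FROM dbo.Orders
WHERE CustomerID = 42;
```
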

Q131:

When does SQL Server choose Index Seek over Index Scan?

Expert

Answer

SQL Server chooses an index seek when predicates match the index key order and are selective. Otherwise, SQL chooses a scan to avoid expensive random lookups. Seek-friendly index design is critical for optimal performance.

Q132:

What causes Key Lookups, and why are they expensive?

Expert

Answer

Key lookups occur when a non-clustered index lacks required columns. SQL must fetch missing columns from the clustered index for each qualifying row. With many rows, this becomes slow and I/O-heavy. Solutions include covering indexes or query redesign.

Q133:

What is the purpose of Statistics in SQL Server performance?

Expert

Answer

Statistics describe column data distribution. SQL Server uses them to estimate row counts and choose efficient plans. Outdated or missing statistics lead to poor estimates and unstable performance. Keeping statistics fresh is essential for reliable query optimization.

Q134:

Why do implicit conversions degrade performance?

Expert

Answer

Implicit conversions prevent index usage by forcing SQL Server to convert values at runtime. This leads to scans instead of seeks, higher CPU usage, and slower joins. Matching data types between columns and parameters is vital.
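A common concrete case, sketched with a hypothetical dbo.Orders table whose OrderNumber column is varchar(20):

```sql
-- The N'...' literal is nvarchar; by data type precedence the varchar
-- column is converted at runtime, which can prevent an index seek.
SELECT OrderID FROM dbo.Orders WHERE OrderNumber = N'SO-1001';

-- Matching the column's declared type keeps the predicate SARGable.
SELECT OrderID FROM dbo.Orders WHERE OrderNumber = 'SO-1001';
```

The same problem appears when application parameters (e.g., nvarchar by default in many client libraries) do not match varchar columns.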

Q135:

How does high fragmentation affect performance?

Expert

Answer

Fragmentation scatters index pages, causing more I/O during seeks and reducing read-ahead efficiency. High fragmentation slows down queries and increases disk activity. Regular index maintenance helps restore performance.

Q136:

Why is TempDB optimization critical for performance tuning?

Expert

Answer

TempDB handles sorts, hashes, temp tables, versioning, and spill operations. Heavy workloads can cause latch contention and I/O bottlenecks. Optimizing TempDB improves concurrency and stabilizes system-wide performance.

Q137:

What are Memory Grants, and why do they impact performance?

Expert

Answer

Memory grants are allocations for joins, sorts, and aggregates. Overestimated grants waste memory; underestimated grants cause spills to TempDB. Accurate row estimation and indexing ensure proper memory usage.

Q138:

How does excessive recompilation affect query performance?

Expert

Answer

Excessive recompilation forces SQL Server to repeatedly rebuild execution plans, increasing CPU consumption and causing unpredictable performance. Causes include volatile schema, unstable statistics, and widely varying parameters.

Q139:

Why do large IN clauses degrade performance?

Expert

Answer

Large IN lists complicate cardinality estimation and increase parsing overhead. SQL Server may misestimate row counts, leading to inefficient join choices. Using temp tables or joins improves accuracy and performance.
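A sketch of the temp-table alternative (hypothetical dbo.Orders table and ID values); unlike an inline IN list, the temp table carries statistics the optimizer can use:

```sql
-- Load the lookup values once, then join instead of using a huge IN list.
CREATE TABLE #TargetIds (OrderID int PRIMARY KEY);
INSERT INTO #TargetIds (OrderID)
VALUES (1001), (1002), (1003);  -- in practice, thousands of values

SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
JOIN #TargetIds AS t
    ON t.OrderID = o.OrderID;
```
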

Q140:

What is the significance of identifying the most expensive operators in execution plans?

Expert

Answer

Most query time is spent in one or two heavy operators like hash joins, sorts, or scans. Identifying these bottlenecks allows focused tuning efforts, resulting in faster optimization and greater performance gains.

Q141:

What are the most common root causes of slow queries in production?

Expert

Answer

Slow production queries usually result from:

  • Missing or poorly designed indexes
  • Incorrect cardinality estimates
  • High fragmentation
  • Outdated statistics
  • Parameter sniffing issues
  • Excessive key lookups
  • TempDB contention or spills
  • Implicit conversions
  • Heavy sorts/hashes causing CPU pressure
  • Blocking or deadlocks

Most performance issues are caused by a combination rather than a single factor.

Q142:

How do you identify which query is slowing down the system?

Expert

Answer

To identify slow queries, examine:

  • sys.dm_exec_query_stats – CPU, I/O, execution time
  • sys.dm_exec_requests – currently running queries
  • sys.dm_tran_locks – blocking analysis
  • Extended Events – deep tracing
  • Query Store – historical regressions

This helps pinpoint which query and which operator is causing server slowdown.
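One common starting point is ranking cached queries by total CPU; a sketch using the DMVs above (the text-extraction expression is the standard offset arithmetic for per-statement text):

```sql
-- Top 10 cached queries by total CPU time.
SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Reordering by total_logical_reads instead surfaces the heaviest I/O consumers.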

Q143:

How do you detect blocking issues and understand who is blocking whom?

Expert

Answer

Blocking is identified using:

  • sys.dm_exec_requests – blocking_session_id
  • sys.dm_os_waiting_tasks – wait chains
  • Activity Monitor – high-level visualization

Blocking chains often originate from a long-running transaction holding critical locks.
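A quick sketch of a "who is blocking whom" check built on sys.dm_exec_requests:

```sql
-- Sessions currently blocked, with the blocker's session ID
-- and the SQL text of the waiting request.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS waiting_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```

Following blocking_session_id recursively reveals the head of the chain — usually the long-running transaction to investigate first.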

Q144:

What is the difference between blocking and deadlocking?

Expert

Answer

Blocking occurs when one transaction holds a lock and others wait behind it.

Deadlock occurs when two transactions wait on each other, creating a cycle.

SQL Server detects deadlocks and kills one transaction (the victim) to resolve the cycle.

Q145:

Which indexing mistakes cause the most issues in high-traffic systems?

Expert

Answer

Common indexing mistakes include:

  • Too many non-clustered indexes
  • Lack of covering indexes for hot queries
  • Poor clustered key choice (wide, GUID, non-sequential)
  • Duplicate or overlapping indexes
  • Not using filtered indexes

These increase I/O, fragmentation, and reduce plan efficiency.

Q146:

Why do large table scans cause severe slowdowns?

Expert

Answer

Large scans:

  • Generate heavy I/O
  • Evict useful pages from buffer pool
  • Increase CPU usage
  • Slow concurrent users

One bad scan can impact dozens of users in a production environment.

Q147:

Which issues cause sudden regression in query performance?

Expert

Answer

Sudden regressions often occur due to:

  • Updated statistics
  • Parameter sniffing plan changes
  • Index changes
  • Fragmentation spikes
  • Failover causing cold cache
  • Data growth affecting plan shapes

Q148:

What is the importance of Query Store in high-traffic systems?

Expert

Answer

Query Store provides:

  • Plan history recording
  • Runtime performance metrics
  • Ability to revert to stable plans
  • Insight into parameter sniffing behavior

It acts as a "black box recorder" for SQL Server.

Q149:

Why should TempDB be placed on fast storage?

Expert

Answer

TempDB handles:

  • Sorts
  • Hash operations
  • Version store
  • Spills
  • Temporary objects

Slow TempDB I/O becomes a bottleneck for the entire SQL Server instance.

Q150:

Why are wide clustered keys a long-term performance problem?

Expert

Answer

The clustered key is included in all non-clustered indexes. Wide keys increase:

  • Storage footprint
  • Read/write cost
  • Fragmentation
  • Memory usage

Narrow, sequential keys are ideal.

Q151:

How do you troubleshoot excessive CPU usage in SQL Server?

Expert

Answer

Troubleshooting steps:

  • Identify high-CPU queries in DMVs
  • Check plans for sorts, hashes, conversions
  • Validate statistics
  • Review index usage
  • Investigate parallelism waits
  • Check server MAXDOP settings

Q152:

What causes excessive memory usage in SQL Server?

Expert

Answer

Causes include:

  • Large memory grants
  • Hash and sort operations
  • Poor cardinality estimates
  • Too many concurrent queries
  • SQL Server caching large data pages

Q153:

Why are CROSS JOINs dangerous in real systems?

Expert

Answer

CROSS JOINs produce Cartesian products. They may:

  • Explode row counts
  • Consume massive CPU
  • Stress memory
  • Cause system freezes

They must be used intentionally and carefully.

Q154:

Which situations require table partitioning?

Expert

Answer

Use partitioning when:

  • Tables contain hundreds of millions of rows
  • Fast archiving is required
  • Hot/cold data separation is needed
  • Maintenance takes too long

Q155:

How does SQL Server handle large deletes efficiently?

Expert

Answer

Large deletes cause heavy logging, lock escalation, and blocking. Solutions:

  • Batch deletes
  • Partition switching
  • Mark-and-archive patterns

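The batching approach can be sketched as follows (hypothetical dbo.OrderArchive table and cutoff date); small transactions release locks frequently and let the log truncate between batches:

```sql
-- Batched delete: each iteration is its own small transaction.
DECLARE @BatchSize int = 5000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM dbo.OrderArchive
    WHERE OrderDate < '2020-01-01';

    -- A partial batch means nothing further qualifies; stop looping.
    IF @@ROWCOUNT < @BatchSize BREAK;
END;
```

Keeping the batch below the lock escalation threshold (around 5,000 locks) also helps avoid table-level locks.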
Q156:

Why do developers use the OUTPUT clause for auditing?

Expert

Answer

The OUTPUT clause returns affected rows without re-querying the table. It is efficient for:

  • Auditing changes
  • Logging
  • Data replication

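A minimal sketch (hypothetical dbo.Orders table and Status values) of capturing deleted rows in the same statement that removes them:

```sql
-- Capture the deleted rows into an audit table variable,
-- without a separate SELECT before the DELETE.
DECLARE @Audit TABLE (OrderID int, Status varchar(20), DeletedAt datetime2);

DELETE FROM dbo.Orders
OUTPUT DELETED.OrderID, DELETED.Status, SYSDATETIME()
INTO @Audit (OrderID, Status, DeletedAt)
WHERE Status = 'Cancelled';

SELECT * FROM @Audit;  -- exactly what was removed, and when
```

The INSERTED and DELETED pseudo-tables work the same way for INSERT, UPDATE, and MERGE statements.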
Q157:

What is the impact of missing foreign keys?

Expert

Answer

Missing foreign keys:

  • Prevent automatic referential integrity
  • Hurt query optimization because relationships are unknown
  • Cause poor join strategy selection

Q158:

Why minimize the number of columns in non-clustered indexes?

Expert

Answer

Extra columns increase:

  • Index size
  • Maintenance overhead
  • Memory usage

Indexes should be narrow and purpose-specific.

Q159:

When should filtered indexes be used?

Expert

Answer

Filtered indexes are ideal when:

  • Queries target selective subsets
  • Columns contain sparse values
  • Large tables need optimized predicate performance

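A sketch of the pattern (hypothetical dbo.Orders table where only a small fraction of rows are 'Pending'):

```sql
-- Hypothetical filtered index: only 'Pending' rows are indexed,
-- so the index stays small even on a very large table.
CREATE NONCLUSTERED INDEX IX_Orders_Pending
ON dbo.Orders (OrderDate)
WHERE Status = 'Pending';

-- Queries whose predicate matches the filter can use the small index.
SELECT OrderID, OrderDate
FROM dbo.Orders
WHERE Status = 'Pending' AND OrderDate < '2024-06-01';
```
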
Q160:

What steps do you take before rewriting a slow query?

Expert

Answer

Before rewriting a query:

  • Check execution plan
  • Validate statistics
  • Check indexes
  • Identify costly operators
  • Try small predicate adjustments
  • Confirm rewrite is necessary

Most problems are fixed by plan adjustments, not rewrites.
