
Expert C# Interview Questions

Curated Expert-level C# interview questions for developers targeting expert positions. 58 questions available.


C# Interview Questions & Answers


Welcome to our comprehensive collection of C# interview questions and answers. This page contains expertly curated interview questions covering all aspects of C#, from fundamental concepts to advanced topics. Whether you're preparing for an entry-level position or a senior role, you'll find questions tailored to your experience level.

Our C# interview questions are designed to help you:

  • Understand core concepts and best practices in C#
  • Prepare for technical interviews at all experience levels
  • Master both theoretical knowledge and practical application
  • Build confidence for your next C# interview

Each question includes detailed answers and explanations to help you understand not just what the answer is, but why it's correct. We cover topics ranging from basic C# concepts to advanced scenarios that you might encounter in senior-level interviews.

Use the filters below to find questions by difficulty level (Entry, Junior, Mid, Senior, Expert) or focus specifically on code challenges. Each question is carefully crafted to reflect real-world interview scenarios you'll encounter at top tech companies, startups, and MNCs.

Questions

Q1:

What is the Common Language Runtime (CLR), and why is it essential for C#?

Expert

Answer

The CLR is the execution engine for .NET applications. It manages memory, performs JIT compilation, enforces type safety, handles exceptions, and manages garbage collection.

It abstracts low-level memory management and provides a secure, consistent runtime environment, enabling cross-language interoperability and automatic memory management.

Quick Summary: The CLR is the execution engine for .NET. It handles JIT compilation (IL to machine code), garbage collection, type safety enforcement, exception handling, and thread management. Every C# program runs on top of the CLR — it is what makes .NET cross-platform and memory-safe.
Q2:

How does the JIT compiler improve runtime performance in C#?

Expert

Answer

The JIT compiler converts IL to machine code just before execution. It applies hardware-specific optimizations such as inlining, dead-code elimination, and CPU-tuned instruction generation.

It may recompile methods dynamically based on runtime execution patterns, making performance adaptive.

Quick Summary: The JIT converts IL to machine code right before execution and applies hardware-specific optimizations like method inlining, dead code elimination, and CPU-tuned instructions. It may recompile hot methods (tiered compilation) with even more aggressive optimizations as the app runs.
Q3:

Why is C# considered a type-safe language?

Expert

Answer

C# enforces strict type-checking, prevents unsafe casts, limits pointer operations, and ensures invalid operations are caught at compile time or enforced by the CLR.

This eliminates memory corruption, improves security, and reduces runtime crashes.

Quick Summary: C# enforces strict type checking at compile time and the CLR enforces it at runtime. You cannot accidentally treat a string as an integer, cast incompatible types without an explicit cast, or corrupt memory like in C. This makes the language predictable and eliminates whole categories of bugs.
Q4:

What is the difference between managed and unmanaged resources?

Expert

Answer

Managed resources are controlled by the CLR (objects, arrays, strings), while unmanaged resources (file handles, sockets, DB connections) must be released manually via IDisposable.

Failure to release unmanaged resources leads to leaks and OS-level exhaustion.

Quick Summary: Managed resources are controlled by the CLR — objects, arrays, strings — the GC cleans them up automatically. Unmanaged resources are outside the CLR — file handles, sockets, DB connections — you must release these manually using IDisposable. Forgetting to do so causes resource leaks.
Q5:

Why does C# support nullable reference types, and what problem do they solve?

Expert

Answer

Nullable reference types help prevent null reference exceptions: the compiler performs flow analysis and warns about unsafe null operations.

Quick Summary: Nullable reference types (C# 8+) make null an explicit opt-in. You declare string? to say "this can be null" and string to mean "this should never be null." The compiler warns you when you dereference something that might be null. This catches null reference exceptions before they happen at runtime.
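A minimal sketch of the opt-in in action, assuming a file with nullable annotations enabled (DescribeLength is an illustrative helper, not a library API):

```csharp
#nullable enable
using System;

// 'string' means never-null; 'string?' means maybe-null.
// The compiler flags risky dereferences of maybe-null values.
static int DescribeLength(string? input)
{
    // Writing input.Length directly here would produce a warning:
    // 'input' may be null.
    if (input is null)
        return 0;

    // After the null check, the compiler knows 'input' is non-null.
    return input.Length;
}

Console.WriteLine(DescribeLength(null));    // 0
Console.WriteLine(DescribeLength("hello")); // 5
```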
Q6:

What is the significance of the volatile keyword?

Expert

Answer

volatile ensures read/write operations always use the most current value and prevents CPU/compiler reordering.

It is essential for correctness in low-level, lock-free synchronization.

Quick Summary: volatile ensures that reads and writes to a field always go directly to memory — the CPU and compiler cannot cache or reorder them. Without it, one thread might read a stale cached value that another thread already updated. Use it for simple shared flags between threads.
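A sketch of the canonical use case, a shared stop flag (the Worker class and its members are illustrative):

```csharp
using System;
using System.Threading;

var worker = new Worker();
var thread = new Thread(worker.Run);
thread.Start();
Thread.Sleep(50);        // let the loop spin briefly
worker.RequestStop();    // the spinning thread observes the flag promptly
thread.Join();
Console.WriteLine("stopped");

class Worker
{
    // 'volatile' guarantees every read sees the latest write and stops
    // the compiler/JIT from hoisting the read out of the loop.
    private volatile bool _stopRequested;

    public void RequestStop() => _stopRequested = true;

    public void Run()
    {
        while (!_stopRequested)
        {
            // busy work; without volatile this loop could spin forever
        }
    }
}
```

Without `volatile`, the JIT is free to read `_stopRequested` once and cache it in a register, so the loop might never see the update.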
Q7:

What is the difference between process memory and managed heap memory?

Expert

Answer

Process memory includes stack, heap, DLL modules, native allocations, and OS buffers.

Managed heap is only the CLR-controlled memory region used for .NET objects and garbage collection.

Quick Summary: Process memory is everything the OS allocates to your app — stack, heap, DLL code, native buffers, OS handles. Managed heap is just the CLR-controlled subset where .NET objects live and the GC operates. All managed heap is part of process memory, but not all process memory is managed heap.
Q8:

How does string immutability help performance and safety?

Expert

Answer

String immutability enables thread safety, prevents accidental mutation, and supports optimizations like interning.

Quick Summary: Because strings are immutable, they are inherently thread-safe — no locks needed when reading across threads. The runtime can also intern string literals (reuse the same object for equal strings) to save memory. The downside: every modification creates a new string, so use StringBuilder for heavy concatenation.
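A short illustration of immutability, interning, and the StringBuilder workaround:

```csharp
using System;
using System.Text;

// StringBuilder mutates one internal buffer; '+' on strings would
// allocate a brand-new string on every iteration.
var sb = new StringBuilder();
for (int i = 0; i < 5; i++)
    sb.Append(i).Append(',');

string result = sb.ToString().TrimEnd(',');
Console.WriteLine(result); // 0,1,2,3,4

// Immutability in action: Replace returns a NEW string and leaves
// the original untouched.
string original = "immutable";
string changed = original.Replace('i', 'I');
Console.WriteLine(original); // immutable
Console.WriteLine(changed);  // Immutable

// Interning: equal string literals share one object.
Console.WriteLine(ReferenceEquals("abc", "abc")); // True
```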
Q9:

What is the role of metadata in .NET assemblies?

Expert

Answer

Metadata contains structured information about types, members, attributes, and versioning.

It powers reflection, type loading, serialization, security, and tooling support.

Quick Summary: Metadata in a .NET assembly describes everything about your types — class names, method signatures, parameter types, attributes, generic constraints. The CLR uses it for JIT compilation, type loading, reflection, and serialization. It is what makes .NET assemblies self-describing without needing header files.
Q10:

Why is boxing considered an expensive operation?

Expert

Answer

Boxing allocates heap memory, copies value types into objects, and causes GC overhead.

Unboxing additionally requires type checking, making both operations costly.

Quick Summary: Boxing allocates a new object on the heap and copies the value into it — that is a heap allocation plus GC tracking overhead. Unboxing requires a type check plus a copy back out. In a tight loop or large collection, thousands of box/unbox operations visibly degrade performance. Use generics to avoid it.
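A small demonstration of boxing and unboxing in code (ArrayList is shown purely for contrast with the generic List):

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

int value = 42;

// Boxing: the int is copied into a freshly allocated heap object.
object boxed = value;

// Unboxing: a runtime type check plus a copy back out.
int unboxed = (int)boxed;
Console.WriteLine(unboxed); // 42

// Unboxing to the wrong type throws InvalidCastException:
// long wrong = (long)boxed;   // fails at runtime

// Non-generic collections box every value-type element...
var legacy = new ArrayList();
legacy.Add(1); // boxed

// ...while List<int> stores the ints inline, with no boxing at all.
var modern = new List<int> { 1, 2, 3 };
Console.WriteLine(modern[0]); // 1
```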
Q11:

What is the purpose of the weak reference concept?

Expert

Answer

Weak references allow referencing an object without preventing GC from collecting it.

They are used in caching and memory-sensitive scenarios.

Quick Summary: A weak reference lets you reference an object without preventing the GC from collecting it. If memory is low, the GC can reclaim the object even though you hold a weak reference. Useful for caches where you want to keep data if possible but not force it to stay alive.
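A sketch using WeakReference<T>; note that whether the target survives a given collection is up to the GC, so the final outcome is deliberately not asserted:

```csharp
using System;

// Hold the data only weakly: the GC is free to reclaim it.
byte[]? data = new byte[1024];
var weak = new WeakReference<byte[]>(data);

// While the strong reference ('data') exists, the target is alive.
bool aliveBefore = weak.TryGetTarget(out _);
Console.WriteLine($"alive before: {aliveBefore}"); // True

// Drop the strong reference; after a collection the target MAY be gone.
// (Exactly when it disappears is up to the GC; never rely on timing.)
data = null;
GC.Collect();

Console.WriteLine(weak.TryGetTarget(out _)
    ? "survived this collection"
    : "collected");
```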
Q12:

What is reflection, and what are its risks?

Expert

Answer

Reflection enables runtime inspection and invocation of types but is slower, bypasses compile-time type checks, exposes internals, and increases security risks.

Quick Summary: Reflection lets you inspect types, methods, and properties at runtime and invoke them dynamically. The risks: it is significantly slower than direct calls, it bypasses compile-time type safety, it can expose private members, and it makes code harder to refactor and analyze statically.
Q13:

How does garbage collection deal with circular references?

Expert

Answer

The .NET GC uses reachability analysis, not reference counting, so circular references pose no issue: any objects unreachable from GC roots are collected, cycles included.

Quick Summary: The .NET GC uses reachability analysis, not reference counting. It traces from GC roots (static fields, stack variables, CPU registers) and marks everything reachable. Circular references are not a problem — if neither object in the cycle is reachable from a root, both get collected.
Q14:

What is the purpose of the memory barrier in multithreading?

Expert

Answer

Memory barriers prevent reordering and enforce visibility guarantees across threads. They are essential in lock-free algorithms.

Quick Summary: A memory barrier prevents the CPU and compiler from reordering instructions across it. Without barriers, one thread might write data in a different order than you expect, and another thread reads it in that wrong order. The lock keyword implicitly adds full memory barriers on entry and exit.
Q15:

Why does C# support attributes, and how do they enhance program design?

Expert

Answer

Attributes add metadata used by frameworks for configuration, validation, serialization, and behavioral modification.

Quick Summary: Attributes let frameworks drive behavior through metadata instead of hardcoded logic. [Required] triggers validation, [HttpGet] configures routing, [JsonIgnore] controls serialization — all without touching the framework's source code. They separate configuration from logic cleanly.
Q16:

What is the difference between a value type stored on the stack vs. boxed value on the heap?

Expert

Answer

Stack value types allocate fast with no GC overhead.

Boxed values allocate on the heap, increasing memory usage and GC costs.

Quick Summary: A stack value type is allocated in the method's stack frame — no heap allocation, no GC tracking, zero overhead when the method returns. A boxed value lives on the heap — it needs a GC header, is tracked by the collector, and adds allocation pressure. Stack allocation is always cheaper.
Q17:

What is type erasure, and does C# use it?

Expert

Answer

Type erasure removes generic type information at compile time (Java). C# retains generic metadata at runtime, improving performance and type safety.

Quick Summary: Type erasure (used in Java) removes generic type info at compile time — at runtime, List<String> and List<Integer> are the same raw List. C# does the opposite — it retains generic type info at runtime (reified generics). This means better performance (no boxing for value types), accurate reflection, and safer runtime behavior.
Q18:

What is the purpose of AppDomain in .NET?

Expert

Answer

AppDomains provided isolation and sandboxing in .NET Framework. .NET Core replaces them with AssemblyLoadContext.

Quick Summary: In .NET Framework, AppDomains provided isolation between parts of an application in the same process — you could load and unload assemblies independently. In .NET Core they were removed because they were too complex and slow. AssemblyLoadContext is the modern replacement for dynamic assembly loading.
Q19:

Why is exception handling considered expensive, and how should it be used?

Expert

Answer

Exceptions trigger stack unwinding, metadata discovery, and context capturing, making them significantly slower than normal execution.

They should be used only for true exceptional situations—not control flow.

Quick Summary: Exceptions trigger stack unwinding, handler lookup, and context capture — all expensive operations compared to normal code execution. Use them only for genuinely exceptional situations — not for control flow like checking if a key exists in a dictionary. For expected failures, use return values or TryXxx patterns.
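A sketch contrasting the two styles with Dictionary lookups and int.TryParse:

```csharp
using System;
using System.Collections.Generic;

var ages = new Dictionary<string, int> { ["ada"] = 36 };

// Anti-pattern: using an exception for an expected miss.
try
{
    int age = ages["grace"];   // throws KeyNotFoundException
    Console.WriteLine(age);
}
catch (KeyNotFoundException)
{
    Console.WriteLine("missing (expensive path)");
}

// Preferred: the TryXxx pattern reports failure via a return value.
if (ages.TryGetValue("grace", out int found))
    Console.WriteLine(found);
else
    Console.WriteLine("missing (cheap path)");

// Same idea for parsing:
bool ok = int.TryParse("123", out int parsed);
Console.WriteLine($"{ok} {parsed}"); // True 123
```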
Q20:

What is a delegate in C#, and why is it an essential part of the language?

Expert

Answer

A delegate is a type-safe reference to a method. It defines a method signature and allows methods to be stored and invoked dynamically.

Delegates enable callbacks, event-driven programming, extensibility, and loose coupling by allowing execution of unknown methods at runtime.

Quick Summary: A delegate is a type-safe reference to a method — it stores a method and lets you call it indirectly. It enables callbacks, events, and passing behavior as a parameter. Delegates are the foundation of events, LINQ, and the entire functional-style programming model in C#.
Q21:

What are multicast delegates, and where are they useful?

Expert

Answer

Multicast delegates can reference multiple methods. When invoked, they call each method sequentially.

They are used for audit logging, notifications, UI event pipelines, and scenarios requiring multiple actions on a single trigger.

Quick Summary: A multicast delegate holds a list of methods and calls them all in sequence when invoked. Use += to add a method and -= to remove it. Events are built on this. Useful for audit logging, notification pipelines, or any scenario where multiple subscribers need to react to one trigger.
Q22:

What is the purpose of built-in delegate types like Action and Func?

Expert

Answer

Action represents methods that return void, while Func represents methods that return a value.

They reduce the need for custom delegate declarations and simplify modern C# programming.

Quick Summary: Action is a built-in delegate for methods that return void. Func is for methods that return a value — the last type parameter is the return type. They eliminate the need to define custom delegate types for most scenarios and work seamlessly with lambdas and LINQ.
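A few illustrative declarations showing the shapes of Action and Func:

```csharp
using System;

// Func<int, int, int>: two int parameters, int return (last type arg).
Func<int, int, int> add = (a, b) => a + b;

// Action<string>: one string parameter, returns void.
Action<string> log = message => Console.WriteLine($"LOG: {message}");

// Func<int, bool> is exactly the shape LINQ's Where expects.
Func<int, bool> isEven = n => n % 2 == 0;

Console.WriteLine(add(2, 3)); // 5
log("started");               // LOG: started
Console.WriteLine(isEven(4)); // True
```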
Q23:

How do events build on top of delegates in C#?

Expert

Answer

Events wrap delegates to enforce encapsulation. Only the declaring class can invoke the event, while external components may subscribe or unsubscribe.

Events implement the publisher–subscriber pattern for decoupled communication.

Quick Summary: Events wrap a delegate so only the declaring class can invoke it — external code can only subscribe or unsubscribe. This protects the invocation list from being replaced or triggered by outsiders. Events implement the publisher-subscriber pattern for decoupled component communication.
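A minimal sketch of the encapsulation (the Button class and its members are illustrative):

```csharp
using System;

var button = new Button();
int calls = 0;

// Outsiders can only subscribe (+=) or unsubscribe (-=).
button.Clicked += (s, e) => Console.WriteLine("handler A");
button.Clicked += (s, e) => calls++;

// button.Clicked(button, EventArgs.Empty);  // compile error:
// only the declaring class may raise the event.

button.SimulateClick(); // runs handler A, then the counter handler

Console.WriteLine($"counter handler ran {calls} time(s)");

class Button
{
    // The event wraps a private delegate field; invocation stays internal.
    public event EventHandler? Clicked;

    public void SimulateClick() => Clicked?.Invoke(this, EventArgs.Empty);
}
```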
Q24:

Why does C# use the publisher–subscriber model for events?

Expert

Answer

The publisher–subscriber model allows components to react to changes without tight coupling.

Publishers broadcast notifications without knowing subscribers, improving modularity and scalability.

Quick Summary: In the pub-sub model, publishers raise events without knowing who is listening. Subscribers register and react independently. This removes direct dependencies between components — you can add or remove subscribers without changing the publisher. Great for UI events, domain events, and plugin systems.
Q25:

What is the significance of async/await in modern C# programming?

Expert

Answer

async/await enables writing asynchronous code in a synchronous style, avoiding callback complexity.

It improves scalability by freeing threads during I/O waits.

Quick Summary: async/await lets you write non-blocking code that reads like normal sequential code. Without it, async operations required callbacks or manual thread management. With it, the compiler generates a state machine that suspends and resumes execution automatically. Crucial for scalable web APIs and responsive UIs.
Q26:

How does the synchronization context affect async/await behavior?

Expert

Answer

The synchronization context determines where continuation runs after an await.

UI apps return to the UI thread; ASP.NET Core uses a context-free model to improve throughput.

Quick Summary: The synchronization context captures "where" code should continue after an await. In WPF/WinForms, continuations run back on the UI thread. In ASP.NET Core, there is no synchronization context — continuations run on any thread pool thread. Add ConfigureAwait(false) in libraries to avoid capturing the context unnecessarily.
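A minimal sketch of the pattern (FetchGreetingAsync is an illustrative stand-in for real I/O):

```csharp
using System;
using System.Threading.Tasks;

// The compiler turns this method into a state machine; the thread is
// free during the Delay. ConfigureAwait(false), the library-code habit,
// says "no need to resume on the captured context."
static async Task<string> FetchGreetingAsync()
{
    await Task.Delay(100).ConfigureAwait(false); // simulated I/O wait
    return "hello from async";
}

string result = await FetchGreetingAsync();
Console.WriteLine(result); // hello from async
```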
Q27:

Why are thread pools preferred over manually created threads?

Expert

Answer

Thread pools reuse threads, reducing creation overhead and improving performance.

They optimize concurrency through heuristics and balancing strategies.

Quick Summary: Creating a thread is expensive — it allocates a stack, registers with the OS, and takes time. Thread pools reuse a fixed set of threads for many short tasks. They also auto-tune the number of threads based on workload. For async I/O-bound work, the pool is efficient; for long-running CPU work, use dedicated threads.
Q28:

What problems occur when using shared data across multiple threads?

Expert

Answer

Shared data introduces race conditions, stale reads, memory-order issues, and partial updates.

Proper synchronization is required to ensure correctness.

Quick Summary: Shared mutable data across threads leads to race conditions (two threads modify at the same time), stale reads (one thread caches an old value), and partial updates (a write is only halfway done when another thread reads). Always protect shared state with locks, atomic operations, or immutable data.
Q29:

What is the purpose of a mutex or semaphore in multi-threading?

Expert

Answer

Mutex allows exclusive access to a resource; Semaphore limits concurrent access.

Both prevent race conditions but may cause contention when misused.

Quick Summary: Mutex allows only one thread (or process) at a time to access a resource — useful for cross-process locking. Semaphore limits concurrent access to N threads at once — useful for rate-limiting or connection pools. Both block waiting threads; overusing them causes contention and performance issues.
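A sketch using SemaphoreSlim, the async-friendly in-process semaphore, to cap concurrency at two (the worker bodies simulate real work):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Allow at most 2 workers inside the critical section at once, the
// same pattern used to rate-limit calls to a downstream service.
var gate = new SemaphoreSlim(initialCount: 2);

var tasks = new Task[5];
for (int i = 0; i < tasks.Length; i++)
{
    int id = i; // capture a stable copy for the lambda
    tasks[i] = Task.Run(async () =>
    {
        await gate.WaitAsync();      // waits if 2 are already inside
        try
        {
            Console.WriteLine($"worker {id} entered");
            await Task.Delay(100);   // simulated work
        }
        finally
        {
            gate.Release();          // always release, even on failure
        }
    });
}

await Task.WhenAll(tasks);
Console.WriteLine("all done");
```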
Q30:

What are lock-free operations, and why are they important?

Expert

Answer

Lock-free operations rely on atomic CPU instructions instead of locks.

They improve scalability and reduce latency in high-concurrency environments.

Quick Summary: Lock-free operations use atomic CPU instructions (compare-and-swap) instead of locks — no thread ever blocks waiting. This improves scalability in high-concurrency scenarios where lock contention would be a bottleneck. The Interlocked class in C# provides lock-free atomic operations.
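A sketch contrasting a racy increment with Interlocked.Increment:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

int unsafeCount = 0;
int safeCount = 0;

// 100 tasks each increment both counters 1,000 times.
var tasks = new Task[100];
for (int i = 0; i < tasks.Length; i++)
{
    tasks[i] = Task.Run(() =>
    {
        for (int j = 0; j < 1000; j++)
        {
            unsafeCount++;                        // read-modify-write race
            Interlocked.Increment(ref safeCount); // single atomic instruction
        }
    });
}
Task.WaitAll(tasks);

Console.WriteLine($"unsafe: {unsafeCount}"); // frequently below 100000
Console.WriteLine($"safe:   {safeCount}");   // always 100000
```

`x++` is really three steps (read, add, write), so two threads can interleave and lose an update; `Interlocked.Increment` collapses it into one atomic CPU instruction.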
Q31:

What are generics in C#, and why are they critical for type safety and performance?

Expert

Answer

Generics provide compile-time type checking, eliminate unsafe casts, and avoid boxing for value types.

They enhance performance and enable reusable APIs.

Quick Summary: Generics provide compile-time type checking — catch type errors before you ship. They eliminate boxing for value types (huge performance win for collections), remove unnecessary casts, and produce reusable code that works for any type. The backbone of the entire .NET collection library.
Q32:

What are generic constraints, and when should they be used?

Expert

Answer

Generic constraints restrict the types permitted in generics, enabling safer and more expressive generic code.

They allow accessing members safely based on constraints.

Quick Summary: Generic constraints (where T : ...) restrict what types can be used as T. Common constraints: class (reference type), struct (value type), new() (has parameterless constructor), or a specific interface. They let you safely call members on T that the constraint guarantees exist.
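A minimal sketch (IDescribable and Widget are illustrative types):

```csharp
using System;

Console.WriteLine(CreateAndDescribe<Widget>()); // Widget: ready

// 'where T : IDescribable, new()' guarantees a parameterless
// constructor AND a Describe() method, so both uses below compile.
static string CreateAndDescribe<T>() where T : IDescribable, new()
{
    T item = new T();       // legal only because of new()
    return item.Describe(); // legal only because of the interface
}

interface IDescribable
{
    string Describe();
}

class Widget : IDescribable
{
    public string Describe() => "Widget: ready";
}
```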
Q33:

What is covariance and contravariance in C# generics?

Expert

Answer

Covariance allows using derived types where base types are expected. Contravariance allows the opposite.

They enable flexible delegate and interface usage.

Quick Summary: Covariance (out T, as on IEnumerable<out T>) lets you use a more derived type where a base type is expected — safe for read-only scenarios. Contravariance (in T, as on Action<in T>) lets you use a more base type — safe for write-only scenarios. Applies to interfaces and delegates, not classes.
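Both rules in a short sketch:

```csharp
using System;
using System.Collections.Generic;

// Covariance: IEnumerable<out T>. A sequence of strings can stand in
// for a sequence of objects because callers only READ from it.
IEnumerable<string> strings = new List<string> { "a", "b" };
IEnumerable<object> objects = strings;        // legal, covariant
Console.WriteLine(string.Join(",", objects)); // a,b

// Contravariance: Action<in T>. A handler for any object can stand in
// for a handler of strings because callers only PASS values in.
Action<object> logAnything = o => Console.WriteLine($"got {o}");
Action<string> logString = logAnything;       // legal, contravariant
logString("hello");                           // got hello

// The reverse assignments do not compile:
// IEnumerable<string> ss = objects;  // error
// Action<object> any = logString;    // error
```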
Q34:

How does the List<T> data structure work internally in C#?

Expert

Answer

List<T> uses a dynamically resizing array. When full, it allocates a larger array and copies the elements over.

It provides fast indexing but costly mid-list insertions.

Quick Summary: List<T> internally uses an array. When it runs out of space, it allocates a new array twice the size and copies everything over. This gives O(1) amortized Add() and O(1) indexed access, but O(n) insertion or removal in the middle. It is the go-to collection for most scenarios.
Q35:

Why is Dictionary<TKey, TValue> optimized for fast lookups, and how does hashing play a role?

Expert

Answer

Dictionary<TKey, TValue> uses a hash table. The key's hash determines its bucket, giving near O(1) lookups.

Good hashing minimizes collisions, improving performance.

Quick Summary: Dictionary<TKey, TValue> uses a hash table — it computes a hash of the key, finds the right bucket, and stores the value there. This gives near O(1) lookups on average. When many keys hash to the same bucket (collisions), performance degrades. A good GetHashCode() implementation keeps collisions minimal.
Q36:

What causes collisions in hash-based collections, and how are they handled?

Expert

Answer

Collisions occur when two keys map to the same bucket. They are handled via chaining or probing.

Good hash code implementation minimizes collisions.

Quick Summary: Collisions happen when two different keys produce the same hash bucket. .NET's Dictionary<TKey, TValue> handles this with chaining — multiple entries in the same bucket form a linked chain. Too many collisions slow lookups from O(1) toward O(n). Implement GetHashCode() to spread values evenly across buckets.
Q37:

What is the purpose of the load factor in hash-based collections?

Expert

Answer

The load factor measures how full a hash table is. When its threshold is exceeded, the table resizes and rehashes to maintain performance.

Quick Summary: The load factor is the ratio of entries to buckets. When it exceeds a threshold (0.72 in the classic non-generic Hashtable; Dictionary<TKey, TValue> grows once its bucket array fills), the collection allocates a larger array and rehashes every entry. This keeps lookup performance near O(1). Pre-size your dictionary with an initial capacity if you know the count.
Q38:

Why is performance tuning important when selecting between List<T>, Queue<T>, Stack<T>, LinkedList<T>, and Dictionary<TKey, TValue>?

Expert

Answer

Each data structure is optimized for specific operations. Incorrect choice leads to wasted CPU cycles and memory overhead.

Understanding internal behavior is essential for building high-performance systems.

Quick Summary: List<T> for sequential access with fast indexing. Dictionary<TKey, TValue> for key-based O(1) lookups. Queue<T>/Stack<T> for FIFO/LIFO access patterns. LinkedList<T> for frequent mid-list insertions. Choosing the wrong one — like using List<T>.Contains() instead of a HashSet<T> — can turn O(1) into O(n) and tank performance at scale.
Q39:

What happens inside the CLR from compilation to execution? Explain the full pipeline.

Expert

Answer

The CLR pipeline includes:

  • C# code parsed by Roslyn into IL + metadata.
  • IL stored in assemblies (.dll/.exe).
  • CLR loads assemblies via Assembly Loader and verifies IL.
  • JIT compiles IL to native machine code using RyuJIT and tiered compilation.
  • Execution occurs under GC, exception handling, and type-safety rules.
  • Supports JIT, AOT, ReadyToRun, and NativeAOT execution models.
Quick Summary: Your C# code is compiled by Roslyn into IL (Intermediate Language) stored in an assembly. At runtime, the CLR loads the assembly, verifies the IL, then JIT-compiles it to native machine code on first execution. From then on, the native code runs directly. GC, exception handling, and type safety are managed throughout.
Q40:

Explain the .NET Memory Model and how it affects multi-threading.

Expert

Answer

The .NET Memory Model defines visibility and ordering guarantees:

  • Reordering may occur unless prevented by barriers.
  • volatile ensures visibility but not atomicity.
  • lock, Monitor, Interlocked insert full memory fences.
  • Handles cache coherency, tearing, and ABA issues.
  • More relaxed than Java regarding visibility rules.
Quick Summary: The .NET memory model defines what ordering guarantees threads have. Without synchronization, the CPU or compiler may reorder reads and writes. volatile ensures visibility (no caching), lock adds full memory barriers, and Interlocked provides atomic operations. Writing lock-free code correctly requires understanding all three.
Q41:

How does the CLR allocate memory for value types vs reference types internally?

Expert

Answer

Value types are stored inline on the stack or inside objects. Reference types reside on the managed heap.

Boxing allocates new objects with headers and method table pointers. Inline struct fields avoid indirection and improve performance.

Quick Summary: Value types are stored inline — on the stack for locals, or embedded directly in the containing object on the heap. Reference types always go on the managed heap with a header and method table pointer. Boxing creates a heap-allocated wrapper around a value type, which is why it costs a heap allocation.
Q42:

What is the internal layout of a .NET object?

Expert

Answer

A .NET object contains:

  • Object Header: Method Table pointer + Sync Block index.
  • Aligned fields with padding.
  • Method Table with hierarchy, vtable, and GC info.
Quick Summary: Every .NET object on the heap has a header containing a Method Table pointer (links to the type's vtable and metadata) and a Sync Block index (used for lock and GetHashCode). After the header come the object's fields, aligned for the platform. Understanding this explains why small objects still have memory overhead.
Q43:

Explain all GC generations and what makes Gen2 and LOH behave differently.

Expert

Answer

Generations:

  • Gen0: short-lived objects.
  • Gen1: buffer generation.
  • Gen2: long-lived objects; full GC expensive.
  • LOH: >= 85KB, collected only in full GC.
  • POH introduced for pinned objects.
  • GC modes: Server, Workstation, Concurrent.
Quick Summary: Gen0 collects frequently and cheaply — most objects die here. Gen1 is a buffer between Gen0 and Gen2. Gen2 collects rarely but is expensive — it scans the full heap. The Large Object Heap (LOH, objects >= 85KB) is collected only during full Gen2 GCs and is not compacted by default, leading to fragmentation.
Q44:

What is Tiered Compilation in .NET and why is it important?

Expert

Answer

Tier 0 emits fast, minimally optimized code. Tier 1 recompiles hot methods with optimizations.

Tiered PGO improves performance for long-running APIs.

Quick Summary: Tiered compilation runs methods at Tier 0 first with minimal optimization for fast startup. As methods get called repeatedly (hot paths), the JIT recompiles them at Tier 1 with full optimizations. With PGO (Profile-Guided Optimization), it uses actual runtime data to make even smarter optimization decisions.
Q45:

Explain the difference between lock, Monitor, Mutex, Semaphore, SpinLock, and ReaderWriterLockSlim.

Expert

Answer

Differences:

  • lock/Monitor: lightweight, thread-level.
  • Mutex: cross-process synchronization.
  • Semaphore: limits concurrent entries.
  • SpinLock: busy-wait for low contention.
  • ReaderWriterLockSlim: optimized for read-heavy workloads.
Quick Summary: lock and Monitor: lightweight, process-local, same thread only. Mutex: cross-process, heavier, supports named mutexes. Semaphore: limits N concurrent threads. SpinLock: avoids context switch for very short critical sections. ReaderWriterLockSlim: allows many concurrent readers but exclusive write access.
Q46:

How does async/await work internally at compile-time and runtime?

Expert

Answer

The compiler rewrites async methods into state machines (structs or classes).

Await posts continuations to the captured SynchronizationContext or the thread pool.

In library code, avoid capturing the context (via ConfigureAwait(false)) for performance.

Quick Summary: The compiler transforms an async method into a state machine struct. Each await point becomes a state. When awaited work completes, the continuation is posted to the SynchronizationContext or thread pool. No thread is blocked during the wait — the thread is returned to the pool and picked up again when needed.
Q47:

Explain how string interning works in .NET and when it becomes harmful.

Expert

Answer

Intern pool stores unique string instances for process lifetime.

Benefits: deduplication of literals. Harmful: unbounded memory use if interning user input.

Quick Summary: String interning stores one shared instance of each unique string literal in a process-wide pool. Two equal string literals share the same object in memory. This saves memory for repeated strings. The danger: interning user input or dynamic strings fills the pool permanently, causing memory growth that the GC cannot reclaim.
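A short demonstration (the runtime-built string is constructed from a char array to guarantee it starts out un-interned):

```csharp
using System;

// Literals are interned automatically: both variables reference the
// SAME object in the process-wide intern pool.
string a = "hello";
string b = "hello";
Console.WriteLine(ReferenceEquals(a, b)); // True

// Runtime-built strings are NOT interned by default.
string c = new string(new[] { 'h', 'e', 'l', 'l', 'o' });
Console.WriteLine(c == a);                // True  (equal value)
Console.WriteLine(ReferenceEquals(c, a)); // False (distinct object)

// Explicit interning returns the pooled instance; pooled strings live
// for the rest of the process, so never intern unbounded user input.
string d = string.Intern(c);
Console.WriteLine(ReferenceEquals(d, a)); // True
```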
Q48:

What is the difference between Reflection, Reflection.Emit, Expression Trees, and Source Generators?

Expert

Answer

  • Reflection: slow metadata inspection.
  • Reflection.Emit: runtime IL generation.
  • Expression Trees: build code graphs; compile to delegates.
  • Source Generators: compile-time code generation.
Quick Summary: Reflection: slow runtime inspection, good for general-purpose tools. Reflection.Emit: generates IL at runtime, used in ORMs and proxies. Expression Trees: build code as data structures, compile to delegates, used in LINQ providers. Source Generators: generate code at compile time — zero runtime cost, the modern preferred approach.
Q49:

How does the .NET ThreadPool actually schedule work?

Expert

Answer

Uses global + per-thread queues, work-stealing, and hill-climbing algorithm to adjust worker count.

Long-running work should use TaskCreationOptions.LongRunning.

Quick Summary: The ThreadPool maintains a global queue and per-thread local queues. Threads steal work from each other when idle. A hill-climbing algorithm adjusts the thread count based on throughput measurements. For long-running work (> 0.5s), use TaskCreationOptions.LongRunning to get a dedicated thread instead.
Q50:

Explain the different kinds of delegates and how they are represented internally.

Expert

Answer

Delegates can be:

  • Open/closed static
  • Open/closed instance
  • Multicast (invocation list)

Captured variables create hidden closure classes.

Quick Summary: A delegate internally stores a method pointer and a target object. Static method delegates have no target. When you capture variables in a lambda, the compiler generates a hidden closure class to hold them. Multicast delegates maintain an invocation list array — invoking iterates and calls each entry.
Q51:

What is the role of Roslyn in compilation and diagnostics?

Expert

Answer

Roslyn provides AST, syntax trees, semantic models, analyzers, and source generation. It is a compiler-as-a-service.

Quick Summary: Roslyn is the C# and VB compiler exposed as an API (compiler-as-a-service). It provides syntax trees, semantic models, and diagnostics that tools like Visual Studio, analyzers, and code fixers use. Source generators hook into Roslyn to generate code at compile time based on your existing code.
Q52:

Explain covariance and contravariance in delegates and generics in detail.

Expert

Answer

out marks covariance; in marks contravariance.

They allow safe substitutability. Related topics include delegate variance, IEnumerable<T> variance, and the pitfalls of covariant arrays.

Quick Summary: Covariance (out T) on interfaces like IEnumerable<out T> means IEnumerable<string> is assignable to IEnumerable<object>. Contravariance (in T) on delegates like Action<in T> means Action<object> is assignable to Action<string>. These rules apply only to interfaces and delegates — not to concrete generic classes.
Q53:

What are AppDomains and why are they removed from .NET Core?

Expert

Answer

AppDomains provided isolation but were heavy and difficult to unload.

.NET Core replaced them with AssemblyLoadContext.

Quick Summary: AppDomains in .NET Framework let you load and unload assemblies independently within one process. They were removed in .NET Core because they were expensive, rarely used correctly, and did not work cross-platform. AssemblyLoadContext is the lightweight replacement for plugin and isolated assembly loading scenarios.
Q54:

How does IL differ from CIL and MSIL?

Expert

Answer

IL, CIL, and MSIL are the same thing: platform-independent intermediate language.

Quick Summary: IL, CIL (Common Intermediate Language), and MSIL (Microsoft Intermediate Language) all refer to the same thing — the platform-independent bytecode that C# compiles to. CIL is the official ECMA standard name. MSIL is the old Microsoft name. IL is the common shorthand everyone uses.
Q55:

Explain the full exception handling flow in CLR.

Expert

Answer

Flow:

  • Throw triggers stack unwinding.
  • Search for handlers.
  • First-chance and second-chance exceptions.
  • finally, fault blocks executed.
  • Managed/unmanaged transitions handled by the runtime.
Quick Summary: When an exception is thrown, the CLR unwinds the call stack frame by frame, searching for a matching catch handler. First-chance exceptions are raised before any handler runs (useful for debuggers). Finally and fault blocks always run during unwinding. Unhandled exceptions terminate the process.
Q56:

What is the difference between strong naming, signing, and assembly identity?

Expert

Answer

Strong naming gives a unique identity but not security.

Authenticode signing verifies publisher identity.

Assembly identity affects binding and version resolution.

Quick Summary: Strong naming gives an assembly a unique identity via a public/private key pair — prevents two assemblies with the same name from conflicting. It does not verify the publisher's identity. Authenticode signing (via a certificate) proves who published the assembly. Both can coexist on the same assembly.
Q57:

What is metadata in assemblies and how does CLR use it?

Expert

Answer

Metadata includes type, method, field tables, attributes, and generic constraints.

CLR uses it for JIT compilation, verification, reflection, and type loading.

Quick Summary: Assembly metadata contains tables describing every type, method, field, parameter, and attribute in the assembly. The CLR reads this during type loading to understand what to JIT-compile, how to lay out objects in memory, and what to expose through reflection. It makes assemblies completely self-describing.
Q58:

What are the key performance traps in C# that senior engineers should avoid?

Expert

Answer

  • Excess allocations
  • Boxing
  • Hidden lambda captures
  • LINQ in hot paths
  • Unnecessary locks
  • Sync-over-async
  • Exceptions for flow control
Quick Summary: Key C# performance traps: excessive allocations in hot paths, boxing value types, unintended lambda closures capturing references, LINQ chains that re-enumerate, lock contention on shared state, sync-over-async (blocking async code with .Result), and using exceptions for normal control flow.
