Over the last few days, I have been collecting Python interview questions that are likely to come up in 2026 interviews. I opened almost every website I could find, read real interview experiences on Reddit, and noted down the questions that keep repeating.
If you are preparing too, use this Python Interview Questions 2026 list to crack your next interview.
The list starts with the basics, the simple questions that are asked again and again.
If you ever feel lost while learning Python, bookmark this quick guide before continuing:
7 Things Students Should Know Before Learning Python
Python Interview Questions Full Index
Python Core Fundamentals and Syntax
This section focuses on the basic building blocks of Python, covering its characteristics, basic operations, and why we rely on indentation.
1. What is Python, and list some of its key features
Python is a versatile, high-level programming language known for its easy-to-read syntax and broad applications. We appreciate its design because it promotes rapid development and readability. Key features include its simple and readable syntax, its nature as an interpreted language, dynamic typing, its extensive standard libraries and frameworks like NumPy and Django, and its cross-platform compatibility, meaning code written on one operating system usually runs without changes on another.
2. Explain the difference between compiled and interpreted languages
The difference between these two paradigms lies primarily in when the source code is translated into machine code (binary code). Compiled languages, such as C++, translate the entire source code into machine code before execution begins, which generally results in faster execution speed. Interpreted languages like Python, on the other hand, translate the code line by line at runtime using an interpreter. This interpretation process makes Python development easier and facilitates quick debugging, but it can sometimes result in slower execution compared to compiled languages.
3. Is Python a compiled language or an interpreted language
Python is generally categorized as an interpreted language. It is important to note that Python code is first compiled into intermediate bytecode (often stored in .pyc files). This bytecode is then executed dynamically by the Python Virtual Machine (PVM) during runtime. This process of executing the bytecode step by step is what technically classifies Python’s execution model as interpreted.
4. Is Indentation Required in Python
Yes, proper indentation is mandatory and absolutely essential in Python. Unlike many other programming languages that utilize brackets or explicit keywords to delineate code blocks, Python uses whitespace indentation (we generally use four spaces) to define the structure and scope of blocks of code, such as those within loops, functions, classes, and conditional statements. Incorrect indentation results in an IndentationError and prevents the program from running.
5. What is the difference between / and // in Python
The single slash operator / performs standard division, always resulting in a floating-point number, even if the division result is a whole number. For example, 10 / 2 yields 5.0. The double slash operator // performs floor division. This operation returns the largest integer that is less than or equal to the mathematical result of the division, effectively rounding the number down to the nearest whole number.
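A quick sketch of both operators, including the floor behaviour with negative numbers that interviewers often probe:

```python
# / always produces a float; // floors toward negative infinity.
print(10 / 2)    # 5.0
print(7 // 2)    # 3
print(-7 // 2)   # -4 (rounded down, not toward zero)
```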
6. What is a variable in programming
A variable is essentially a symbolic name or container used within a program to store data. This data can be modified or changed during the program’s execution. In Python, variables are dynamically typed, which means we do not need to explicitly declare the type of data a variable will hold before assigning a value to it.
7. Explain data types with examples
Data types define the nature of values that variables can store, and they inform the interpreter about which mathematical, relational, or logical operations can be performed on those values. Examples of data types in Python include numeric types (like integers for whole numbers, floats for numbers with decimal points), sequence types (like strings for ordered characters, lists, and tuples), and the mapping type (dictionaries).
8. How do you floor a number in Python
We can floor a number, which means rounding it down to the nearest whole integer, primarily using the built-in floor division operator //. Alternatively, we can use the math.floor() function provided by the math module, which explicitly calculates and returns the largest integer value less than or equal to the input number.
9. How do you define methods inside a class in Python
Methods, which define the behavior of objects created from a class, are defined inside a class using the familiar def keyword, just as we would define a standard function. A critical requirement for any instance method is that its first parameter must always be self. This self parameter is a reference to the specific instance of the object calling the method, allowing the method to access and modify the instance’s unique data or other attributes.
10. How do you create a class in Python
A class is created using the class keyword followed immediately by the chosen class name. The class name conventionally starts with a capital letter to distinguish it from variable names. Within the class definition, we often include a documentation string and typically define the __init__ method, which is used to initialize the object’s attributes upon creation.
11. What is the purpose of the self keyword in Python
The self keyword acts as a reference to the specific instance of the class upon which a method is currently being executed. It is mandatory for self to be the very first parameter listed in any instance method definition. By using self, the method is able to access and manipulate the data (attributes) and call other methods that belong specifically to that individual object instance.
12. How do you create an object of a class in Python
Creating an object, which is also referred to as instantiating an instance of a class, is a straightforward process. We simply call the class name as if it were a function. We pass any necessary initial arguments directly to this call, and these arguments are automatically directed to the class’s __init__ constructor method for setup. For instance, if we have a class named Car, we would create an object with the command my_car = Car("Red").
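Putting questions 9 through 12 together, here is a minimal sketch of the Car example; the describe method is added purely for illustration:

```python
class Car:
    """Blueprint for car objects; the name starts with a capital letter."""

    def __init__(self, color):
        self.color = color  # instance attribute set by the constructor

    def describe(self):
        # self refers to the specific instance calling the method
        return f"A {self.color} car"

my_car = Car("Red")       # instantiation; "Red" is passed to __init__
print(my_car.describe())  # A Red car
```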
Essential Python Data Structures and Collections
This section covers the most common collection types and the differences between mutable and immutable data structures, a core concept for Python interviews. This understanding is crucial for writing efficient and predictable code.
13. What are Python lists and tuples
Lists and tuples are fundamental sequence data structures in Python, and they are both designed to store ordered collections of items. We use both frequently, but their differences dictate when one is chosen over the other. The primary difference between them lies in their ability to be modified: lists are mutable, meaning they can be changed, while tuples are immutable, meaning they cannot be changed after they are created.
14. Detail the differences between a list and a tuple
Lists are defined using square brackets [], are dynamic in size, and support various built-in methods for manipulation such as appending, inserting, and deleting elements. Because of this dynamic nature, lists consume more memory and are generally slower for iteration operations. Conversely, tuples are defined using parentheses (), are static in size once created, consume less memory, and are significantly faster for iteration and access. We choose tuples when data integrity is paramount and speed is needed, since their lack of flexibility ensures the data remains constant.
15. What is the difference between a mutable data type and an immutable data type
Mutable data types are those whose values can be altered, modified, or extended after they have been created, such as lists, dictionaries, and sets. When a mutable object is changed, its identity in memory and its memory address remain the same. Immutable data types, such as integers, strings, and tuples, cannot be changed once they are created. Any operation that appears to modify an immutable object actually results in the creation of a brand new object in a different memory location.
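The identity behaviour can be checked directly with id(), as in this small sketch:

```python
# Mutating a list in place keeps the same memory identity.
nums = [1, 2]
before = id(nums)
nums.append(3)
assert id(nums) == before  # still the same object

# "Modifying" a string actually builds a brand new object.
s = "hi"
t = s + "!"
assert id(t) != id(s)      # a different object in memory
print(nums, t)             # [1, 2, 3] hi!
```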
16. Why are tuples often preferred over lists in certain situations
Tuples offer specific advantages that make them the preferred choice when immutability is required. Because they are immutable, they provide assurance that the collection of data will not be accidentally changed later in the program, which is vital for safe programming. Additionally, immutable objects are hashable, making tuples suitable for use as keys within dictionaries or as elements within a set, roles that lists cannot fill. Tuples also consistently offer performance benefits, particularly for faster iteration speeds.
17. What happens when you modify a mutable object inside an immutable container
This question tests a crucial understanding of referencing. When an immutable container, such as a tuple, holds a mutable object, such as a list, it holds a fixed reference to that list. Modifying the contents of the inner list (for example, appending an element) is absolutely possible and does not violate the tuple’s immutability. This is because the reference stored within the tuple has not changed, only the contents of the object pointed to by that reference have changed.
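A short sketch of that referencing behaviour:

```python
t = (1, [2, 3])      # immutable tuple holding a mutable list
t[1].append(4)       # allowed: only the list's contents change
print(t)             # (1, [2, 3, 4])

try:
    t[1] = [9]       # not allowed: this would rebind the tuple slot
except TypeError:
    print("tuples do not support item assignment")
```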
18. What is a KeyError in Python, and how can you handle it
A KeyError is a common exception that Python raises specifically when we attempt to access a key within a dictionary that does not exist in that collection. We have robust methods for handling this: we can use the dictionary’s built-in .get(key, default_value) method, which returns a specified default value (or None) instead of crashing the program. Alternatively, we can explicitly anticipate the error using a structured try-except block to gracefully manage the missing key situation.
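Both handling strategies in one short sketch (the prices dictionary is just an illustrative example):

```python
prices = {"apple": 30}

# Strategy 1: .get() returns a default instead of raising KeyError.
print(prices.get("mango", 0))   # 0

# Strategy 2: anticipate the error with try-except.
try:
    print(prices["mango"])
except KeyError:
    print("mango is not in the dictionary")
```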
19. Explain list, dictionary, and tuple comprehension
Comprehension provides a concise, single-line syntax for efficiently creating new sequences or collections based on the values of existing iterables.
- List comprehension: Offers a way to create a new list quickly, enclosed in square brackets [].
- Dictionary comprehension: Allows us to create a new dictionary in one line, defining both the key and value pairs, enclosed in curly brackets {}.
- Tuple comprehension: This specific feature does not exist. Using parentheses for comprehension creates a generator expression, not an actual tuple. If we need a tuple, we must explicitly convert the resulting generator expression using the tuple() function.
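All three cases side by side in a brief sketch:

```python
squares_list = [n * n for n in range(5)]        # list comprehension
squares_dict = {n: n * n for n in range(5)}     # dictionary comprehension
gen = (n * n for n in range(5))                 # generator expression, NOT a tuple
squares_tuple = tuple(n * n for n in range(5))  # explicit conversion to a tuple

print(squares_list)    # [0, 1, 4, 9, 16]
print(squares_tuple)   # (0, 1, 4, 9, 16)
```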
20. How can you concatenate two lists in Python
The simplest and most direct method to join two lists together is by using the addition operator +, which results in the creation of a brand new list containing all elements from both original lists. We also commonly use the .extend() method, which performs an in-place modification on the first list by appending all the elements from the second list directly into it.
21. How are Python’s set and frozenset different
A set is a mutable, unordered collection that stores only unique elements. Because it is mutable, we can add elements, remove elements, and perform set operations, but a regular set cannot be used as a dictionary key or as an element within another set. A frozenset is the immutable counterpart of a set. Since it cannot be modified after creation, it is hashable, making it suitable for use in contexts where immutability is required, such as dictionary keys or nested set elements.
22. What is the difference between shallow copy and deep copy in Python, and when would you use each
A shallow copy creates a new object at the top level, but it populates this new object by inserting references to the objects found within the original structure. Consequently, modifying a nested mutable item in the copy will unintentionally alter the original object. Shallow copies are faster and are typically used when the copied data structure contains only immutable objects or if we only care about the top-level structure. A deep copy creates a new object and recursively copies every object found within the original structure, resulting in a completely independent duplicate with no shared references. We use deep copies when dealing with complex, nested mutable structures and when complete isolation between the copies is necessary, even though this operation is significantly slower.
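A minimal sketch of the difference using the standard copy module:

```python
import copy

original = [[1, 2], [3, 4]]
shallow = copy.copy(original)     # new outer list, shared inner lists
deep = copy.deepcopy(original)    # fully independent duplicate

original[0].append(99)
print(shallow[0])   # [1, 2, 99] -- the shallow copy sees the change
print(deep[0])      # [1, 2]     -- the deep copy is unaffected
```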
Object-Oriented Programming (OOP) Pillars
These Python interview questions cover the four foundational principles of OOP and detail how they are practically applied when writing code using classes in Python.
23. What is Object-Oriented Programming (OOP)
Object-Oriented Programming is a widely adopted programming methodology centered around the concept of “objects”. This paradigm organizes data (known as attributes) and the functionality that operates on that data (known as methods or behavior) into combined units called objects, which are defined by classes. This structural approach is highly effective for managing complexity, modeling real-world entities, and promoting greater code reusability.
24. What are the four pillars of OOP
The practice of Object-Oriented Programming rests upon four fundamental principles, often called pillars, which define its methodology: Abstraction, Encapsulation, Inheritance, and Polymorphism. Understanding these four principles is necessary for designing robust and scalable software systems.
25. Explain the concept of classes and objects in Python
A class is essentially a blueprint or template that systematically defines the structure, attributes, and available methods that a certain type of entity can possess. Conversely, an object is a tangible instance of that class. The object is created according to the blueprint defined by the class, and it possesses concrete data values for all the defined attributes.
26. Explain Encapsulation and why it is important in OOP
Encapsulation is the core process of bundling an object’s data and its behaviors together into a single unit, specifically the object defined by the class. This mechanism is vitally important because it includes data hiding, restricting direct, unauthorized access to some of the object’s internal components. This restriction prevents accidental or unintended modification of the object’s data (maintaining data integrity), greatly improving the stability and overall maintainability of the codebase.
27. Explain Inheritance in Python OOP
Inheritance is the capability that allows us to create a new class, known as the child or derived class, that automatically adopts all the attributes and methods originally defined in an existing class, known as the parent or base class. This mechanism is powerful because it allows us to reuse established code and build upon existing functionality without the need to rewrite the basic structural code repeatedly.
28. Explain Polymorphism and how it is implemented in Python
Polymorphism is a Greek term meaning “many forms”. In programming, it describes the concept that the same name can be assigned to different methods or functions that exhibit varied behaviors depending on the input type or the context in which they are used. In Python, polymorphism is typically implemented through method overriding, where a child class redefines or customizes a method that it inherited from its parent class to provide a behavior specific to the child class.
29. What is Abstraction in OOP
Abstraction is the process of managing complexity by focusing on providing only the necessary, high-level details to the user while deliberately hiding away the complex or irrelevant implementation specifics. This approach allows users and other parts of the system to interact with complex underlying code through simplified, clean interfaces, significantly reducing the learning curve and preventing misuse.
30. What is a constructor in Python
In object-oriented terminology, the constructor is a special method with the explicit purpose of preparing and initializing the state of an object when that object is first created. This constructor method is automatically invoked by the interpreter immediately after a new instance of a class is successfully instantiated.
31. What is the __init__ method in Python
The __init__() method is the specific constructor function designated in Python. Its primary purpose is to receive initial arguments and then assign these arguments as starting values to the object’s attributes, ensuring the new instance starts in a known and usable state. It handles all necessary initial setup operations required for the instance.
32. What is the difference between instance variables and class variables
The distinction lies in their scope and storage. Instance variables are variables that are unique to each individual object instance created from the class, and they are defined inside methods using the self keyword. Class variables, by contrast, are variables that are shared among all instances of a class and are defined directly within the main body of the class definition, outside of any specific method. Class variables are ideal for storing data that is common to every object belonging to that class.
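A brief sketch with a hypothetical Dog class:

```python
class Dog:
    species = "Canis familiaris"   # class variable, shared by every instance

    def __init__(self, name):
        self.name = name           # instance variable, unique to each object

a = Dog("Rex")
b = Dog("Bella")
print(a.species == b.species)  # True, both read the shared value
print(a.name, b.name)          # Rex Bella
```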
Advanced OOP and Dunder Methods (Magic Methods)
Advanced technical Python interview questions often move beyond basic class structure to probe the fundamental mechanisms of object creation and behavior defined by special methods (often called “Dunder” or “Magic” methods).
33. What is the difference between __new__ and __init__
The __new__ method is responsible for the actual creation and instantiation of the object instance within memory. It is the first method to be called when we create an object. The __new__ method must return the newly created instance, which is then passed to __init__. The __init__ method is responsible for initialization, meaning it sets up the attributes and state of the instance that was just created. We rarely override __new__ unless we are implementing complex creation logic, such as ensuring that only one instance of a class exists (a singleton pattern).
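The singleton pattern mentioned above can be sketched like this; it is one illustrative implementation, not the only way to write one:

```python
class Singleton:
    _instance = None

    def __new__(cls):
        # __new__ actually creates (or here, reuses) the instance.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        # __init__ only initializes the instance __new__ returned.
        self.ready = True

a = Singleton()
b = Singleton()
print(a is b)   # True, both names point to the same object
```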
34. How does Python implement method resolution order (MRO)
Method Resolution Order, or MRO, is the set of rules that determines the sequence in which Python searches base classes for an attribute or method, particularly when dealing with classes that inherit from multiple parents (multiple inheritance). Python 3 uses the C3 linearisation algorithm, which ensures that the inheritance hierarchy is searched predictably and consistently. We can inspect this order for any class using the special .__mro__ attribute or the help() function.
35. What is the difference between @property and descriptors
The @property decorator offers a straightforward, simplified interface for managing attributes, essentially converting a class method into an attribute access operation (a “getter”). Descriptors, on the other hand, represent a deeper implementation mechanism defined by classes that implement specific methods like __get__, __set__, or __delete__. Descriptors provide much finer, programmatic control over exactly how attribute access, setting, or deletion behaves, whereas @property is just a convenient way to implement the descriptor protocol quickly within a single class.
36. What is the purpose of the __call__ method, and can it be used to implement function-like objects
The __call__ method allows an object instance of a class to be invoked directly using parentheses, just as if that object were a standard function. Yes, by implementing __call__, we create “callable objects” or “functors,” which can maintain internal state (stored in their instance attributes) while still being executed with familiar function-call syntax. This pattern is extremely useful when creating complex, stateful decorators or specific callback objects.
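A sketch of a stateful callable, using a hypothetical Counter class:

```python
class Counter:
    """A function-like object that remembers how often it was called."""

    def __init__(self):
        self.count = 0

    def __call__(self):
        # Invoked whenever the instance is called with parentheses.
        self.count += 1
        return self.count

tick = Counter()
print(tick())       # 1
print(tick())       # 2
print(tick.count)   # 2, the state lives on the instance
```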
37. What is monkey patching in Python
Monkey patching refers to a dynamic technique that allows developers to modify, replace, or extend modules, classes, or attributes at runtime, even after they have been fully defined and loaded. While this technique offers great flexibility, it is generally considered a practice that should be used with extreme caution. Modifying fundamental components of a program at runtime can introduce unexpected behavior, making the code much more difficult to test, debug, and maintain.
38. How do you implement the Factory design pattern in Python using OOP
The Factory design pattern is a creational pattern that abstracts the object creation process. It allows a system to instantiate objects from different classes without the calling code having to know the precise class name or creation logic. In Python OOP, we implement this pattern by creating a dedicated factory method (often defined as a classmethod or sometimes a staticmethod) that encapsulates the logic for selecting and constructing the appropriate subclass based only on the input parameters provided.
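A compact sketch of the pattern; the Dog, Cat, and AnimalFactory names are purely illustrative:

```python
class Dog:
    def speak(self):
        return "Woof"

class Cat:
    def speak(self):
        return "Meow"

class AnimalFactory:
    @classmethod
    def create(cls, kind):
        # The caller never references the concrete class directly.
        registry = {"dog": Dog, "cat": Cat}
        return registry[kind]()

pet = AnimalFactory.create("cat")
print(pet.speak())   # Meow
```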
39. What are attributes in a class
Attributes are the data storage compartments or variables that define the state of a class or a specific object created from that class. They represent the characteristics of the object being modeled. These attributes are broadly categorized into two types: class attributes, which are shared across all instances, and instance attributes, which are unique to each individual object.
40. What happens if you override __eq__ without __hash__
The __eq__ method defines how two objects are compared for equality. In Python 3, defining __eq__ in a class without also defining __hash__ automatically sets __hash__ to None, making instances of that class unhashable. As a result, using such an object in hash-based containers like dictionaries or sets raises a TypeError. This enforces the principle that if two objects are equal (a == b), they must produce the identical hash value (hash(a) == hash(b)).
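A sketch demonstrating the unhashable result, with a hypothetical Point class:

```python
class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __eq__(self, other):
        return (self.x, self.y) == (other.x, other.y)
    # No __hash__ defined, so Python sets Point.__hash__ to None.

p = Point(1, 2)
print(p == Point(1, 2))        # True
try:
    {p: "label"}               # a dict key needs hash(p)
except TypeError as e:
    print("TypeError:", e)     # unhashable type: 'Point'
```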
41. Explain the use of __del__ and memory management in Python
The __del__ method, sometimes called a finalizer, is a special method intended to be executed when an object’s reference count drops to zero and the object is about to be completely removed by the garbage collector. Its typical use case is for performing cleanup operations, such as manually closing network connections or releasing other critical external resources. We must note that relying heavily on __del__ for cleanup is generally discouraged because the precise timing of Python’s garbage collection is unpredictable (non-deterministic).
42. How does Python implement dynamic dispatch
Dynamic dispatch is the language mechanism that determines which specific implementation of a method should be called at runtime, based on the type of the object receiving the call. Python implements this by searching for the method name first within the object’s own class dictionary and then systematically traversing the Method Resolution Order (MRO) defined by its inheritance structure. This run-time lookup allows Python to support polymorphism and highly dynamic behavior.
Functions, Iterables, and Code Flow Control
This section covers critical concepts related to function execution, generating data sequences efficiently, and maintaining strict control over the program’s flow.
43. What are conditionals and loops
Conditional statements, commonly implemented using if-else blocks, are flow control mechanisms used to execute specific segments of code only when predefined conditions evaluate to true. Loops, such as for loops and while loops, are used to execute a sequence of code repeatedly, continuing until a specific termination condition has been met. Both are fundamental tools for controlling the path of execution in an algorithm.
44. What is recursion
Recursion is an elegant programming technique where a function solves a problem by calling itself one or more times within its own definition. This continues until the problem is simplified down to a fundamental base case. Every well-formed recursive function must include a clear base case to stop the self-calls and prevent an infinite loop, along with a recursive case that ensures progress is made toward the base case.
45. Explain recursion with an example
The most classic and frequently asked example of recursion is the function to calculate the factorial of a number. To calculate $n!$, the function calls itself with $n-1$ repeatedly. The function reaches its required stopping point (the base case) when $n$ is less than 2, at which point it returns 1, allowing the chain of calculations to complete and unwind.
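The factorial example as code:

```python
def factorial(n):
    if n < 2:                     # base case stops the self-calls
        return 1
    return n * factorial(n - 1)   # recursive case moves toward the base

print(factorial(5))   # 120
```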
46. What is function currying and how can it be implemented in Python
Currying is a mathematical and computer science technique that takes a function designed to accept multiple arguments simultaneously and transforms it into a series of chained functions, where each function in the chain accepts only a single argument. We can implement currying in Python by using nested function definitions that capture the previous arguments in their closure, or more simply, by using the specialized functools.partial helper function.
47. Explain higher-order functions with an example
A higher-order function is defined as a function that meets one or both of these criteria: it accepts one or more other functions as arguments (treating them as procedural parameters) or it returns another function as its final result. A widely used Python example is the built-in map() function. The map() function takes a transforming function and an iterable data collection as input, then applies that function to every single item in the collection, returning a new iterable result.
48. What are pure functions, and why are they important
A pure function is a process that adheres to two strict rules: its output depends entirely and exclusively on its input parameters, and it produces no “side effects”. Side effects include modifying external state, performing file input/output operations, or altering global variables. Pure functions are highly regarded and important because they are predictable, easier to debug due to consistent output, and inherently safer and simpler to execute in parallel environments.
49. What is the difference between generators and coroutines
Generators are designed primarily for efficiently producing a sequence of values over time using the yield keyword. They excel at lazy evaluation and iteration, yielding one value at a time to conserve memory. Coroutines, on the other hand, are designed for asynchronous and concurrent programming, defined using async def. They use await to pause execution while waiting for asynchronous operations (like I/O) and yield control back to an event loop.
50. What is the purpose of yield from
The yield from expression is used within a generator function to efficiently delegate execution flow to another generator or iterable. It streamlines the logic when dealing with deeply nested or complex generator chains. Instead of writing boilerplate code to manually iterate and yield values from the sub-generator, yield from handles this delegation automatically, resulting in cleaner and more expressive code.
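A minimal sketch of delegation with yield from:

```python
def inner():
    yield 1
    yield 2

def outer():
    yield 0
    yield from inner()   # delegates to the sub-generator automatically
    yield 3

print(list(outer()))     # [0, 1, 2, 3]
```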
51. How would you create a custom iterator in Python, and what are its use cases
To build a custom iterator, we define a class that must implement two specific methods. First, the __iter__ method, which initializes the iteration and typically returns the iterator object itself. Second, the __next__ method, which contains the logic to compute and return the next value in the sequence. Custom iterators are useful for iterating over potentially infinite sequences or extremely large datasets where loading all items into memory at once is computationally infeasible.
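A sketch of both methods in a hypothetical Countdown iterator:

```python
class Countdown:
    """Yields start, start - 1, ..., 1 without building a list in memory."""

    def __init__(self, start):
        self.current = start

    def __iter__(self):
        return self               # the object is its own iterator

    def __next__(self):
        if self.current < 1:
            raise StopIteration   # signals that iteration is finished
        value = self.current
        self.current -= 1
        return value

print(list(Countdown(3)))   # [3, 2, 1]
```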
52. How can you introspect a Python function to see its parameters
Introspection is the ability of an object to examine its own properties or the properties of other objects at runtime. To examine the parameters of a Python function, we utilize the powerful inspect standard module. Specifically, using functions such as inspect.signature(func) or inspect.getfullargspec(func) allows us to retrieve comprehensive details about the arguments the function accepts, including any associated type hints or default values.
Modules, Packaging, and Handling Exceptions
This section details how we structure larger programs using files and modules and how we rely on robust error management practices to ensure stability.
53. What is monkey patching in Python
Monkey patching is a run-time technique where attributes, methods, or entire classes or modules are modified dynamically after they have already been loaded into memory. This allows us to replace a function with a customized version without altering the original source code. Although it is flexible, excessive use of monkey patching can easily lead to a chaotic codebase where behavior is unpredictable and exceedingly difficult to trace during debugging.
54. How does Python’s __import__ function work
__import__ is the fundamental, low-level built-in function that sits beneath every standard import statement in Python. It is responsible for the dynamic loading and retrieval of module objects. While developers rarely call this function directly, understanding its existence is important because it represents the core mechanism by which Python processes module dependencies.
55. What is the ast module used for
The ast module, short for Abstract Syntax Tree, is used to parse Python source code into a structured, hierarchical tree format. This structural representation of the code enables programmers to perform static analysis, inspect code for errors, modify the code structure itself (code transformation), or build sophisticated tools like custom code linters or static analyzers without ever needing to execute the code.
56. What is a metaclass
A metaclass is a programming concept often described as “the class of a class”. While a regular class acts as the blueprint for creating instances (objects), a metaclass acts as the blueprint for defining how classes themselves are created and how they behave. Metaclasses allow for extremely advanced customization of the class construction process.
57. When would you use a metaclass in real applications
Metaclasses are reserved for solving complex architectural problems, and their use is prevalent within major frameworks and advanced library implementations. We would use them to achieve goals such as automatically registering all defined subclasses within a framework, rigorously validating the presence or structure of required class attributes, or programmatically injecting common utility methods into classes upon creation. They are instrumental in the implementation of Object-Relational Mappers (ORMs) like Django.
58. How are context managers in Python implemented
Context managers are implemented by defining a class that includes two specific “dunder” (double underscore) methods: __enter__ and __exit__. The __enter__ method is automatically executed upon entry into the with code block and is typically responsible for setting up the necessary resource. The __exit__ method is guaranteed to run when execution leaves the block, regardless of whether an exception occurred, ensuring that resources are properly and reliably released.
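A sketch of the protocol with a hypothetical Timer context manager:

```python
import time

class Timer:
    def __enter__(self):
        self.start = time.perf_counter()
        return self                  # bound to the name after 'as'

    def __exit__(self, exc_type, exc_value, traceback):
        # Runs even if the block raised; returning False re-raises it.
        self.elapsed = time.perf_counter() - self.start
        return False

with Timer() as t:
    total = sum(range(100_000))

print(t.elapsed >= 0)   # True
```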
59. What are context managers in Python, and how are they used
Context managers are constructs specifically designed to help us manage essential system resources in a structured and reliable way, ensuring that resources are always acquired correctly and cleaned up properly. They are predominantly used with the with statement, which creates a context. Common uses include automatically opening and guaranteeing the closing of files or reliably managing the lifetime of database connections.
60. What is the Python “with” statement designed for
The with statement is designed to simplify resource management and exception handling, making code cleaner and more robust. It interacts directly with the context manager protocol to guarantee that critical cleanup operations, such as closing files or sockets, are executed as required upon exiting the code block, even in the presence of errors. This pattern significantly reduces resource leaks and improves program stability.
61. Why use else in try/except construct in Python
The optional else: block within a try/except structure serves a clear logical purpose: the code inside the else block is executed only if the code within the preceding try block completed its execution entirely without raising any exceptions. This design lets us place statements that depend on the try block's success outside the try itself, keeping the error handling narrowly scoped to the code that can actually raise.
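A small sketch of the pattern (the `parse_port` function is a made-up example):

```python
def parse_port(text):
    try:
        port = int(text)          # the only risky operation
    except ValueError:
        return None               # handles only the failed conversion
    else:
        # Runs only when int() succeeded. Kept out of the try block
        # so a bug here can never be mistaken for a parse error.
        return port if 0 < port < 65536 else None
```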
62. What are some pitfalls of using eval()
The eval() function executes a string of code passed to it as a standard Python expression. The most significant pitfall is a severe security risk: if the input string originates from an untrusted external source, an attacker can execute arbitrary and potentially malicious code on our system. Additionally, executing code dynamically at runtime via eval() incurs performance penalties due to the parsing overhead, making it slower and generally unsuitable for performance-critical operations.
Memory Management and Advanced Language Internals
This section shows how Python manages memory, addresses concurrency limitations, and details deep-level performance optimisation tools. These topics add real depth to our Python interview questions 2026 list.
63. What is the Global Interpreter Lock (GIL) in Python, and why is it important
The Global Interpreter Lock (GIL) is a mechanism implemented within CPython (the standard Python interpreter) that acts as a mutex, ensuring that only a single native thread can execute Python bytecode at any given moment. The existence of the GIL is important primarily because it simplifies Python’s internal memory management processes and effectively makes internal data structures thread-safe without needing complex locking mechanisms everywhere.
64. How does the GIL affect multi-threaded programs
The GIL fundamentally restricts true, hardware-level parallelism in programs that rely on standard multi-threading for CPU-bound tasks (tasks that spend most of their time performing calculations). Because only one thread can execute Python code at a time, adding more threads does not speed up the program’s calculation capabilities. The GIL is, however, often released during I/O operations (like waiting for a disk read or network response), which means multithreading remains effective for I/O-bound tasks.
65. How does Python handle memory management, and what role does garbage collection play
Python manages memory automatically by allocating and deallocating memory within a private memory heap. The Python memory manager oversees this process, working to optimize memory usage efficiently. The garbage collector (GC) plays a crucial role by periodically dealing with objects that are no longer referenced or accessible, freeing up that memory space. This garbage collection process relies heavily on both reference counting and a separate collector mechanism to break complex circular references.
66. How do memory leaks occur in Python
In Python, memory leaks primarily occur when objects that are no longer logically needed by the program remain reachable because they are still being referenced somewhere. The most common mechanism for true leaks involves circular references between mutable objects that the simpler reference counting mechanism cannot resolve automatically. Additionally, external references from the global scope or potential bugs originating within C extensions (which Python’s garbage collector cannot directly manage) can also cause leaks.
67. What is the purpose of the __slots__ declaration
The __slots__ declaration is a special class attribute used to explicitly list and define the exact attributes an instance of that class is permitted to have. Its key purpose is memory optimization. By using __slots__, Python bypasses the standard mechanism of creating a per-instance __dict__ dictionary to store attributes, thus significantly reducing the memory footprint for classes where many instances are created.
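A minimal sketch of the difference (class names are illustrative):

```python
class PointDict:
    """Regular class: every instance carries its own __dict__."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointSlots:
    """__slots__ replaces the per-instance __dict__ with fixed storage."""
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x, self.y = x, y
```

The trade-off: a `PointSlots` instance uses noticeably less memory, but you can no longer attach arbitrary new attributes to it at runtime.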
68. What are the key differences between CPython and PyPy
CPython is the original, reference implementation of the Python interpreter, written mostly in C. PyPy is an alternative implementation, often written in RPython, which notably includes a Just-In-Time (JIT) compiler. The JIT compiler in PyPy translates Python code into native machine code while the program is running, often delivering dramatically better performance and speed compared to CPython, especially for long-running, CPU-intensive applications.
69. Explain how the garbage collector works in Python
Python’s garbage collection is a dual-mechanism process. The primary mechanism is Reference Counting, which maintains a count of how many times an object is referenced, deleting it immediately when the count reaches zero. The secondary mechanism is the Cyclic Garbage Collector, which periodically scans for groups of objects that hold mutual (circular) references to each other, thereby preventing their reference counts from ever reaching zero. This specialized collector breaks the cycles and ensures those unreachable objects are eventually freed.
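The two mechanisms can be observed directly. This sketch builds a reference cycle that reference counting alone cannot free, then lets the cyclic collector reclaim it (behaviour shown is CPython-specific):

```python
import gc
import weakref

class Node:
    def __init__(self):
        self.partner = None

a, b = Node(), Node()
a.partner, b.partner = b, a      # circular reference: a -> b -> a
probe = weakref.ref(a)           # observe the object without keeping it alive

del a, b                         # refcounts never reach zero (the cycle remains)
gc.collect()                     # the cyclic collector finds and breaks the cycle
# After collection, the weak reference dereferences to None.
```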
70. What is the significance of __slots__ in high-performance applications
In high-performance applications that must scale efficiently, especially those involving numerical processing or handling numerous concurrent transactions, __slots__ provides critical optimization. The significance is its ability to reduce memory consumption by preventing the allocation of the standard dictionary structure for every single instance. This reduction in memory overhead makes the application run more efficiently and potentially faster by utilizing less system memory.
71. Explain the Python data model and its impact on customization
The Python data model is a specification that formally describes how objects are implemented and how they interact with fundamental language constructs, such as operators, iteration, and attribute access. This model is powered by the special “dunder” methods. Understanding it is crucial because it gives us the power to customize how our objects behave, for example, enabling our classes to support the + operator, the for loop iteration protocol, or the resource management of context managers.
72. What is dependency injection, and how would you implement it in Python
Dependency injection is a software design pattern where an object or component receives the other objects (its dependencies) it needs from an external source, rather than creating or hard-coding them internally. In Python, we implement this simply by defining the required dependencies as arguments to the class’s constructor (__init__) or passing them into a specific method. This implementation style promotes modular code, makes classes loosely coupled, and dramatically simplifies unit testing.
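A minimal sketch of constructor injection (all class names here are invented):

```python
class EmailSender:
    def send(self, to, body):
        return f"email to {to}: {body}"

class ConsoleSender:
    def send(self, to, body):
        return f"console to {to}: {body}"

class Notifier:
    # The dependency is passed in, not hard-coded inside the class.
    def __init__(self, sender):
        self.sender = sender

    def notify(self, user, message):
        return self.sender.send(user, message)
```

In a unit test we simply inject a fake sender; `Notifier` never needs to change.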
73. What is the difference between abc.ABC and Protocol from typing
abc.ABC defines an Abstract Base Class, which enforces nominal subtyping. A class using abc.ABC must explicitly inherit from it and implement all its abstract methods, forcing a formal structure. In contrast, typing.Protocol is used for static type checking and implements structural subtyping. This adheres to Python’s “duck typing” philosophy, meaning if an object structurally matches the expected methods and attributes defined in the protocol, it is considered compatible, regardless of formal inheritance.
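A compact sketch of both styles (class names are illustrative; `@runtime_checkable` is needed only because we use `isinstance` with the protocol):

```python
from abc import ABC, abstractmethod
from typing import Protocol, runtime_checkable

class Shape(ABC):                    # nominal: compatibility requires inheritance
    @abstractmethod
    def area(self) -> float: ...

@runtime_checkable
class HasArea(Protocol):             # structural: matching the shape is enough
    def area(self) -> float: ...

class Square(Shape):                 # must inherit and implement area()
    def __init__(self, s):
        self.s = s
    def area(self):
        return float(self.s ** 2)

class Circle:                        # inherits from nothing at all
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2
```

`Circle` satisfies `HasArea` purely by having the right method, which is duck typing made checkable.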
74. What is the difference between weak references and strong references
A strong reference is the default type of reference used when we assign an object to a variable, and the existence of any strong reference prevents the object from being garbage collected. A weak reference, however, permits us to point to an object without increasing its internal reference count. Weak references are essential tools for building complex cache systems or managing resource pools where we need to monitor an object without forcing it to remain alive in memory, thus avoiding memory leaks.
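A minimal sketch using the standard `weakref` module (the immediate reclamation shown is CPython refcounting behaviour):

```python
import weakref

class Resource:
    pass

obj = Resource()            # strong reference: keeps the object alive
ref = weakref.ref(obj)      # weak reference: does not
assert ref() is obj         # dereference works while obj is alive

del obj                     # drop the last strong reference
# In CPython, refcounting frees the object immediately,
# so the weak reference now dereferences to None.
dead = ref()
```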
Concurrency, Parallelism, and Asynchronous Programming
This section clearly defines the essential distinctions required for managing multiple tasks simultaneously in Python, addressing the constraints imposed by the GIL.
75. What is the difference between multiprocessing and multithreading in Python
Multithreading involves using multiple threads that operate within a single process and share the same memory space. Due to the GIL, multithreading cannot achieve true simultaneous execution on multiple CPU cores in Python. Multiprocessing involves launching entirely separate processes, with each process operating in its own memory space and running its own independent Python interpreter. This setup bypasses the GIL completely, making multiprocessing the correct strategy for achieving genuine, hardware-level parallelism for CPU-bound tasks.
76. What is a thread, and how does it differ from a process
A process represents an instance of an executing program, such as our web browser or Python script. Processes are “heavyweight,” requiring dedicated system resources, and they are isolated from one another, each having its own private memory space. A thread is a lightweight sequential segment of a process. Threads run concurrently within the parent process and crucially share the same process memory space.
77. How does concurrency differ from parallelism
Concurrency refers to the management of multiple independent tasks, often giving the illusion that they are running simultaneously. This is achieved by quickly switching the CPU focus between tasks (context switching), even if only one processor core is physically performing operations at a time. Parallelism, in contrast, means the actual, physical simultaneous execution of multiple tasks at the exact same moment on different hardware units, such as using multiple CPU cores or distributed machines.
78. How do you prevent race conditions in async code
A race condition occurs when the final outcome of a program becomes unpredictable because multiple tasks access and modify shared data, and the final result depends on which task finishes first. In asynchronous code, which uses libraries like asyncio, we prevent these risks by employing synchronization primitives such as asyncio.Lock or asyncio.Semaphore. These tools enforce mutual exclusion, ensuring that only one asynchronous routine can safely access the shared resource at a critical moment.
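A minimal sketch: five coroutines each perform 100 read-modify-write updates on a shared counter, with an `await` deliberately placed inside the critical section. The `asyncio.Lock` makes the update atomic; without it, interleaving would lose updates.

```python
import asyncio

counter = 0

async def increment(lock, n):
    global counter
    for _ in range(n):
        async with lock:              # only one coroutine inside at a time
            current = counter
            await asyncio.sleep(0)    # yield control mid-update on purpose
            counter = current + 1

async def main():
    lock = asyncio.Lock()
    await asyncio.gather(*(increment(lock, 100) for _ in range(5)))
    return counter

result = asyncio.run(main())          # deterministically 5 * 100
```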
79. What’s the benefit of immutability in concurrent programs
Immutability means that an object’s state cannot be modified after its creation. This quality offers a massive advantage in concurrent programming because it immediately eliminates an entire class of errors. Since the data can never change, multiple threads or processes can read the data freely without requiring synchronization locks, preventing race conditions, and drastically simplifying the design and verification of thread-safe code.
80. Explain the difference between synchronous and asynchronous programming
Synchronous programming executes instructions in a strict, sequential order. When an operation, such as waiting for a file read, is initiated, the program blocks and halts its execution until that operation is entirely finished. Asynchronous programming is non-blocking; when an I/O operation begins, the function pauses and yields control back to the system, allowing other tasks to be processed while the program waits for the slow operation to complete. This greatly improves responsiveness, especially for applications dealing with heavy I/O wait times.
81. What is the purpose of contextvars in async code
The contextvars module provides context variables, which are a specialized form of storage that is localized to the specific execution context. This is particularly necessary in asynchronous environments where a single operating system thread might handle multiple, interleaved coroutines. contextvars allows each coroutine flow to maintain its own isolated, local state, such as tracking a unique user ID or a transaction ID, without interference from other concurrent routines sharing the thread.
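A small sketch: three interleaved coroutines each set the same context variable, yet each still reads back its own value, because every task runs in its own copy of the context.

```python
import asyncio
import contextvars

request_id = contextvars.ContextVar("request_id")

async def handle(rid, results):
    request_id.set(rid)                       # set in this task's context only
    await asyncio.sleep(0)                    # interleave with the other tasks
    results.append((rid, request_id.get()))   # still sees its own value

async def main():
    results = []
    await asyncio.gather(*(handle(i, results) for i in range(3)))
    return results

observed = asyncio.run(main())
```

A thread-local would not work here, since all three coroutines share one thread.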
82. How does Python handle tail recursion
Tail recursion is a special case of recursion where the recursive call is the very last instruction executed in the function. Many programming languages employ an optimization technique called Tail Call Optimization (TCO) to convert this recursion into simple iteration, avoiding potential stack overflow errors. CPython, however, does not implement TCO. This means deeply recursive functions in Python remain vulnerable to hitting the recursion limit, so complex recursive logic often must be manually rewritten using loops to ensure stability.
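A quick sketch of the manual rewrite, using factorial as the example:

```python
def fact_recursive(n, acc=1):
    # Tail-recursive in form, but CPython still grows the call stack,
    # so large n hits RecursionError (default limit ~1000 frames).
    if n <= 1:
        return acc
    return fact_recursive(n - 1, acc * n)

def fact_iterative(n):
    # The loop rewrite: same result, constant stack depth, any n.
    acc = 1
    while n > 1:
        acc *= n
        n -= 1
    return acc
```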
83. How can you speed up numerical computations in Python
To gain significant speed improvements in numerical computations, we must utilize tools that bypass standard Python overhead. The optimal strategies include: leveraging NumPy, which uses highly optimized C-backed arrays for vectorized math; using the Numba library for Just-In-Time (JIT) compilation, which translates Python functions into fast native machine code; or using the multiprocessing module to distribute the computation across multiple CPU cores.
84. How can you profile and optimize Python code
We use profiling to systematically measure performance and identify bottlenecks, which are the parts of the code consuming the most resources. We use standard tools like the built-in cProfile for analyzing function execution time or external libraries like line-profiler for tracking time usage line-by-line. Once bottlenecks are identified, optimization is achieved by implementing fundamentally better algorithms, selecting more efficient underlying data structures, or offloading heavy work using concurrency methods like multiprocessing.
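A minimal sketch of the cProfile workflow, profiling a deliberately slow toy function (`slow_sum` is made up for illustration):

```python
import cProfile
import io
import pstats

def slow_sum(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Render the top entries sorted by cumulative time into a string.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
```

In practice you would scan `report` for the functions dominating cumulative time and optimize those first.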
Algorithms, Data Structures, and Complexity (DSA)
An expert-level report on Python interview questions must include the concepts of data structures and the essential metrics used to evaluate algorithmic efficiency.
85. What is Big-O notation, and why is it important
Big-O notation is a mathematical tool used to describe the efficiency and overall complexity of algorithms. It measures how the execution time (time complexity) or memory required (space complexity) grows relative to the size of the input data in the worst-case scenario. It is important because it provides a universal, machine-independent metric, allowing us to accurately compare different algorithmic approaches and predict their performance when handling extremely large datasets.
86. Explain common Big-O complexities
We classify algorithm performance into several common tiers. Constant time ($O(1)$) is the fastest, indicating operations like accessing a dictionary key, where the time taken is constant regardless of the input size. Logarithmic time ($O(\log n)$), such as a binary search, is extremely fast because each step discards a large fraction of the remaining input. Linear time ($O(n)$) means the time grows directly proportional to the input size. Quadratic time ($O(n^2)$) is much slower, indicating time grows by the square of the input size, often seen with nested loops.
87. What is the difference between an array and a linked list
An array stores elements sequentially in contiguous memory blocks, which allows for immediate, random access to any element in constant time ($O(1)$). But inserting or deleting elements requires shifting subsequent elements, making these operations costly. A linked list stores elements in nodes connected by memory pointers. This flexible arrangement means insertion and deletion operations are efficient ($O(1)$), but accessing a specific element requires traversing the list from the start, resulting in linear time ($O(n)$) access.
88. What is the difference between a stack and a queue
A stack is a collection that operates based on the Last In, First Out (LIFO) principle. This means the element most recently added to the stack is always the first element that will be removed. We often visualize a stack like a pile of plates. A queue operates based on the First In, First Out (FIFO) principle. The element that has been in the queue the longest is the next element removed, similar to how a line of people waiting for service operates.
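In Python, a plain list works well as a stack, while `collections.deque` is the idiomatic queue (popping from the front of a list is $O(n)$; `popleft` on a deque is $O(1)$). A quick sketch:

```python
from collections import deque

stack = []                  # LIFO: list append/pop both work at the end
stack.append(1)
stack.append(2)
stack.append(3)
last_in = stack.pop()       # removes the most recently added element

queue = deque()             # FIFO: deque gives O(1) pops from the front
queue.append(1)
queue.append(2)
queue.append(3)
first_in = queue.popleft()  # removes the element that waited longest
```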
89. Explain how hash tables work
Hash tables (implemented as the dict type in Python) store key-value pairs efficiently. The system uses a hash function to transform the input key into a specific index or memory location (a bucket) where the corresponding value will be stored. This design allows for incredibly rapid data retrieval; on average, the time required for insertion, deletion, and searching approaches constant time complexity ($O(1)$). Collision resolution strategies are needed to manage instances where two different keys yield the same hash value.
90. Explain the difference between depth-first search (DFS) and breadth-first search (BFS)
Both DFS and BFS are distinct algorithms used to traverse graphs or tree structures. Depth-First Search (DFS) aims to explore as far along a single branch as possible before backtracking. It typically uses a stack data structure (implicitly or explicitly) to manage nodes that need to be visited later. Breadth-First Search (BFS) explores all of the neighbor nodes at the current depth level before moving on to nodes at the next depth level. BFS is generally used to find the shortest path in an unweighted graph and uses a queue data structure to manage its search order.
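A compact sketch of both traversals over a small hand-made adjacency dict, showing how swapping a stack for a queue changes the visit order:

```python
from collections import deque

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

def dfs(start):
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()                   # LIFO: dive deep first
        if node not in visited:
            visited.add(node)
            order.append(node)
            stack.extend(reversed(graph[node]))  # keep left-to-right order
    return order

def bfs(start):
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()               # FIFO: finish each level first
        order.append(node)
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                queue.append(nb)
    return order
```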
91. Explain the concept of memoization with an example
Memoization is an optimization technique designed to drastically speed up computations by caching the results of expensive function calls. Once a function calculates an output for a given set of inputs, that result is stored. If the function is called later with the exact same inputs, the stored result is returned immediately from the cache, bypassing the expensive recalculation. A prime example is calculating the Fibonacci sequence recursively, where memoization transforms the performance from slow exponential time $O(2^N)$ to fast linear time $O(N)$.
92. How does dynamic programming solve the problem of calculating the Fibonacci sequence
Dynamic programming solves the Fibonacci problem by recognizing two key properties: overlapping subproblems and optimal substructure. Instead of recursively recalculating the same sub-problems repeatedly, dynamic programming uses memoization (storing results in a cache, typically a dictionary) to ensure that the result for any input is only calculated once. By storing these intermediate results, the exponential time complexity of the naive recursive solution is reduced significantly to linear time $O(N)$.
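The standard library makes this a one-line change: `functools.lru_cache` turns the naive recursion into the memoized dynamic-programming version.

```python
from functools import lru_cache

@lru_cache(maxsize=None)    # each n is computed once; repeats are cache hits
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the decorator, `fib(30)` makes over a million calls; with it, each value from 0 to 30 is computed exactly once.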
93. What is the difference between imperative and declarative programming
These are two contrasting programming paradigms. Imperative programming focuses strictly on defining how a program operates, requiring the programmer to write explicit, step-by-step control flow instructions to change the program’s state. Declarative programming focuses on defining what the program should accomplish. The programmer specifies the desired logic or outcome, and the system figures out the precise sequence of steps necessary to achieve that result.
94. What are the feature selection methods used to select the right variables in data science
Feature selection involves choosing the optimal subset of input variables necessary for constructing effective predictive models. The three main categories of methods are: Filter Methods, which use statistical metrics like correlation or variance thresholds independently of the machine learning model; Wrapper Methods, which repeatedly train the actual model on subsets of features and assess performance, making them computationally intensive; and Embedded Methods, which incorporate the selection process directly into model training, typically via regularization techniques like Lasso.
Python Ecosystem, Data Handling, and Optimization
This final section brings together modern Python development practices, covering its role in data science, advanced object construction, and high-level structural concepts.
95. What is the difference between __new__ and __init__ in the context of singletons
When implementing the Singleton pattern, which ensures only one instance of a class ever exists, the distinction between __new__ and __init__ is paramount. We must override __new__ because it controls the actual object creation. Inside __new__, we check if the singleton instance already exists; if it does, we return the existing instance immediately, so no second object is ever allocated. One subtlety: __init__ still runs on every call to the class, even when __new__ returns the existing instance, so robust singleton implementations also guard the initialization logic to avoid re-initializing the shared state.
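A minimal sketch (the `Config` class and its `settings` attribute are invented for illustration):

```python
class Config:
    _instance = None

    def __new__(cls):
        # __new__ controls allocation: reuse the single instance if it exists.
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

    def __init__(self):
        # __init__ runs on every Config() call, so guard the setup
        # to avoid wiping shared state on later calls.
        if not hasattr(self, "settings"):
            self.settings = {}
```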
96. What is the concept of optimal substructure and overlapping subproblems
These two properties are the defining indicators that a large problem can be efficiently solved using dynamic programming techniques. Overlapping subproblems describes situations where solving the main problem requires repeatedly solving identical smaller sub-problems throughout the computation. Optimal substructure means that the optimal or best solution to the overall problem can be systematically constructed by combining the optimal solutions obtained from its various sub-problems.
97. What is the significance of Python’s Dynamic Typing
Python’s dynamic typing means that the type of a variable is only checked during program execution at runtime. This gives Python code tremendous flexibility and allows for faster development because we do not have to declare types upfront. The significance of this feature lies in its enabling of rapid prototyping and duck typing. The necessary information about the object’s type is carried by the object itself, accessible via the __class__ attribute, allowing the interpreter to resolve method calls dynamically.
98. Which Python libraries are most efficient for data processing
The efficiency depends on the scale of the data. For data analysis and manipulation that fits within the memory of a single machine, the pandas library is the highly optimized industry standard. For raw numerical and scientific computing, NumPy is unparalleled due to its vectorized operations implemented directly in C, which provides massive speedups over pure Python loops. When dealing with “big data” that requires processing across a cluster of machines, specialized tools like PySpark or Dask are required, leveraging distributed computing architectures.
99. How would you handle a dataset missing several values
Handling missing values is a mandatory step in any robust data workflow. We must first evaluate the extent of the missingness. If rows or entire columns are substantially incomplete, we may choose to drop them. More often, we use imputation strategies to estimate the missing data; simple methods include filling missing numerical spots with a constant value, or a much better approach is replacing the missing data with the mean or median value of that specific column to maintain statistical integrity. For maximum accuracy, complex methods like multiple-regression analyses can be used to estimate missing values based on other correlated features.
100. How can you cache function calls without using external libraries
We can efficiently achieve function caching without using any external libraries by implementing a custom memoization decorator. This involves defining a nested function structure where an internal dictionary (the cache) is used to store the results of all previous function calls. When the decorated function is invoked, the outer wrapper first checks if the input arguments are present as a key in the cache dictionary. If a match is found, the wrapper instantly returns the stored result, completely avoiding the potentially long execution time of the original function.
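A minimal sketch of that decorator, using only the standard library (the `calls` list is there purely to prove the cache is working):

```python
import functools

def memoize(func):
    cache = {}                       # maps argument tuples to results

    @functools.wraps(func)
    def wrapper(*args):
        if args not in cache:        # first time with these args: compute
            cache[args] = func(*args)
        return cache[args]           # otherwise return the stored result

    return wrapper

calls = []

@memoize
def slow_square(n):
    calls.append(n)                  # records every *real* execution
    return n * n
```

Note this simple version requires hashable positional arguments; the stdlib `functools.lru_cache` handles keyword arguments and bounded cache sizes for you.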
Conclusion
If you go through these questions properly and understand the logic behind each answer, you will already be ahead of most candidates. I prepared using the same approach, focusing on fundamentals instead of memorising everything, and it worked for me.
You can also use this list as a revision guide, not a checklist. Come back to it before interviews, practice the questions out loud, and make sure you are comfortable explaining your thinking. If this Python Interview Questions 2026 guide helps you even a little, it has done its job.