Algorithm Complexity (Big O)
Learn how to analyze algorithm efficiency and understand how code scales with input size.
Big O notation describes how an algorithm's performance scales as the input size grows. Unless stated otherwise, it describes the worst case: an upper bound on how the number of operations grows.
When we write O(n), we mean that as the input size n grows, the number of operations grows proportionally. Double the input, roughly double the work.
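As a minimal sketch, the hypothetical function below touches every element exactly once, so doubling the list roughly doubles the work:

```python
def count_evens(numbers):
    """O(n): visits each element exactly once."""
    count = 0
    for value in numbers:  # n iterations for n elements
        if value % 2 == 0:
            count += 1
    return count

small = list(range(1_000))  # ~1,000 iterations
large = list(range(2_000))  # ~2,000 iterations: double the input, double the work
print(count_evens(small), count_evens(large))
```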
Key Point
Big O measures operations, not actual time. An O(n) algorithm might be slower than an O(n²) one for small inputs, but it will eventually win once the input grows large enough.
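To see why, compare hypothetical operation counts: an O(n) algorithm that costs 100n operations against an O(n²) algorithm that costs n². The quadratic one actually does less work until n passes the crossover point:

```python
# Hypothetical costs: a linear algorithm doing 100n operations
# versus a quadratic one doing n² operations.
for n in (10, 100, 1_000):
    linear_ops = 100 * n
    quadratic_ops = n * n
    if quadratic_ops < linear_ops:
        verdict = "O(n²) wins"
    elif quadratic_ops == linear_ops:
        verdict = "tie (the crossover point)"
    else:
        verdict = "O(n) wins"
    print(f"n={n:>5}: 100n={linear_ops:>9,}  n²={quadratic_ops:>9,}  {verdict}")
```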
In the real world, efficiency can make or break your application. Consider these scenarios:
Searching 1 Billion Records
- O(n) Linear Search: 1,000,000,000 operations
- O(log n) Binary Search: ~30 operations
That's 33 million times faster!
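As a rough sketch of both approaches, assuming the records live in a sorted Python list:

```python
from bisect import bisect_left

def linear_search(records, target):
    """O(n): may scan every record before finding the target."""
    for i, record in enumerate(records):
        if record == target:
            return i
    return -1

def binary_search(records, target):
    """O(log n): halves the search range each step (records must be sorted)."""
    i = bisect_left(records, target)
    if i < len(records) and records[i] == target:
        return i
    return -1

records = list(range(0, 2_000_000, 2))    # 1,000,000 sorted records
print(linear_search(records, 1_999_998))  # ~1,000,000 comparisons
print(binary_search(records, 1_999_998))  # ~20 comparisons
```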
Mobile App Performance
On a phone with limited CPU, an O(n²) algorithm sorting 10,000 items could freeze your app, while O(n log n) completes instantly.
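Plugging n = 10,000 into both growth rates shows the gap (a back-of-the-envelope estimate, not a benchmark):

```python
import math

n = 10_000
quadratic_ops = n * n                       # 100,000,000 operations
linearithmic_ops = round(n * math.log2(n))  # ~132,877 operations
print(f"O(n²): {quadratic_ops:,} ops  vs  O(n log n): {linearithmic_ops:,} ops")
# Roughly a 750x difference on the same 10,000 items.
```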
Job Interviews
Big O comes up in nearly every technical interview. Understanding complexity is essential for landing software engineering jobs.
Constant (O(1))
Operations take the same time regardless of input size
Examples: Array access by index, Arithmetic operations
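Two minimal sketches of O(1) operations:

```python
def get_first(items):
    """O(1): one index lookup, no matter how long the list is."""
    return items[0]

def average_of_two(a, b):
    """O(1): a fixed number of arithmetic operations."""
    return (a + b) / 2
```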
Logarithmic (O(log n))
Operations grow by a constant amount each time the input doubles
Examples: Binary search, Balanced tree operations
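A quick way to build intuition: count how many times a number can be halved before it reaches 1, which is what binary search effectively does to its search range:

```python
def halving_steps(n):
    """Counts how many times n can be halved before reaching 1: ~log2(n) steps."""
    steps = 0
    while n > 1:
        n //= 2  # the remaining work halves on every iteration
        steps += 1
    return steps

print(halving_steps(1_000_000_000))  # 29 -- about log2(1,000,000,000) ≈ 30
```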
Linear (O(n))
Operations grow proportionally with input size
Examples: Single loop through array, Linear search
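Another single-loop sketch, finding the largest value in a list (assumed non-empty):

```python
def largest(values):
    """O(n): a single loop; work grows in lockstep with the input length."""
    best = values[0]
    for v in values[1:]:  # n - 1 comparisons
        if v > best:
            best = v
    return best
```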
Linearithmic (O(n log n))
Slightly slower than linear; common in efficient sorting algorithms
Examples: Merge Sort, Quick Sort (average)
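A minimal merge sort sketch: the list is split log n times, and each level of splitting does O(n) merge work:

```python
def merge_sort(items):
    """O(n log n): log n levels of splitting, O(n) merge work per level."""
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge the two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```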
Quadratic (O(n²))
Operations grow with the square of input size
Examples: Nested loops, Bubble Sort
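A bubble sort sketch showing the nested loops that produce the n² growth:

```python
def bubble_sort(items):
    """O(n²): the nested loops compare roughly n²/2 pairs in the worst case."""
    items = list(items)              # work on a copy
    n = len(items)
    for i in range(n):               # outer loop: n passes
        for j in range(n - 1 - i):   # inner loop: shrinking scan
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

print(bubble_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```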
Exponential (O(2ⁿ))
Operations double with each additional input element
Examples: Naive Fibonacci, Subset generation
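The naive recursive Fibonacci is the classic example: each call spawns two more calls, so the call tree roughly doubles per level:

```python
def fib(n):
    """O(2ⁿ): each call spawns two more, so the call tree doubles per level."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55 -- fine, but fib(50) this way would take ages
```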
While time complexity measures operations, space complexity measures how much memory an algorithm uses. The same Big O notation applies, as the sketch after this list shows:
- O(1) space: Uses a fixed amount of memory regardless of input (a few variables)
- O(n) space: Memory grows with input size (like creating a copy of an array)
- O(n²) space: Memory grows quadratically (like a 2D matrix of size n×n)
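A minimal sketch of the O(1) versus O(n) space difference, using list reversal:

```python
def reverse_in_place(items):
    """O(1) extra space: only two index variables, no copies."""
    left, right = 0, len(items) - 1
    while left < right:
        items[left], items[right] = items[right], items[left]
        left += 1
        right -= 1

def reversed_copy(items):
    """O(n) extra space: builds a whole new list the size of the input."""
    return items[::-1]
```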
There's often a trade-off between time and space: you can sometimes make an algorithm faster by using more memory, and vice versa.
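A classic instance of that trade-off: caching the naive Fibonacci's results spends O(n) memory to collapse its exponential runtime to O(n):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_cached(n):
    """O(n) time, O(n) space: each subproblem is computed once, then remembered."""
    if n < 2:
        return n
    return fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(50))  # 12586269025, instantly -- the naive version would not finish
```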