Runtime Analysis
4.1 Introduction
Understanding how algorithms perform as input sizes grow is fundamental to computer science and software engineering. Time complexity analysis provides a mathematical framework for comparing algorithms and predicting their behavior on large datasets. This chapter introduces asymptotic notation, particularly Big O notation, which has become the standard language for describing algorithmic efficiency.
4.2 What is Time Complexity?
Time complexity measures how the running time of an algorithm scales with the size of its input. Rather than measuring actual execution time (which depends on hardware, programming language, and implementation details), time complexity focuses on the growth rate of operations as input size increases.
4.2.1 Measuring Operations
When analyzing time complexity, we count fundamental operations that contribute to the algorithm’s running time:
Arithmetic operations (+, -, *, /, %)
Comparisons (<, >, ==, !=)
Assignment operations
Array/memory accesses
Function calls
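For example, the short (hypothetical) function below is annotated with the operations such a tally would count; ignoring loop bookkeeping, it performs roughly 3n + 2 primitive operations, which grows linearly in n.

func sumOfSquares(_ numbers: [Int]) -> Int {
    var total = 0                  // 1 assignment
    for value in numbers {         // n iterations
        total += value * value     // 1 multiplication + 1 addition + 1 assignment per iteration
    }
    return total                   // 1 return
}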
4.2.2 Input Size
The input size, typically denoted as n, represents the amount of data the algorithm processes. For different data structures:
Arrays/Lists: Number of elements
Strings: Number of characters
Graphs: Number of vertices (V) and edges (E)
Trees: Number of nodes
Matrices: Dimensions (rows × columns)
4.3 Asymptotic Notation
Asymptotic notation provides a mathematical way to describe the limiting behavior of functions as their input approaches infinity. The three primary notations are Big O, Big Omega, and Big Theta.
4.3.1 Big O Notation (O)
Big O notation describes an upper bound on an algorithm’s growth rate: for sufficiently large inputs, the running time grows no faster than the stated rate. In practice it is most commonly used to describe worst-case behavior.
Formal Definition: A function f(n) is O(g(n)) if there exist positive constants c and n₀ such that:
f(n) ≤ c × g(n) for all n ≥ n₀
Intuitive Understanding: Big O tells us “this algorithm’s running time will never grow faster than this rate.”
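As an illustrative check of the definition (f, g, c, and n₀ here are our own choices), take f(n) = 3n² + 5n and g(n) = n². Choosing c = 4 and n₀ = 5 makes the inequality hold, which the snippet below verifies numerically over a test range:

// Numerically check that 3n² + 5n ≤ 4·n² for all n ≥ 5 (up to a test limit).
let c = 4
let n0 = 5
let holds = (n0...1_000).allSatisfy { (n: Int) -> Bool in
    let f = 3 * n * n + 5 * n     // f(n)
    let g = n * n                 // g(n)
    return f <= c * g
}
print(holds) // true, so f(n) is O(n²) with these constants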
4.3.2 Big Omega Notation (Ω)
Big Omega notation describes a lower bound on an algorithm’s growth rate: for sufficiently large inputs, the running time grows at least this fast. It is most commonly used to describe best-case behavior.
Formal Definition: A function f(n) is Ω(g(n)) if there exist positive constants c and n₀ such that:
f(n) ≥ c × g(n) for all n ≥ n₀
4.3.3 Big Theta Notation (Θ)
Big Theta notation describes the tight bound of an algorithm’s growth rate. It means the algorithm’s performance is bounded both above and below by the same growth rate.
Formal Definition: A function f(n) is Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)).
4.3.4 Little o and Little omega
Little o (o): Strict upper bound - f(n) grows strictly slower than g(n); for example, n is o(n²)
Little omega (ω): Strict lower bound - f(n) grows strictly faster than g(n); for example, n² is ω(n)
4.4 Common Time Complexities
Understanding the hierarchy of common time complexities is crucial for algorithm analysis.
4.4.1 Constant Time - O(1)
The algorithm’s performance doesn’t depend on input size. Examples include:
func getFirstElement<T>(_ array: [T]) -> T? {
    return array.first // Always one operation
}

func hashTableLookup<K: Hashable, V>(_ dictionary: [K: V], key: K) -> V? {
    return dictionary[key] // Constant time on average
}
4.4.2 Logarithmic Time - O(log n)
Performance grows logarithmically with input size. Common in divide-and-conquer algorithms:
func binarySearch<T: Comparable>(_ array: [T], target: T) -> Int? {
    var left = 0
    var right = array.count - 1
    while left <= right {
        let mid = (left + right) / 2
        if array[mid] == target {
            return mid
        } else if array[mid] < target {
            left = mid + 1
        } else {
            right = mid - 1
        }
    }
    return nil
}
4.4.3 Linear Time - O(n)
Performance scales directly with input size:
func findMaximum<T: Comparable>(_ array: [T]) -> T? {
    guard !array.isEmpty else { return nil }
    var maxValue = array[0]
    for element in array.dropFirst() { // n-1 comparisons
        if element > maxValue {
            maxValue = element
        }
    }
    return maxValue
}
4.4.4 Linearithmic Time - O(n log n)
Common in efficient sorting algorithms:
func mergeSort<T: Comparable>(_ array: [T]) -> [T] {
    if array.count <= 1 {
        return array
    }
    let mid = array.count / 2
    let left = mergeSort(Array(array[0..<mid]))            // T(n/2)
    let right = mergeSort(Array(array[mid..<array.count])) // T(n/2)
    return merge(left, right)                              // O(n)
}

func merge<T: Comparable>(_ left: [T], _ right: [T]) -> [T] {
    var result: [T] = []
    var leftIndex = 0
    var rightIndex = 0
    while leftIndex < left.count && rightIndex < right.count {
        if left[leftIndex] <= right[rightIndex] {
            result.append(left[leftIndex])
            leftIndex += 1
        } else {
            result.append(right[rightIndex])
            rightIndex += 1
        }
    }
    result.append(contentsOf: left[leftIndex...])
    result.append(contentsOf: right[rightIndex...])
    return result
}
4.4.5 Quadratic Time - O(n²)
Performance grows with the square of input size:
func bubbleSort<T: Comparable>(_ array: inout [T]) {
    let n = array.count
    for _ in 0..<n {               // n passes
        for j in 0..<(n - 1) {     // n-1 comparisons each pass
            if array[j] > array[j + 1] {
                array.swapAt(j, j + 1)
            }
        }
    }
}
4.4.6 Cubic Time - O(n³)
Often seen in triple-nested loops:
func matrixMultiply(_ A: [[Int]], _ B: [[Int]]) -> [[Int]] {
    let n = A.count
    var C = Array(repeating: Array(repeating: 0, count: n), count: n)
    for i in 0..<n {               // n iterations
        for j in 0..<n {           // n iterations each
            for k in 0..<n {       // n iterations each
                C[i][j] += A[i][k] * B[k][j]
            }
        }
    }
    return C
}
4.4.7 Exponential Time - O(2ⁿ)
Running time roughly doubles with each unit increase in input size. The classic example is the naive recursive Fibonacci below (its true growth rate is about 1.6ⁿ, still exponential and commonly bounded by O(2ⁿ)):
func fibonacciNaive(_ n: Int) -> Int {
    if n <= 1 {
        return n
    }
    return fibonacciNaive(n - 1) + fibonacciNaive(n - 2)
}
4.4.8 Factorial Time - O(n!)
Extremely inefficient; it typically arises in brute-force solutions that enumerate every ordering of the input:
func generatePermutations<T>(_ array: [T]) -> [[T]] {
    if array.count <= 1 {
        return [array]
    }
    var result: [[T]] = []
    for i in 0..<array.count {
        var rest = array
        let element = rest.remove(at: i)
        for perm in generatePermutations(rest) {
            result.append([element] + perm)
        }
    }
    return result
}
4.5 Analyzing Algorithm Complexity
4.5.1 Rules for Big O Analysis
Drop constants: O(3n) becomes O(n)
Drop lower-order terms: O(n² + n) becomes O(n²)
Different inputs use different variables: O(a + b) not O(n)
Nested loops multiply: Two nested loops over n elements = O(n²)
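The hypothetical functions below illustrate rules 3 and 4: independent inputs keep separate size variables, and nesting multiplies their contributions.

// Rule 3: separate inputs get separate variables — O(a + b), not O(n).
func printBoth(_ first: [Int], _ second: [Int]) {
    for x in first { print(x) }       // a iterations
    for y in second { print(y) }      // b iterations
}

// Rule 4: nested loops multiply — O(a × b).
func printPairs(_ first: [Int], _ second: [Int]) {
    for x in first {                  // a iterations
        for y in second {             // b iterations for each of the a iterations
            print(x, y)
        }
    }
}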
4.5.2 Best, Average, and Worst Case
Algorithms can have different time complexities depending on input characteristics:
Best case: Most favorable input arrangement
Average case: Expected performance across all possible inputs
Worst case: Least favorable input arrangement
Example - Quick Sort:
Best case: O(n log n) - pivot divides array evenly
Average case: O(n log n) - random pivot selection
Worst case: O(n²) - pivot is always smallest or largest
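For reference, here is a minimal quicksort sketch; it uses the last element as the pivot, which is our own (common but arbitrary) choice rather than anything prescribed above. With this choice, an already-sorted input makes every partition maximally unbalanced and triggers the O(n²) worst case.

// Minimal quicksort sketch with the last element as pivot.
func quickSort<T: Comparable>(_ array: [T]) -> [T] {
    guard array.count > 1 else { return array }
    let pivot = array[array.count - 1]           // pivot choice determines best/worst case
    let rest = array.dropLast()
    let smaller = rest.filter { $0 <= pivot }    // elements left of the pivot
    let larger = rest.filter { $0 > pivot }      // elements right of the pivot
    return quickSort(smaller) + [pivot] + quickSort(larger)
}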
4.5.3 Amortized Analysis
Some algorithms have operations that are expensive occasionally but cheap on average. Amortized analysis considers the average cost over a sequence of operations.
Example - Dynamic Array Resizing:
Individual insertions: O(1) amortized
Occasional resize operation: O(n)
Overall sequence: O(1) per insertion on average
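The sketch below mimics this doubling strategy (a simplified stand-in for a real dynamic array such as Swift’s Array, whose exact growth policy is an implementation detail):

// Simplified dynamic array that doubles its capacity when full.
// A resize copies the whole buffer (O(n)), but it happens so rarely that the
// cost averages out to O(1) per append over any sequence of appends.
struct DynamicArray<T> {
    private var storage: [T?] = [nil]        // backing buffer, initial capacity 1
    private(set) var count = 0               // number of elements actually stored

    mutating func append(_ element: T) {
        if count == storage.count {          // buffer full: double it — the rare O(n) step
            storage += [T?](repeating: nil, count: storage.count)
        }
        storage[count] = element             // the common case: O(1)
        count += 1
    }
}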
4.6 Space Complexity
Space complexity measures how memory usage scales with input size, using the same notations as time complexity.
4.6.1 Types of Space Usage
Input space: Memory to store input data
Auxiliary space: Extra memory used by algorithm
Total space: Input space + auxiliary space
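As a small illustration (our own example), reversing an array in place needs only O(1) auxiliary space, while building a reversed copy needs O(n):

// O(1) auxiliary space: reverse in place by swapping from both ends.
func reverseInPlace<T>(_ array: inout [T]) {
    var left = 0
    var right = array.count - 1
    while left < right {
        array.swapAt(left, right)
        left += 1
        right -= 1
    }
}

// O(n) auxiliary space: build a second array of the same size.
func reversedCopy<T>(_ array: [T]) -> [T] {
    var result: [T] = []
    result.reserveCapacity(array.count)   // result grows to n elements
    for element in array.reversed() {
        result.append(element)
    }
    return result
}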
4.6.2 Space-Time Tradeoffs
Often, algorithms can trade space for time or vice versa:
Memoization Example:
func fibonacciMemo(_ n: Int, memo: inout [Int: Int]) -> Int {
    if let cached = memo[n] {
        return cached
    }
    if n <= 1 {
        return n
    }
    let result = fibonacciMemo(n - 1, memo: &memo) + fibonacciMemo(n - 2, memo: &memo)
    memo[n] = result
    return result
}
Time: O(n) instead of O(2ⁿ)
Space: O(n) for the memo table (note that the naive recursive version also uses O(n) stack space)
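For comparison (our own addition), a bottom-up iterative version keeps the O(n) time while reducing auxiliary space to O(1), since it only remembers the two most recent values:

// Bottom-up Fibonacci: O(n) time, O(1) auxiliary space.
func fibonacciIterative(_ n: Int) -> Int {
    guard n > 1 else { return n }
    var previous = 0
    var current = 1
    for _ in 2...n {
        (previous, current) = (current, previous + current)
    }
    return current
}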
4.7 Practical Considerations
4.7.1 When Big O Matters
Large datasets: Differences become pronounced
Repeated operations: Small improvements compound
System scalability: Growth rate affects system limits
Real-time systems: Predictable performance requirements
4.7.2 When Big O Doesn’t Tell the Whole Story
Small datasets: Constants and lower-order terms matter
Cache effects: Memory access patterns affect real performance
Parallel processing: Big O assumes sequential execution
Hardware considerations: Different operations have different costs
4.7.3 Algorithm Selection Guidelines
Understand your data: Size, distribution, and characteristics
Consider constraints: Time, space, and stability requirements
Profile in practice: Theoretical analysis guides but doesn’t replace testing
Optimize appropriately: Focus on bottlenecks, not premature optimization
4.8 Advanced Topics
4.8.1 Recursive Complexity Analysis
Master Theorem: For recurrences of the form T(n) = aT(n/b) + f(n):
If f(n) = O(n^(log_b a − ε)) for some ε > 0, then T(n) = Θ(n^(log_b a))
If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) × log n)
If f(n) = Ω(n^(log_b a + ε)) for some ε > 0 (and the regularity condition a·f(n/b) ≤ c·f(n) holds for some constant c < 1), then T(n) = Θ(f(n))
For example, merge sort’s recurrence T(n) = 2T(n/2) + O(n) has a = 2, b = 2, and f(n) = Θ(n) = Θ(n^(log₂ 2)), so the second case gives T(n) = Θ(n log n).
4.8.2 Probabilistic Analysis
Some algorithms have performance that depends on probability distributions:
Randomized algorithms: Use random choices during execution
Average-case analysis: Expected performance over input distributions
Concentration bounds: Probability that performance deviates from expected
4.8.3 Competitive Analysis
For online algorithms that make decisions without knowing future inputs:
Competitive ratio: Performance compared to optimal offline algorithm
Lower bounds: Fundamental limits on online algorithm performance
4.9 Common Mistakes and Misconceptions
4.9.1 Misunderstanding Big O
Mistake: Assuming an O(n) algorithm is always faster than an O(n²) algorithm
Reality: For small n, the O(n²) algorithm may run faster because of smaller constant factors and lower-order terms
4.9.2 Ignoring Real-World Factors
Mistake: Focusing only on asymptotic complexity
Reality: Cache locality, branch prediction, and parallelism matter
4.9.3 Premature Optimization
Mistake: Optimizing before identifying bottlenecks
Reality: Profile first, then optimize the slowest parts
4.10 Summary
Time complexity analysis using Big O notation provides a fundamental framework for understanding algorithm performance. Key takeaways include:
Big O describes upper bounds on growth rates
Common complexities form a hierarchy from O(1) to O(n!)
Analysis involves counting operations and considering worst-case scenarios
Space complexity uses the same notation and is often just as important as time complexity
Practical considerations often influence algorithm choice beyond theoretical analysis
Understanding these concepts enables informed decisions about algorithm selection and optimization, forming the foundation for efficient software development and system design.
4.11 Exercises
Basic Exercises
Determine the time complexity of the following code:
func mysteryFunction(_ n: Int) -> Int {
    var count = 0
    for i in 0..<n {
        for j in 0..<i {
            count += 1
        }
    }
    return count
}
Compare the growth rates: O(n!), O(2ⁿ), O(n³), O(n²), O(n log n), O(n), O(log n), O(1)
Explain why we drop constants and lower-order terms in Big O notation.
Intermediate Exercises
Analyze the time and space complexity of binary search, both iterative and recursive implementations.
Given an algorithm with time complexity T(n) = 2T(n/2) + O(n), use the Master Theorem to determine its complexity.
Design two algorithms to find duplicates in an array: one with O(n²) time and O(1) space, another with O(n) time and O(n) space.
Advanced Exercises
Analyze the amortized complexity of operations on a dynamic array (vector) that doubles in size when full.
Compare the theoretical and practical performance of merge sort vs. quicksort for different input distributions.
Design an algorithm to find the k-th smallest element in an unsorted array. Analyze different approaches and their trade-offs.
This chapter provides a comprehensive foundation for understanding algorithmic complexity. The next chapter will explore specific algorithm design paradigms and their complexity characteristics.