Common Time Complexities - StudyPulse

Algorithmics (HESS)
01 May 2026

Common Algorithm Time Complexities

Overview

Recognising the time complexity of an algorithm from its structure is a core VCE Algorithmics skill. The key complexities (from most to least efficient) are:

$$O(1),\; O(\log n),\; O(n),\; O(n \log n),\; O(n^2),\; O(2^n),\; O(n!)$$


$O(1)$ — Constant Time

Characteristics:
- Execution time does not depend on input size
- Performs a fixed number of operations regardless of $n$

Examples:
- Array element access: A[i]
- Hash table lookup (average case)
- Stack push/pop, Queue enqueue/dequeue
- Arithmetic operations
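
Each of the operations above can be seen in a minimal Python sketch (the specific values are illustrative only); none of them take longer as the collection grows:

```python
from collections import deque

A = [5, 2, 9, 1]
x = A[2]                 # array element access: one step, independent of len(A)

table = {"key": 42}
v = table["key"]         # hash table lookup: O(1) on average

stack = []
stack.append(7)          # stack push: O(1)
top = stack.pop()        # stack pop: O(1)

queue = deque()
queue.append(7)          # enqueue: O(1)
front = queue.popleft()  # dequeue: O(1)
```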


$O(\log n)$ — Logarithmic Time

Characteristics:
- The input (or a loop variable) is repeatedly halved or doubled at each step
- Very efficient — grows slowly even for enormous inputs

Examples:
- Binary search
- Priority queue insert/remove (heap)
- Each heap operation inside Dijkstra’s main loop

Key feature in code: A loop variable is multiplied or divided by a constant (not incremented by 1).

i ← n
while i > 1 do
    i ← i / 2
end while
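Binary search is the canonical example: each comparison halves the remaining search range, so a sorted list of $n$ elements needs at most about $\log_2 n$ iterations. A minimal Python version:

```python
def binary_search(a, target):
    """Return the index of target in sorted list a, or -1 if absent.
    Each iteration halves the range [lo, hi], so at most O(log n) steps."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1   # discard the lower half
        else:
            hi = mid - 1   # discard the upper half
    return -1
```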

$O(n)$ — Linear Time

Characteristics:
- Process each element once (single pass)
- Optimal for many problems (you must at least read all input)

Examples:
- Linear search
- Finding maximum/minimum in unsorted array
- BFS and DFS: $O(V + E)$ (linear in graph size)
- Bellman-Ford’s edge relaxation pass: $O(E)$ per iteration
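Finding the maximum in an unsorted list illustrates the single-pass pattern: every element is examined exactly once, so the work is proportional to $n$.

```python
def find_max(a):
    """Single pass over a non-empty list: n - 1 comparisons, so O(n)."""
    best = a[0]
    for x in a[1:]:   # touch each remaining element exactly once
        if x > best:
            best = x
    return best
```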


$O(n \log n)$ — Log-Linear Time

Characteristics:
- Often arises from divide-and-conquer with a linear merge/combine step
- The “gold standard” for comparison-based sorting

Examples:
- Mergesort
- Quicksort (average case)
- Heapsort
- Prim’s and Dijkstra’s with a heap: $O((V+E)\log V)$

Recognition: A loop running $n$ times, each iteration doing $O(\log n)$ work.
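Mergesort shows both patterns at once: the recursion splits the input into $\log_2 n$ levels, and every level does $O(n)$ merging work. A compact Python sketch:

```python
def mergesort(a):
    """O(n log n): about log n levels of splitting, O(n) merge work per level."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = mergesort(a[:mid]), mergesort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # the linear merge step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]      # append whichever half remains
```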


$O(n^2)$ — Quadratic Time

Characteristics:
- Two nested loops, each running $n$ times
- Acceptable for small inputs ($n \leq 10^4$); too slow for large inputs

Examples:
- Bubble sort, insertion sort, selection sort
- (Contrast: Floyd-Warshall is $O(n^3)$ — three nested loops, one degree worse than quadratic)
- Prim’s with an adjacency matrix and no heap: $O(V^2)$
- Checking all pairs: for i in range(n): for j in range(n): ...
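The all-pairs pattern in Python (the pair-sum task here is just an illustrative choice of problem):

```python
def count_pairs_with_sum(a, s):
    """Two nested loops over n elements: O(n^2) pair checks."""
    count = 0
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):   # every unordered pair (i, j) once
            if a[i] + a[j] == s:
                count += 1
    return count
```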


$O(2^n)$ — Exponential Time

Characteristics:
- Doubles with each unit increase in $n$
- Only feasible for very small $n$ (≤ 25 or so)

Examples:
- Brute-force subset enumeration (all subsets of $n$ items)
- Naive recursive Fibonacci (though dynamic programming fixes this)
- Brute-force 0-1 Knapsack

Origin: Often arises from binary decisions at each step (include or exclude).
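The include-or-exclude structure is visible in a recursive subset enumerator: each item doubles the number of subsets, giving $2^n$ in total.

```python
def all_subsets(items):
    """Each item is a binary include/exclude decision: 2^n subsets overall."""
    if not items:
        return [[]]                  # one subset of the empty list: {}
    rest = all_subsets(items[1:])    # subsets excluding items[0] ...
    return rest + [[items[0]] + s for s in rest]   # ... plus those including it
```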


$O(n!)$ — Factorial Time

Characteristics:
- Grows impossibly fast; only feasible for $n \leq 12$
- Arises from enumerating all permutations

Examples:
- Brute-force Travelling Salesman Problem (try all routes)
- Brute-force solving an $n \times n$ assignment problem
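Brute-force TSP makes the factorial growth concrete: fixing the start city and trying every ordering of the rest means $(n-1)!$ routes. A sketch, assuming `dist` is an $n \times n$ matrix of pairwise distances (an illustrative input format):

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try all (n-1)! routes starting and ending at city 0: O(n!) overall."""
    n = len(dist)
    best = float("inf")
    for order in permutations(range(1, n)):      # every ordering of cities 1..n-1
        route = (0,) + order + (0,)              # close the tour back at city 0
        cost = sum(dist[route[k]][route[k + 1]] for k in range(n))
        best = min(best, cost)
    return best
```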


Growth Rate Comparison

| $n$ | $O(\log n)$ | $O(n)$ | $O(n \log n)$ | $O(n^2)$ | $O(2^n)$ | $O(n!)$ |
|----:|------------:|-------:|--------------:|---------:|---------:|--------:|
| 10  | 3           | 10     | 33            | 100      | 1024     | ≈3.6M   |
| 20  | 4           | 20     | 86            | 400      | ≈1M      | huge    |
| 100 | 7           | 100    | 665           | 10,000   | $\approx 10^{30}$ | impossible |
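The table's values (rounded) can be reproduced directly, which is a useful sanity check when revising:

```python
import math

def growth(n):
    """Rounded values of each complexity function for a given n."""
    return {
        "log n":   round(math.log2(n)),
        "n":       n,
        "n log n": round(n * math.log2(n)),
        "n^2":     n ** 2,
        "2^n":     2 ** n,
        "n!":      math.factorial(n),
    }
```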

EXAM TIP: Be able to identify the time complexity of an algorithm from its code/pseudocode structure AND name an algorithm (or describe a problem) for each complexity class. Recognition questions are very common in VCAA exams.

VCAA FOCUS: For each complexity class, know at least one concrete algorithm. Mergesort → $O(n \log n)$; BFS/DFS → $O(V+E)$; binary search → $O(\log n)$; Floyd-Warshall → $O(n^3)$.
