Soft limits are computational barriers that arise from the explosive growth of search spaces (combinatorial explosion). Unlike hard limits (such as undecidability), soft limits can sometimes be overcome: not with exact algorithms, but with heuristics and randomised search.
KEY TAKEAWAY: Heuristics and randomised search provide practical solutions to computationally hard problems, accepting approximate or good-enough answers instead of guaranteed optimal solutions.
A heuristic is a strategy or rule of thumb that guides search towards better solutions without guaranteeing optimality. Heuristics:
- Do not exhaustively explore the search space
- Are designed to find good solutions quickly
- May not find the global optimum
- Are problem-specific — the heuristic must be appropriate for the problem
Examples:
- For TSP: “Nearest neighbour” — always visit the closest unvisited city
- For graph colouring: “Most constrained variable” — colour the vertex with the most neighbours first
- For 0-1 Knapsack: “Best value/weight ratio first” (greedy heuristic)
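The nearest-neighbour heuristic for TSP can be sketched in a few lines. The city coordinates below are a made-up example; the point is that each step is a cheap local choice, not a search over all tours:

```python
import math

def nearest_neighbour_tour(cities, start=0):
    """Greedy TSP heuristic: from the current city, always move to the
    closest unvisited city. Fast, but may miss the optimal tour."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        current = cities[tour[-1]]
        # Pick the nearest city not yet in the tour
        nxt = min(unvisited, key=lambda i: math.dist(current, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

# Hypothetical instance: four cities forming a 3x4 rectangle
cities = [(0, 0), (3, 0), (3, 4), (0, 4)]
print(nearest_neighbour_tour(cities))  # → [0, 1, 2, 3]
```

Here greedy happens to find the optimal tour, but on other instances the last leg back to the start can be very long — that is the price of never looking ahead.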
Randomised search incorporates random choices into the search process to:
- Escape local optima (points that are locally good but not globally optimal)
- Explore diverse regions of the solution space
- Avoid worst-case behaviour of deterministic algorithms
Examples: Simulated annealing, random restarts, genetic algorithms.
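As a small illustration of the random-restart idea, hill climbing alone gets stuck on a local peak, but restarting from several random points usually finds the global one. The objective function here is invented for the example: it has a local peak at x = 2 (value 4) and a global peak at x = 8 (value 9):

```python
import random

def hill_climb(f, x, step=1, iters=100):
    """Greedy local search on the integers: move to a neighbour
    only if it improves, stop at a local optimum."""
    for _ in range(iters):
        best = max([x - step, x, x + step], key=f)
        if best == x:
            break  # no neighbour improves: local optimum
        x = best
    return x

def random_restart(f, domain, restarts=20, seed=0):
    """Run hill climbing from several random starting points
    and keep the best result found across all runs."""
    rng = random.Random(seed)
    candidates = [hill_climb(f, rng.choice(domain)) for _ in range(restarts)]
    return max(candidates, key=f)

# Invented objective: local peak at x=2 (f=4), global peak at x=8 (f=9)
f = lambda x: max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2)
print(random_restart(f, range(11)))
```

Every individual climb ends at x = 2 or x = 8 depending on where it starts; with enough restarts, at least one start is very likely to land in the global peak's basin.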
Local search starts with an initial solution and iteratively improves it by making small changes (moves to neighbouring solutions).
Components:
1. Initial solution: Random or greedy construction
2. Neighbourhood: Set of solutions reachable by one change
3. Move: Transition from current to neighbouring solution
4. Stopping criterion: Time limit, iteration count, or no improvement
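All four components appear in a 2-opt local search for TSP: the initial solution is whatever tour is passed in, the neighbourhood is every tour reachable by reversing one segment, a move replaces the current tour with a shorter neighbour, and the search stops when no single move improves. A sketch (the city coordinates are hypothetical):

```python
import math
from itertools import combinations

def tour_length(tour, cities):
    """Total length of a closed tour, including the edge back to the start."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, cities):
    """Local search: neighbourhood = all tours obtained by reversing one
    segment (a '2-opt move'). Stop when no move shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(1, len(tour)), 2):
            candidate = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(candidate, cities) < tour_length(tour, cities) - 1e-9:
                tour, improved = candidate, True
    return tour

cities = [(0, 0), (3, 0), (3, 4), (0, 4)]
# The tour [0, 2, 1, 3] crosses itself; one 2-opt move uncrosses it
print(two_opt([0, 2, 1, 3], cities))  # → [0, 1, 2, 3]
```

Note the stopping criterion here is "no improvement"; in practice a time limit or iteration cap is added as well.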
Heuristics and randomised search come with important limitations:

| Limitation | Explanation |
|---|---|
| No optimality guarantee | The solution found may not be the global optimum |
| Problem-specific | A heuristic good for TSP may not work for knapsack |
| Inconsistent performance | May perform well on average but poorly on specific instances |
| No bounds on quality | Cannot always say how far from optimal the solution is |
| Tuning required | Parameters (e.g., temperature schedule in SA) must be tuned |
| Random variability | Different runs may give different solutions |
A key challenge: local search can get stuck in local optima — solutions that are better than all their neighbours but not globally optimal.
Strategies to escape:
- Random restarts: Start from multiple random initial solutions
- Simulated annealing: Sometimes accept worse solutions (probabilistically)
- Genetic algorithms: Maintain a population of solutions, combine good ones
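The simulated annealing idea — probabilistically accepting worse solutions — can be sketched as below. This is a minimal version assuming a geometric cooling schedule and an invented objective with a local minimum at x = 2 and a global minimum at x = 8; real implementations tune the schedule carefully:

```python
import math
import random

def simulated_annealing(f, x0, neighbour, t0=10.0, cooling=0.95,
                        iters=500, seed=1):
    """Minimise f. A worse neighbour is accepted with probability
    exp(-delta / T), which lets the search escape local optima;
    T shrinks each step, so late in the run it behaves greedily."""
    rng = random.Random(seed)
    x, t = x0, t0
    best = x
    for _ in range(iters):
        y = neighbour(x, rng)
        delta = f(y) - f(x)
        # Always accept improvements; accept worse moves probabilistically
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = y
            if f(x) < f(best):
                best = x
        t *= cooling  # geometric cooling schedule (a common, simple choice)
    return best

# Invented objective: local minimum at x=2 (f=-4), global at x=8 (f=-9)
f = lambda x: -max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2)
neighbour = lambda x, rng: x + rng.choice((-1, 1))
print(simulated_annealing(f, 0, neighbour))
```

Starting from x = 0, pure hill climbing would stop at x = 2; annealing often reaches x = 8, though — as the limitations table says — nothing guarantees it, and different seeds give different runs.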
Common heuristics for the classic problems:

| Problem | Heuristic Used |
|---|---|
| TSP | Nearest neighbour, 2-opt local search, Lin-Kernighan |
| Graph Colouring | DSATUR greedy, simulated annealing |
| 0-1 Knapsack | Greedy (by value/weight ratio), DP (exact, but only feasible when $n \times$ capacity is small) |
| Scheduling | Greedy priority rules, simulated annealing |
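As a concrete illustration of the greedy knapsack heuristic from the table — and of its lack of an optimality guarantee — here is a made-up instance where the best value/weight ratio leads greedy astray:

```python
def greedy_knapsack(items, capacity):
    """0-1 knapsack heuristic: take items in decreasing value/weight
    order, skipping anything that no longer fits. Fast, but can
    miss the optimal subset."""
    chosen, total_value, remaining = [], 0, capacity
    for name, value, weight in sorted(items,
                                      key=lambda it: it[1] / it[2],
                                      reverse=True):
        if weight <= remaining:
            chosen.append(name)
            total_value += value
            remaining -= weight
    return chosen, total_value

# Invented instance, capacity 10. A has the best ratio (3.0), but taking
# it leaves only weight 4, so neither B nor C fits afterwards.
items = [("A", 18, 6), ("B", 14, 5), ("C", 14, 5)]
print(greedy_knapsack(items, 10))  # → (['A'], 18); optimal is B+C, value 28
```

This is exactly the kind of worked counterexample worth having ready: the heuristic's answer (18) is well short of the optimum (28), which demonstrates the "no optimality guarantee" limitation in one line.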
EXAM TIP: When asked about heuristics, always include: (1) the heuristic strategy, (2) what makes it a heuristic (no guarantee of optimality), and (3) at least one specific limitation. VCAA exams test whether students understand the limitations, not just the mechanics.