How to Choose the Right Algorithm

How do you choose an algorithm from your toolbox that is suited to the computational problem in front of you? When you first try to put the toolbox into practice, the sheer number of algorithms, data structures, and design paradigms can feel daunting.

When confronted with a new problem, what’s the most effective way to put your tools to work? To give you a starting point, here is a typical recipe for approaching a new computational problem; over time, develop your own recipe based on your personal experience.

  1. Can you avoid solving the problem from scratch? Is it a disguised version, variant, or special case of a problem that you already know how to solve? For example, can it be reduced to sorting, graph search, or a shortest-path computation? If so, use the fastest algorithm sufficient for solving the problem.

  2. Can you simplify the problem by processing the input using a for-free primitive, such as sorting or computing connected components?

  3. If you must design a new algorithm from scratch, get calibrated by identifying the line in the sand drawn by the “obvious” solution (such as exhaustive search). Is the running time of the obvious solution already good enough?

  4. If the obvious solution is inadequate, brainstorm as many natural greedy algorithms as you can and test them on small examples. Most likely, all will fail. But the ways in which they fail will help you better understand the problem.

  5. If there is a natural way to break down the problem into smaller subproblems, how easy would it be to combine their solutions into one for the original problem? If you see how to do it efficiently, proceed with the divide-and-conquer paradigm.

  6. Try dynamic programming. Can you argue that a solution must be built up from solutions to smaller subproblems in one of a small number of ways? Can you formulate a recurrence to quickly solve a subproblem given solutions to a modest number of smaller subproblems?

  7. In the happy case that you devise a good algorithm for the problem, can you make it even better by deploying the right data structures? Look for significant computations that your algorithm performs over and over again (like lookups or minimum computations). Remember the principle of parsimony: Choose the simplest data structure that supports all the operations required by your algorithm.

  8. Can you make your algorithm simpler or faster using randomization? For example, if your algorithm must choose one object among many, what happens when it chooses randomly?

  9. If all preceding steps end in failure, contemplate the unfortunate but common possibility that there is no efficient algorithm for your problem. Can you prove that your problem is computationally intractable by reducing a known NP-hard problem to it?

  10. Iterate over the algorithm design paradigms again, this time looking for opportunities for fast heuristics (especially with greedy algorithms) and better-than-exhaustive-search exact algorithms (especially with dynamic programming).

A Strategic Approach to Algorithm Selection

Choosing the right algorithm for a given problem is an essential part of programming and problem-solving. The recipe above gives you a systematic way to do this, driven by the specific needs of the problem. Let’s go through its steps in detail; several of them are illustrated with short code sketches after this list:

  1. Avoid solving the problem from scratch: Before trying to devise a new algorithm, check whether your problem can be reduced to, or is a variant of, a problem you already know how to solve. This can save you considerable time and effort. For instance, if your problem reduces to sorting, you can use a well-known sorting algorithm like QuickSort or MergeSort.

  2. Simplify the problem: Preprocessing the input with a “for-free” primitive can often make the rest of the job easier. This could involve sorting the input or, in a graph, partitioning it into connected components, among other things.

  3. Identify the “obvious” solution: This step involves coming up with a naive or brute-force solution to the problem. Even if this solution is not efficient, it gives you a correctness check and a running-time baseline that any cleverer algorithm has to beat, and it helps you understand the problem better.

  4. Brainstorm greedy algorithms: A greedy algorithm makes the locally optimal choice at each stage in the hope that these local choices add up to a global optimum. Greedy algorithms are usually fast and straightforward to implement; they are correct for some problems, and even when they fail, the way they fail tells you something about the problem’s structure.

  5. Try divide-and-conquer: If the problem can be broken down into smaller sub-problems whose solutions are easy to combine, a divide-and-conquer approach can pay off: divide the problem, solve the sub-problems recursively, and combine their solutions into a solution for the original problem.

  6. Use dynamic programming: Dynamic programming (DP) is an optimization technique used to solve complex problems by breaking them down into simpler sub-problems. If your problem has overlapping subproblems and an optimal substructure, DP is an excellent strategy to consider.

  7. Leverage data structures: Data structures can often speed up an algorithm by reducing the cost of the operations it performs over and over. For example, using a hash table can reduce lookups from O(n) (scanning a list) to O(1) on average. Keep the principle of parsimony in mind: pick the simplest data structure that supports the operations your algorithm needs.

  8. Randomization: Randomization can sometimes simplify or speed up an algorithm. Randomized algorithms can sidestep worst-case inputs that defeat any fixed deterministic choice (a random pivot in QuickSort is the classic example), and in some cases they lead to solutions that are both simpler and faster.

  9. Consider intractability: If you’ve tried everything and nothing seems to work efficiently, it’s possible that the problem you’re dealing with is NP-hard, meaning no efficient algorithm is known for it (and, unless P = NP, none exists). You can try to prove this by reducing a known NP-hard problem to yours. In such a case, you might need to settle for an approximation algorithm or a heuristic.

  10. Iterate over the paradigms: If you haven’t found a satisfactory solution, go through the previous steps again. Each iteration might give you new insights about the problem, which can lead you to an efficient solution. Remember that sometimes heuristics, or rules of thumb, can provide good enough solutions when optimal solutions are too costly to compute.
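
The short Python sketches below illustrate several of the steps above. They are minimal illustrations under simplifying assumptions, not production code, and the example problems, data, and function names are invented for the purpose. Step 1 (reduce to a problem you already know): detecting whether a list contains a duplicate reduces to sorting, because after sorting any duplicates must sit next to each other.

```python
def contains_duplicate(items):
    """Reduce duplicate detection to sorting: after sorting,
    equal elements must end up adjacent."""
    ordered = sorted(items)  # O(n log n)
    return any(a == b for a, b in zip(ordered, ordered[1:]))  # linear scan

print(contains_duplicate([3, 1, 4, 1, 5]))  # True
print(contains_duplicate([2, 7, 1, 8]))     # False
```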
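
Step 3 (the “obvious” solution): as a hypothetical example, suppose the problem is to find two numbers in a list that add up to a target. The exhaustive-search baseline checks every pair in O(n²) time; it is the line in the sand that a cleverer algorithm has to beat.

```python
def pair_with_sum_brute(nums, target):
    """Exhaustive search over all pairs: O(n^2), but obviously correct."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return nums[i], nums[j]
    return None  # no such pair

print(pair_with_sum_brute([3, 9, 5, 11], 14))  # (3, 11)
```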
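
Step 4 (greedy algorithms): one classic case where a natural greedy rule happens to be correct is interval scheduling, i.e., selecting as many non-overlapping intervals as possible. Repeatedly taking the interval that finishes earliest is optimal; other natural rules (shortest interval first, earliest start first) fail on small examples, which is exactly what testing them reveals.

```python
def max_nonoverlapping(intervals):
    """Greedy rule: always take the remaining interval that finishes earliest."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

print(max_nonoverlapping([(1, 4), (3, 5), (0, 6), (5, 7), (6, 8)]))
# [(1, 4), (5, 7)]
```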
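
Step 5 (divide-and-conquer): MergeSort is the textbook instance. Splitting the array in half is trivial; the insight is that two sorted halves can be combined in linear time, which gives O(n log n) overall.

```python
def merge_sort(a):
    """Divide: split in half. Conquer: sort each half recursively.
    Combine: merge the two sorted halves in linear time."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # append whichever half has leftovers

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```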
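
Step 6 (dynamic programming): making change with the fewest coins is a small example with the right structure. The recurrence says that the best answer for an amount x is one coin plus the best answer for x minus that coin’s value, minimized over the available coin values; each subproblem is solved once, bottom up.

```python
def min_coins(coin_values, amount):
    """best[x] = fewest coins summing to exactly x, or infinity if impossible.
    Recurrence: best[x] = 1 + min(best[x - c]) over coin values c <= x."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for x in range(1, amount + 1):
        for c in coin_values:
            if c <= x and best[x - c] + 1 < best[x]:
                best[x] = best[x - c] + 1
    return best[amount] if best[amount] != INF else None

print(min_coins([1, 5, 10, 25], 63))  # 6  (25 + 25 + 10 + 1 + 1 + 1)
```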
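
Step 7 (data structures): if an algorithm performs many membership lookups, storing the items in a hash set makes each lookup O(1) on average instead of the O(n) cost of scanning a list every time. The sketch below assumes the simplest case, exact-match lookups.

```python
def common_items(xs, ys):
    """Report the elements of ys that also appear in xs."""
    seen = set(xs)                       # build the hash set once
    return [y for y in ys if y in seen]  # each 'in' is an average O(1) lookup

print(common_items([4, 8, 15, 16, 23, 42], [1, 15, 3, 42]))  # [15, 42]
```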
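
Step 8 (randomization): QuickSelect finds the k-th smallest element. With a pivot chosen uniformly at random it runs in expected linear time and no particular input is bad for it, whereas a simple fixed pivot rule (say, always the first element) can be driven to quadratic time by an adversarial input.

```python
import random

def quickselect(nums, k):
    """Return the k-th smallest element of nums (k counts from 1)."""
    pivot = random.choice(nums)  # the random choice
    smaller = [x for x in nums if x < pivot]
    equal = [x for x in nums if x == pivot]
    larger = [x for x in nums if x > pivot]
    if k <= len(smaller):
        return quickselect(smaller, k)
    if k <= len(smaller) + len(equal):
        return pivot
    return quickselect(larger, k - len(smaller) - len(equal))

print(quickselect([7, 2, 9, 4, 6], 3))  # 6, the median
```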
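
Step 10 (fast heuristics): the knapsack problem is NP-hard, but a greedy heuristic, taking items in decreasing value-per-weight order while they still fit, is fast and often gives a good (though not guaranteed optimal) answer. The items and capacity below are made up for illustration.

```python
def greedy_knapsack(items, capacity):
    """Heuristic: pack items by decreasing value-per-weight ratio.
    Fast and often good, but not guaranteed to be optimal."""
    total_value, load = 0, 0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if load + weight <= capacity:
            total_value += value
            load += weight
    return total_value

# (value, weight) pairs and a capacity of 10
print(greedy_knapsack([(60, 5), (50, 4), (40, 6), (10, 1)], 10))  # 120
```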

This sequence of steps can act as a general guide when you approach a new computational problem. However, every problem is unique, and as you gain experience, you will learn to adapt and come up with your own strategies for choosing the right algorithm.