The Greedy Method

The Greedy Method is the most straightforward design technique. At each step we choose the most beneficial option without looking into the future; the choice depends only on the current gain.

Applicability

It can be applied to a wide variety of problems. Most of these problems have n inputs and require us to obtain a subset that satisfies some constraints. Any subset that satisfies these constraints is called a feasible solution.

Find a Feasible Solution

We are required to find a feasible solution that either maximizes or minimizes a given objective function. A feasible solution that does this is called an optimal solution. There is usually an obvious way to determine a feasible solution, but not necessarily an optimal solution.

Selection Procedure

The greedy method suggests that one can devise an algorithm which works in stages, considering one input at a time. At each stage, a decision is made regarding whether or not a particular input is in an optimal solution. This is done by considering the inputs in an order determined by some selection procedure.

If the inclusion of the next input into the partially constructed optimal solution will result in an infeasible solution, then this input is not added to the partial solution. The selection procedure itself is based on some optimization measure.

Objective Function

This measure may not be the objective function. In fact, several different optimization measures may be plausible for a given problem. Most of these, however, will result in algorithms that generate suboptimal solutions.

TIP

The Greedy Method is usually a good approach when the local gain can be collected at every step without one choice blocking another.

Comparison with Dynamic Programming

As in the case of dynamic programming algorithms, greedy algorithms are usually designed to solve optimization problems in which a quantity is to be minimized or maximized.

However, unlike dynamic programming algorithms, greedy algorithms typically consist of an iterative procedure that tries to find a locally optimal solution. In some instances, these locally optimal solutions translate to globally optimal solutions.

In others, they fail to give optimal solutions. A greedy algorithm makes what it hopes is a correct guess on the basis of little calculation, without worrying about the future. Thus, it builds a solution step by step. Each step increases the size of the partial solution and is based on local optimization.

The choice made is that which produces the largest immediate gain while maintaining feasibility. Since each step consists of little work based on a small amount of information, the resulting algorithms are typically efficient.

Design of Greedy Algorithm

The hard part in the design of a greedy algorithm is proving that the algorithm does indeed solve the problem it is designed for. This is to be contrasted with recursive algorithms that usually have very simple inductive proofs.

Examples of Greedy Algorithm

Some of the most prominent problems for which the greedy strategy works, i.e., gives an optimal solution, are:

  • Dijkstra’s algorithm for the single-source shortest path problem
  • Minimum-cost spanning trees (Prim’s and Kruskal’s algorithms)
  • Huffman codes
  • The fractional Knapsack problem
  • The coin change problem (for canonical coin systems such as US currency)

BOTTOM LINE

Greedy algorithms build up a solution piece by piece, always choosing the next piece that offers the most obvious and immediate benefit. Although such an approach can be disastrous for some computational tasks, there are many for which it is optimal, for example, finding minimum spanning trees.

Optimization Problems

Greedy algorithms are generally used to solve optimization problems, that is, to find a solution that minimizes or maximizes some value (cost, profit, count, etc.).

In a greedy algorithm, the solution is constructed through a sequence of steps. At each step, a locally optimal choice is made: based on the data already processed, we choose the next best item to process.

Greedy algorithms do not always give an optimal solution. For some problems they do; for most they do not, but they can still be useful for fast approximations.

Characteristics of Greedy Strategy

Greedy is a strategy that works well on optimization problems with the following characteristics:

  1. Greedy choice: A global optimum can be arrived at by selecting a local optimum.
  2. Optimal substructure: An optimal solution to the problem is made from optimal solutions of subproblems.

Greedy Choice Property

The globally optimal solution can be obtained by making a locally optimal (greedy) choice at each step. The choice may depend on earlier choices but not on future ones. The algorithm iteratively makes one greedy choice after another, reducing the given problem to a smaller one.

Optimal Substructure

A problem exhibits optimal substructure if an optimal solution to the problem contains optimal solutions to its subproblems. This means we can solve the subproblems and build up their solutions to solve larger problems. For example, if an optimal way to make change for 36 cents uses a quarter, then the remaining coins must make change for 11 cents optimally.

Advantages and Disadvantages

The main advantage of the Greedy method is that it is straightforward, easy to understand, and easy to code. Once we make a decision, we do not have to spend time re-examining already computed values. Its main disadvantage is that for many problems there is no correct greedy algorithm; in many cases there is no guarantee that making locally optimal improvements yields the globally optimal solution.

Greedy Method Program Template

We can describe the greedy method abstractly, yet more precisely than above, by considering the following program template:

# The parameter input array contains the n inputs
def greedy(input, n)
  # Initialize the solution to empty
  solution = {}
  for i in 1..n do
    # Consider inputs in the order given by the selection procedure
    x = select(input)
    if feasible(solution, x)
      solution = union(solution, x)
    end
  end
  return solution
end

The function select() selects an input from the input array, removes it, and assigns its value to x. The function feasible() returns true if x can be included in the solution vector. The function union() combines x with the solution and updates the objective function. The procedure greedy() describes the essential shape of a greedy algorithm once a particular problem is chosen and the procedures select(), feasible(), and union() are properly implemented.

TIP

Problems such as knapsack and job sequencing can be solved by using the greedy method program template.
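
To make this concrete, here is a minimal Python sketch of the fractional knapsack problem written in the shape of the template: sorting by profit-to-weight ratio plays the role of select(), the remaining-capacity check plays the role of feasible(), and adding a (possibly fractional) item plays the role of union(). The names fractional_knapsack, items, and capacity are chosen for this illustration only.

# A minimal sketch of the fractional knapsack solved greedily, following the
# select / feasible / union structure of the template above.
def fractional_knapsack(items, capacity):
    """items: list of (profit, weight) pairs; capacity: knapsack capacity."""
    # Selection procedure: consider items in decreasing profit/weight ratio.
    order = sorted(items, key=lambda pw: pw[0] / pw[1], reverse=True)

    solution = []        # (fraction, profit, weight) triples taken
    total_profit = 0.0
    remaining = capacity

    for profit, weight in order:
        if remaining == 0:
            break                         # knapsack full: nothing more is feasible
        # feasible(): any positive fraction fits as long as capacity remains
        fraction = min(1.0, remaining / weight)
        # union(): add the (possibly fractional) item and update the objective
        solution.append((fraction, profit, weight))
        total_profit += fraction * profit
        remaining -= fraction * weight

    return solution, total_profit


if __name__ == "__main__":
    items = [(60, 10), (100, 20), (120, 30)]   # (profit, weight)
    print(fractional_knapsack(items, 50))      # expected total profit: 240.0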

Coin Change using US currency


Many algorithms are iterative procedures that choose among a number of alternatives at each iteration. For example, a cashier can view the Change Problem as a series of decisions: which coin (among d denominations) to return first, which to return second, and so on. Some of these sequences of choices lead to optimal solutions and others do not.

Greedy algorithms choose the “most attractive” alternative at each iteration, for example, the largest denomination possible. In the case of the US denominations, the order is quarters, dimes, nickels and finally pennies to make change.

This greedy strategy can produce incorrect results when certain new denominations are included.
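
For example, suppose a hypothetical 21-cent coin were added to the US denominations. A quick check (plain Python; the helper greedy_change is invented for this illustration) shows that for 63 cents the greedy strategy still returns six coins, although three 21-cent coins would suffice:

# Illustration: a hypothetical 21-cent coin breaks the greedy strategy.
def greedy_change(n, denominations):
    coins = []
    for d in sorted(denominations, reverse=True):
        while n >= d:            # take as many of the largest coin as possible
            coins.append(d)
            n -= d
    return coins if n == 0 else None

print(greedy_change(63, [1, 5, 10, 25]))      # [25, 25, 10, 1, 1, 1] -- 6 coins
print(greedy_change(63, [1, 5, 10, 21, 25]))  # still 6 coins, yet 21 + 21 + 21 uses only 3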

Input: n, a positive integer (an amount in cents). Output: the minimum number of quarters, dimes, nickels, and pennies needed to make change for n.

Assumption: We assume that we have an infinite supply of coins of each denomination: {1, 5, 10, 25}

Consider the greedy strategy: To make change for n find a coin of maximum possible value ≤ n, include it in your solution, continue recursively to solve the subproblem of making change for n minus the value of the coin selected.

We repeatedly choose a coin ≤ the current amount, resulting in a new amount. In the greedy algorithm, we always choose the largest coin value possible without exceeding the total amount.

def change(n)
  # coin denominations (constant)
  C = {1, 5, 10, 25}
  # list that will hold the coins in the solution (duplicates allowed)
  S = []
  value = n

  while value != 0
    x = largest item in C such that x <= value
    if no such item
      return "No Solution"
    end
    S = S + x
    value = value - x
  end
  return S
end

We can apply the Greedy Program Template to the outline of the solution.

def change(n)
  # coin denominations (constant)
  coins = {1, 5, 10, 25}
  # Initialize the solution to empty
  solution = []
  value = n

  while value != 0
    x = select(coins, value)
    if !feasible(x, value)
      return "No Solution"
    end
    solution = solution + x
    value = value - x
  end
  return solution
end

def select(coins, value)
  # return the largest item in coins such that item <= value
  # (or nil if no such item exists)
end

def feasible(x, value)
  # return true if select found a coin, i.e. x is not nil
end

The given code appears to be an implementation of a greedy algorithm to find the minimum number of coins needed to make change, given a set of coin denominations.
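
For reference, here is one way the outline above might be translated into runnable Python. The function name make_change, the default denominations tuple, and the use of None to signal "No Solution" are choices made for this sketch rather than part of the original pseudocode.

def make_change(n, denominations=(1, 5, 10, 25)):
    """Greedy change-making: repeatedly take the largest coin that still fits."""
    solution = []            # coins chosen so far (duplicates allowed)
    value = n
    while value != 0:
        # select(): the largest denomination not exceeding the remaining value
        candidates = [c for c in denominations if c <= value]
        if not candidates:               # feasible() fails: no coin fits
            return None                  # "No Solution"
        x = max(candidates)
        solution.append(x)               # union(): add the coin, shrink the subproblem
        value -= x
    return solution

print(make_change(36))   # [25, 10, 1]
print(make_change(48))   # [25, 10, 10, 1, 1, 1]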

Language Agnostic Coding Drills

Here’s the list of units of learning in increasing level of difficulty:

  1. Understanding Variables: Learn about variables and their assignment.

  2. Sets: Learn about sets, how to declare them and how to add elements to a set.

  3. Conditionals: Learn about if-else conditional statements.

  4. Loops: Learn about while loops, their syntax and usage.

  5. Finding the Largest Element in a Set No Greater than a Given Value: This unit involves searching through a set and comparing elements to a given value.

  6. Greedy Algorithm Concept: Understanding the concept of greedy algorithms, how they work and where they can be applied.

Solution for Coding Drills in Python

Here’s how you might implement these concepts in Python:

  1. Understanding Variables

    x = 0
    S = None
    value = 36
    
  2. Sets

    C = {1, 5, 10, 25}
    S = set()
    
  3. Conditionals

    if x < value:
        # execute some code
    else:
        # execute some other code
    
  4. Loops

    while condition:
        # execute some code
    
  5. Finding the Largest Element in a Set No Greater than a Given Value

    x = max((item for item in C if item <= value), default=None)
    
  6. Greedy Algorithm Concept

    S = []   # coins chosen so far (duplicates allowed)
    while value != 0:
        x = max((item for item in C if item <= value), default=None)
        if x is None:
            return "No Solution"
        S.append(x)
        value = value - x
    return S
    

Conclusion

Greedy algorithms solve problems by making a sequence of myopic and irrevocable decisions. For many problems, they are easy to devise and often blazingly fast. Most greedy algorithms are not guaranteed to be correct. Examples include scheduling problems, optimal compression, and minimum spanning trees of graphs.


The Greedy Method is a problem-solving approach that makes the most beneficial choice at each stage, in the hope that these local choices will lead to a global optimum. It chooses the best option at each point without worrying about the effect the choice may have in the future, and it does not reconsider choices made previously, so its decisions are irrevocable. The technique shows a certain level of greed, as the name suggests, since the locally optimal decision is taken at every step.

Implementation of the Greedy Method

The greedy algorithm follows a simple approach built around four functions that make the process easier (a generic sketch of how they fit together follows the list):

  1. A selection function that chooses the best candidate to be added to the solution.
  2. A feasibility function that is used to determine if a candidate can be used to contribute to a solution.
  3. An objective function, which does not form part of the solution itself but assigns a value to a solution or a partial solution.
  4. A solution function that indicates when we have discovered a complete solution.
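
The sketch below shows, in Python, one way these four pieces might fit together; every name in it (select, is_feasible, is_complete, objective) is illustrative rather than a fixed interface.

# A generic greedy skeleton showing where the four functions plug in.
def greedy(candidates, select, is_feasible, is_complete, objective):
    solution = []
    remaining = list(candidates)
    while remaining and not is_complete(solution):
        x = select(remaining)                 # 1. selection function: best candidate
        remaining.remove(x)
        if is_feasible(solution + [x]):       # 2. feasibility function: accept or reject
            solution.append(x)
    if not is_complete(solution):             # 4. solution function: do we have a full answer?
        return None
    return solution, objective(solution)      # 3. objective function: score the result

# Example use: coin change for 36 cents with US denominations.
amount = 36
coins = [25, 10, 5, 1] * amount               # plenty of copies of each coin
print(greedy(
    coins,
    select=lambda cs: max(cs),
    is_feasible=lambda sol: sum(sol) <= amount,
    is_complete=lambda sol: sum(sol) == amount,
    objective=lambda sol: len(sol),
))                                            # ([25, 10, 1], 3)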

Coin Change Problem

One common demonstration of a greedy algorithm is the coin change problem, which is the problem of representing a certain amount of money with the least possible number of coins. This problem can be solved using the Greedy Method, and the process looks something like this:

  1. Start by choosing the largest denomination that is less than or equal to the total amount to be represented.
  2. Subtract this coin’s value from the total amount.
  3. Repeat the process until the total amount is zero.

For instance, in the US, the denominations available are 1 cent (penny), 5 cents (nickel), 10 cents (dime), and 25 cents (quarter). If we want to represent 36 cents using the least possible number of coins, we would start with the largest denomination less than or equal to 36, which is a quarter (25 cents). Subtracting this from the total, we have 11 cents remaining. The largest coin less than or equal to 11 cents is a dime (10 cents), so we subtract this, leaving 1 cent. Thus, we can represent 36 cents using just three coins: a quarter, a dime, and a penny.
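
The short Python trace below reproduces these steps for 36 cents; the variable names are chosen for this illustration only.

# Tracing the greedy steps for making change for 36 cents.
amount = 36
denominations = [25, 10, 5, 1]
used = []
while amount > 0:
    coin = max(c for c in denominations if c <= amount)   # largest coin that fits
    used.append(coin)
    amount -= coin
    print(f"took {coin}, {amount} cents remaining")
print(used)   # [25, 10, 1]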

Language Agnostic Coding Drills

The Coin Change problem is also a classic example of a problem solved with dynamic programming, which guarantees an optimal answer for any set of denominations. Here are the steps or ‘units of learning’ needed to understand and solve this problem:

  1. Understanding of Variables and Data Types: Grasping the concept of different types of variables such as integers, arrays (or lists), and how to use them.

  2. Control Structures: Mastering the use of conditional statements (if, else) and loops (for, while).

  3. Understanding Arrays: Specifically, the ability to initialize, index, and manipulate arrays. In most languages, arrays (or similar data structures) are used to store interim solutions in dynamic programming problems.

  4. Understanding Recursion: Comprehending how recursion works is essential to understanding dynamic programming.

  5. Understanding Dynamic Programming: Learning how to solve problems by breaking them down into smaller subproblems and building up to the solution. Understanding the concept of “overlapping subproblems” and “optimal substructure” which are the backbone of dynamic programming.

  6. Mastering Problem-Solving Strategies: Learning the skills of problem analysis, how to devise a plan to approach a problem, and how to validate and implement the solution.

Here’s a suggested order for these drills, from simplest to most complex:

  1. Understanding of Variables and Data Types
  2. Control Structures
  3. Understanding Arrays
  4. Understanding Recursion
  5. Understanding Dynamic Programming
  6. Mastering Problem-Solving Strategies

Solution for Coding Drills in Python

Here are the Python coding exercises that correspond to the learning units:

  1. Understanding of Variables and Data Types:

    # Declare an integer variable
    num_coins = 5
    print(num_coins)
    
    # Declare an array (list in Python) variable
    coins = [1, 2, 5, 10, 25]
    print(coins)
    
  2. Control Structures:

    # Using an if-else control structure
    num = 15
    if num > 10:
        print("Number is greater than 10")
    else:
        print("Number is not greater than 10")
    
    # Using a for loop control structure
    for coin in coins:
        print(coin)
    
  3. Understanding Arrays:

    # Initializing an array with a fixed length
    dp = [0] * (num_coins + 1)
    print(dp)
    
    # Accessing and manipulating elements in an array
    dp[0] = 1
    print(dp)
    
  4. Understanding Recursion:

    # A simple recursive function to calculate the factorial of a number
    def factorial(n):
        if n == 0 or n == 1:
            return 1
        else:
            return n * factorial(n - 1)
    
    print(factorial(5))  # Output: 120
    
  5. Understanding Dynamic Programming:

    The coin change problem is itself a classic example of dynamic programming. Below is a simple implementation using a bottom-up approach.

    def coinChange(coins, amount):
        # Initialize dp array
        dp = [float('inf')] * (amount + 1)
        dp[0] = 0
    
        # Fill dp array
        for coin in coins:
            for x in range(coin, amount + 1):
                dp[x] = min(dp[x], dp[x - coin] + 1)
    
        return dp[amount] if dp[amount] != float('inf') else -1
    
    print(coinChange([1, 2, 5], 11))  # Output: 3
    
  6. Mastering Problem-Solving Strategies:

    This is more of a cognitive skill than something that can be shown through a single coding example. It involves systematically breaking a problem into subproblems, thinking through potential solutions, and testing them. A good way to master problem-solving strategies is consistent practice and reviewing solutions to a variety of problems.

Advantages and Disadvantages of Greedy Method

The main advantage of the greedy method is its simplicity and efficiency. The method is straightforward to understand and easy to code, making it an attractive option for solving problems.

However, one main disadvantage is that the greedy method does not guarantee an optimal solution for all problems. It may provide an optimal solution for some problems, but not for all. Therefore, it’s important to analyze the problem at hand and determine if the greedy method is the right approach.

Another potential disadvantage is that the greedy method can lead to a locally optimal solution that is not globally optimal. This happens when the method makes a choice that is the best in the short term, without considering the implications of this choice on future decisions.

Conclusion

While the Greedy Method has its limitations, it can be a powerful problem-solving approach when used appropriately. It is particularly well-suited to problems where local optima align with the global optimum. The coin change problem with US denominations is one classic example where the Greedy Method offers an optimal solution. However, as with any problem-solving strategy, it is important to analyze the problem at hand and determine whether the Greedy Method is the right approach.