In "Practical Ruby Projects," the author includes a couple of chapters involving coin simulations. These simulators are used to explore the possibilities of replacing a certain coin or adding a new coin.
One interesting subproblem of these simulations is that of making change. For example, if we need to give 39 cents change in the United States (where there are 25, 10, 5, and 1 cent pieces), we can give:
>> make_change(39)
=> [25, 10, 1, 1, 1, 1]
What if the coins were 10, 7, and 1 cent pieces though and we wanted to make 14 cents change? We would probably want to do:
>> make_change(14, [10, 7, 1])
=> [7, 7]
This week's Ruby Quiz is to complete a change making function with this skeleton:
def make_change(amount, coins = [25, 10, 5, 1])

end
Your function should always return the optimal change with optimal being the least amount of coins involved. You can assume you have an infinite number of coins to work with.
Quiz Summary
One submitter commented that our solutions were pretty complex. The reason for that is that we all sat around dreaming up pathological edge cases that took a long time to solve without some complex code. The good news is that such cases don't generally come up in the day to day change making of most countries.
Let's begin with a peek at a non-complicated solution from Ilan Berci:
def make_change(amount, coins = [25, 10, 5, 1])
  coins.sort.
        reverse.
        map { |coin| f = amount / coin; amount %= coin; Array.new(f) { coin } }.
        flatten
end
This approach uses a greedy algorithm. We call it greedy because it always tries to move as close to the end result as possible with each step. In this particular case, that is accomplished by working from the biggest coins to the smallest and always grabbing as many of each type of coin as possible without going over our limit.
This technique is trivial and it works for all change made in the U.S., plus many other countries. It doesn't work for the fictional second test case given in the quiz though, returning [10, 1, 1, 1, 1].
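To see it concretely, calling the greedy version with the quiz's second coin set produces the five-coin answer instead of [7, 7]:

>> make_change(14, [10, 7, 1])
=> [10, 1, 1, 1, 1]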
That's where the complexity began to creep in.
If we're going to get right answers for any combination of coins, we will have to search all possible combinations. Doing that for certain sets of coins can take a long time and we would really rather it be fast. To get speed we will have to use something smarter than a brute force algorithm that checks all the combinations.
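To see why, here's a minimal brute-force sketch (a hypothetical illustration, not one of the submissions; it assumes a one-cent coin is present so exact change can always be made). It tries every coin at every step and keeps the shortest result:

def brute_force_change(amount, coins)
  return [] if amount.zero?
  # try each usable coin, solve the rest recursively, keep the shortest answer
  coins.select { |coin| coin <= amount }.
        map    { |coin| [coin] + brute_force_change(amount - coin, coins) }.
        min_by { |change| change.size }
end

This always finds the optimal answer, but the number of recursive calls grows exponentially with the amount because the same sub-amounts get recomputed over and over. The dynamic programming solutions below avoid exactly that repetition.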
Before I show the smarter approaches though, it's important to note that you likely wouldn't need to be this clever to make change in any country. The reason is simple: we don't usually give change for large amounts since we tend to just use non-coin funds for that. That keeps the search space small enough that advanced techniques aren't too important. Of course, it's still fun to explore the possibilities.
This problem actually turns out to be famous in computer science: it's known as the change-making problem, a special case of the knapsack problem. Once you know that, you can apply the techniques often used on such problems, the most popular of which is a dynamic programming algorithm. ("Dynamic programming" has a different meaning here than the one we often use in Ruby circles when talking about code writing code.)
The trick to a dynamic programming algorithm is to remember the smaller pieces of a bigger problem once we've figured them out and reuse them as much as possible without having to repeat the work. You can do that while working your way up to a solution or down from a solution, but the top-down approach, often called memoization, is pretty popular.
Here's some code from Carl Porth that implements simple memoization using a Hash:
def make_change(amount, coins = [25, 10, 5, 1])
  coins.sort! { |a, b| b <=> a }

  # memoize solutions
  optimal_change = Hash.new do |hash, key|
    hash[key] = if key < coins.min
      []
    elsif coins.include?(key)
      [key]
    else
      coins.
        # prune unhelpful coins
        reject { |coin| coin > key }.
        # prune coins that are factors of larger coins
        inject([]) { |mem, var| mem.any? { |c| c % var == 0 } ? mem : mem + [var] }.
        # recurse
        map { |coin| [coin] + hash[key - coin] }.
        # prune unhelpful solutions
        reject { |change| change.sum != key }.
        # pick the smallest, empty if none
        min { |a, b| a.size <=> b.size } || []
    end
  end

  optimal_change[amount]
end
First, have a look at the structure of this code. It doesn't really seem to do much from the outside. It sort!()s the coins from biggest to smallest, defines a Hash, and indexes into the Hash to magically come up with the answer. The magic is in the definition of the Hash.
To understand how this works, you just have to think of the main problem in smaller slices. Say we want to make 39 cents of change with U.S. coins. Here's how the problem breaks down:
[25] + make_change(39 - 25)
[25, 10] + make_change(39 - 25 - 10)
[25, 10, 1] + make_change(39 - 25 - 10 - 1)
# ...
Carl's magic Hash does exactly this process. Have a look at this line of code plucked from the middle of the Hash definition:
# recurse
map { |coin| [coin] + hash[key - coin] }.
# ...
That's the exact pattern I showed above.
The memoization part is basically automatic here, thanks to how Ruby's Hash works. When you provide a default block to the constructor like this, it will be called the first time some key is accessed that the Hash doesn't know. That block can assign a value for the key though, as Carl does in this code, and then all future attempts to access the same key are a simple Hash lookup (bypassing all of the work in the block). That makes sure that once we have found some coin combination, we never have to find it again.
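To see the mechanism in isolation, here's a tiny example of the same Hash trick applied to Fibonacci numbers (just an illustration, not part of any submission):

fib = Hash.new { |hash, n| hash[n] = n < 2 ? n : hash[n - 1] + hash[n - 2] }

fib[10]  # => 55, computed through the default block the first time
fib[10]  # => 55, now a plain Hash lookup; nothing is recomputed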
What are all the rest of those lines in Carl's solution though? Pruning. That's the other trick for getting speed when searching a big space: throw out everything you possibly can to keep the work as small as possible. Carl throws out coins that are bigger than the current amount remaining, coins that are factors of larger coins we could use, and change combinations that don't add up to what we need. Less work to do means the code runs faster.
Paolo Bonzini submitted another dynamic programming approach that was one of the faster solutions we saw. It's a bottom-up approach in contrast to the memoization we just saw. It prunes the space, orders the combinations checked to find smaller counts faster, and avoids building a bunch of coin Arrays as it works (Ruby's GC will slow you down when it kicks in). Eric I. submitted a small enhancement to the code that made it even faster. The downside of all this is that it's a little harder still to follow.
Let's see how that modified version works:
def make_change(a, list = [25, 10, 5, 1])
  return nil if a < 0
  return nil if a != a.floor
  # parents[total] holds the previous total from which `total` was first reached
  parents = Array.new(a + 1)
  parents[0] = 0
  worklist = [[0, 0]]

  # breadth-first search over reachable totals, fewest coins first
  while parents[a].nil? && !worklist.empty? do
    base, starting_index = worklist.shift
    starting_index.upto(list.size - 1) do |index|
      coin = list[index]
      tot = base + coin
      if tot <= a && parents[tot].nil?
        parents[tot] = base
        worklist << [tot, index]
      end
    end
  end
  return nil if parents[a].nil?

  # walk the parents chain back from the target to recover the coins used
  result = []
  while a > 0 do
    parent = parents[a]
    result << a - parent
    a = parent
  end
  result.sort!.reverse!
end
The main work here is done in the second chunk of code, and to break that down you need to know two things. First, the parents Array records, at the index of a given amount, the previous total we were at before the last coin was added. That sounds a lot more complicated than it is. For example, in the make_change(11, [10, 1]) call, parents ends up looking like this:

[0, 0, nil, nil, nil, nil, nil, nil, nil, nil, 0, 10]
It may not look like it, but the answer is hidden in there! To find it, you start at the desired total (as an index) parents[11]. That's 10, which is the previous total. So, to find the last coin, we can just subtract them which gives us the 1. Then we move to that previous amount at parents[10]. That's a zero and subtracting again gives us 10, the other coin needed. This process I just described is exactly what the third chunk of code does after a solution has been found in the second chunk.
The second thing you need to understand is how it searches for solutions. Essentially, the worklist Array holds the progress so far. Each bit of work is shift()ed off the front, new totals are built from it, and each new total is appended to the end of the queue as more work to do. This is the classic pattern for unrolling recursion into an iterative solution.
Now each piece of work in worklist is the total we are currently on, plus an index for the last coin added. The total we are currently on represents the work we need to do. We can add coins to that to find new totals. The index for the last coin added just keeps us from repeating work by checking larger coins than we've already covered. (This solution assumes the coins are in order from biggest to smallest.) That's one example of the pruning done here. The other is the if conditional that only adds work below the desired total.
It's important to think about how using a queue works here. First it holds a single zero-coin total at the front. In processing that, the one-coin totals are added onto the back. When we reach those, the code begins adding two-coin totals, and so on. This means we are performing a breadth-first search from the smallest coin counts to the largest, which ensures that the first solution we find uses the fewest coins and saves us any needless work.
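Here's a hand-worked trace of make_change(11, [10, 1]) showing the queue in action (worked out by hand, not program output):

worklist: [[0, 0]]                    # the zero-coin total
shift [0, 0]  -> parents[10] = 0, parents[1] = 0
worklist: [[10, 0], [1, 1]]           # the one-coin totals
shift [10, 0] -> parents[11] = 10     # 10 + 1 hits the target, so the search stops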
My thanks to all who worked so hard to make sure we can quickly calculate absurd sums of change. I'm sure we are collectively eliminating the need for paper money thanks to our fun and games.
Tomorrow we will tackle a current hot topic in the Ruby community...