
Multiplication Algorithm and Recurrences

Multiplication of two numbers is a well-defined computational problem. It is such a basic operation that we rarely give a thought to the algorithms behind it. In this post we shall look at various implementations of the multiplication algorithm. We will only be looking at the basic algorithms; for the complete set, refer here. This will also serve as an introduction to analyzing recurrence relations and to finding the upper bound on the running time of a recursive algorithm.

Let us try to multiply two numbers X and Y. Let us use the basic algorithm that we all studied in our younger days.

567 X 236
---------------
  3402      (567 X 6)
 17010      (567 X 30)
113400      (567 X 200)
---------------
133812
---------------

Here each digit of Y is multiplied against each digit of X, and the partial products are shifted accordingly before being added up. Let us try to write down the above algorithm.
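A rough sketch in Python, assuming a helper length() that returns the number of digits of a number (an illustration rather than a definitive implementation):

    def length(num):
        # number of digits in num; for example, length(567) == 3
        return len(str(abs(num)))

    def multiply(x, y):
        # grade-school multiplication: multiply every digit of y against x,
        # shift the partial product accordingly and add everything up
        result = 0
        for j in range(length(y)):
            digit_y = (y // 10 ** j) % 10              # j-th digit of y, from the right
            partial = 0
            for i in range(length(x)):
                digit_x = (x // 10 ** i) % 10          # i-th digit of x, from the right
                partial += digit_x * digit_y * 10 ** i
            result += partial * 10 ** j                # shift by j positions, then accumulate
        return result

    print(multiply(567, 236))                          # 133812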

In the above code block, we use a helper function, length(), that returns the number of digits of a number. For example, for the number 567, it returns 3. The running time of the above algorithm is easy to guess: since each digit of one number is multiplied against each digit of the other, we need two nested loops to do the work. Hence the upper bound on the above algorithm is O(n^2) (n-squared), where n is the number of digits.

Let us try to do better than the n-squared algorithm we have seen above by using a recursive algorithm to multiply the two numbers. The basic idea of the algorithm is to break down the input into smaller sub-problems, solve them, and then combine the solutions.

The working of the algorithm is as follows:
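A minimal sketch of the idea in Python, reusing the length() helper defined above (the details here are illustrative):

    def multiply_recursive(x, y):
        # base case: single-digit numbers are multiplied directly
        if x < 10 or y < 10:
            return x * y

        # split both numbers around the middle: x = a * 10^m + b, y = c * 10^m + d
        n = max(length(x), length(y))
        m = n // 2
        a, b = divmod(x, 10 ** m)
        c, d = divmod(y, 10 ** m)

        # four recursive multiplications on roughly half-sized numbers
        ac = multiply_recursive(a, c)
        ad = multiply_recursive(a, d)
        bc = multiply_recursive(b, c)
        bd = multiply_recursive(b, d)

        # combine: x * y = ac * 10^(2m) + (ad + bc) * 10^m + bd
        return ac * 10 ** (2 * m) + (ad + bc) * 10 ** m + bd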

In the algorithm above, we divide both numbers into halves of about n/2 digits each and then make four recursive calls to multiply them (each of which may make further recursive calls), combining the results to get the final product. The base case is a single-digit number, for which we simply return the product directly.

Now let us try to come up with the upper bound on the running time of the recursive algorithm above. The recurrence for the algorithm is of the form,

T(n) = 4T(n/2) + f(n)

Here, there are four sub-problems and the size of each sub-problem is half of the original problem. And f(n) represents the work done outside the recursive calls, namely the shifts and additions needed to combine the partial results, which is O(n) for n-digit numbers. (These are loose bounds, and we assume that multiplying two single-digit numbers is an elementary operation.)

The solution of the recurrence can easily be found using the master theorem. But let us analyze it using the recursion tree method to understand it intuitively.

The recursion tree for the above recurrence can be shown as below,

[Figure: recursion tree for T(n) = 4T(n/2) + cn, where each node branches into four sub-problems of half the size]

In the above recursion tree, every node except the leaves makes four further recursive calls. To come up with the upper bound on the running time, we need to calculate the cost at each level and add these costs up over the height of the recursion tree. To aid us in our calculation, let us assume that n is a power of 2.

The number of sub-problems at level j is equal to (4 ^ j). The size of each sub-problem at level j is equal to n/(2 ^ j). Hence the cost accrued at level j of the recursion tree is (4 ^ j) * c * (n/(2 ^ j)) for some constant c, which simplifies to (2 ^ j) * cn, the cost of one level in the tree. The height of the tree is log n to the base 2 (up to a constant that does not affect the analysis), since the sub-problem size halves at every level. So let us sum the cost of each level over the entire tree. Hence,

T(n) ≤ Σ (j = 0 to log2 n) [ (4 ^ j) * c * (n / (2 ^ j)) ]

Above, we are trying to sum the cost of each level over the entire recursion tree. This gives us the upper bound on the runtime of the algorithm. So continuing,

= cn * Σ (j = 0 to log2 n) [ 2 ^ j ]

This is a geometric series, which we can bound as follows. (The base of all logarithms here is 2.)

= cn * (2 ^ (log2 n + 1) - 1) ≤ 2cn * (2 ^ log2 n) = 2cn * (n ^ log2 2)

Since log 2 to the base 2 is 1, we get

= 2cn * n = O(n ^ 2)

So we have used a more involved algorithm to multiply two numbers without gaining anything in the running time: the runtime of the recursive algorithm is the same as that of the "naive" algorithm we saw earlier. Let us see if we can improve upon it. Note that the recursive algorithm makes four recursive multiplications at each step. Can we reduce the number of recursive calls, and thereby the running time of the algorithm? There is a clever algorithm, called the Karatsuba algorithm, which does exactly this. It makes use of a trick due to Gauss to get by with three recursive calls instead of four. The working of the algorithm is as follows:
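In the notation of the earlier recursive sketch (x split into a high part a and a low part b around 10^m, and y split into c and d), the essential observation is:

    x * y   = ac * 10^(2m) + (ad + bc) * 10^m + bd
    ad + bc = (a + b) * (c + d) - ac - bd

So the middle term can be recovered from one extra product, (a + b) * (c + d), together with ac and bd, which are needed anyway. Three recursive multiplications therefore suffice instead of four.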

The recurrence of the above algorithm is given as,

T(n) = 3T(n/2) + f(n)

Here, there are three sub-problems and the size of each sub-problem is half of the original problem. And f(n) represents the work done outside the recursive calls, namely the additions, subtractions and shifts needed to combine the partial results, which is O(n).

The recursion tree for the above recurrence can be shown as below,

[Figure: recursion tree for T(n) = 3T(n/2) + cn, where each node branches into three sub-problems of half the size]

The number of sub-problems at level j is equal to (3 ^ j). The size of each sub-problem at level j is equal to n/(2 ^ j). Hence the cost accrued at level j of the recursion tree is (3 ^ j) * c * (n/(2 ^ j)), which simplifies to ((3/2) ^ j) * cn, the cost of one level in the tree. As before, the height of the tree is log n to the base 2, since the sub-problem size halves at every level. So let us sum the cost of each level over the entire tree. Hence,

T(n) ≤ Σ (j = 0 to log2 n) [ (3 ^ j) * c * (n / (2 ^ j)) ]

Above, we are again trying to sum the cost of each level over the entire recursion tree. So continuing,

= cn * Σ (j = 0 to log2 n) [ (3/2) ^ j ]

This is a geometric series, which is dominated by its last term. (The base of the logarithm is 2.)

≤ 3cn * ((3/2) ^ log2 n) = 3cn * (3 ^ log2 n) / (2 ^ log2 n)

The above can be simplified using the logarithmic identity a ^ (log2 b) = b ^ (log2 a), so that 3 ^ (log2 n) = n ^ (log2 3), together with the fact that 2 ^ (log2 n) = n.

= 3cn * (n ^ log2 3) / n = 3c * (n ^ log2 3) = O(n ^ log2 3)

We have actually reduced the running time from n-squared to n ^ (log2 3), which is roughly n ^ 1.585, a slower-growing function than n-squared, as shown in the figure below.

[Figure: growth of n ^ 2 compared with n ^ (log2 3)]

The code to do the multiplication is given below.
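A minimal Python sketch of this approach, using a getOrderOfNumber() helper (an illustration rather than a definitive implementation):

    def getOrderOfNumber(num):
        # number of digits in num; for example, getOrderOfNumber(567) == 3
        return len(str(abs(num)))

    def karatsuba(x, y):
        # base case: single-digit numbers are multiplied directly
        if x < 10 or y < 10:
            return x * y

        # split both numbers around the middle: x = a * 10^m + b, y = c * 10^m + d
        n = max(getOrderOfNumber(x), getOrderOfNumber(y))
        m = n // 2
        a, b = divmod(x, 10 ** m)
        c, d = divmod(y, 10 ** m)

        # only three recursive multiplications (the Gauss trick)
        ac = karatsuba(a, c)
        bd = karatsuba(b, d)
        ad_plus_bc = karatsuba(a + b, c + d) - ac - bd

        # combine: x * y = ac * 10^(2m) + (ad + bc) * 10^m + bd
        return ac * 10 ** (2 * m) + ad_plus_bc * 10 ** m + bd

    print(karatsuba(567, 236))    # 133812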

The function getOrderOfNumber() returns the number of digits in a number. The code makes three recursive calls as we had described earlier and returns the result.

Note: We can confirm the upper bound we calculated with the recursion tree using the (easier) master theorem. The details of the process are given below.

The first step in using the master theorem is to calculate the number of leaves in the recursion. The leaves are given by,

number of leaves = 3 ^ (log2 n) = n ^ (log2 3)

Since the work done at each node outside the recursive calls, f(n) = O(n), grows polynomially slower than the number of leaves n ^ (log2 3), the leaves dominate the cost and the master theorem gives T(n) = O(n ^ (log2 3)), matching the bound from the recursion tree.

See you in the comments.

If you have read this far, you should follow me on twitter.

Also feel free to follow my projects on github.

Fibonacci Numbers & Their Implementations

The Fibonacci sequence is defined by the recurrence,

            F(0) = 0
            F(1) = 1
            F(i) = F(i - 2) + F(i - 1)         [For all i ≥ 2]

The series that results from the above recurrence is

            0, 1, 1, 2, 3, 5, 8, 13, …

The Fibonacci series appears throughout nature. The rate at which rabbits multiply, the branching patterns in trees, and the arrangement of scales on a pinecone are some practical examples.

There are various implementations of the Fibonacci series. Let's look at some of them along with their running times.
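First, a sketch of the straightforward recursive implementation in Python:

    def fib_recursive(n):
        # directly mirrors the recurrence above: F(0) = 0, F(1) = 1, F(i) = F(i - 2) + F(i - 1)
        if n < 2:
            return n
        return fib_recursive(n - 2) + fib_recursive(n - 1)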

The above is the recursive implementation of the Fibonacci sequence. This is by far the worst of the implementations. The running time of the above algorithm is O(2^n), where n is the input to the recurrence F from the definition above. This algorithm might suffice when n is small, but when n is large, the time taken by the program grows exponentially.
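An iterative version can be sketched as follows:

    def fib_iterative(n):
        # walk up the sequence once, keeping only the last two values
        a, b = 0, 1              # F(0) and F(1)
        for _ in range(n):
            a, b = b, a + b
        return a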

The above is the iterative implementation of the algorithm. The running time of the algorithm is O(n). This is simple and faster than the previous algorithm. Most of the time, this algorithm is fast enough to compute the Fibonacci sequence. But if you still want to push the boundary, there are a few techniques available.

The Fibonacci sequence is closely related to the golden ratio (ϕ), which is given by the following formula,

            ϕ = (1 + √5) / 2

And the i-th Fibonacci number is given by

            F(i) = ϕ ^ i / √5, rounded to the nearest integer

Using the above formula we can calculate the i-th Fibonacci number directly. The running time of this method is O(1). But it is advisable not to use this method in non-mathematical software, because the floating-point computation can overflow or lose precision and you might not end up with the right result.
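To make the pitfall concrete, here is a sketch of the closed-form computation in Python (the cutoff mentioned in the comment is approximate):

    import math

    def fib_closed_form(i):
        # F(i) = phi^i / sqrt(5), rounded to the nearest integer;
        # only reliable while floating-point precision holds (roughly i <= 70)
        phi = (1 + math.sqrt(5)) / 2
        return round(phi ** i / math.sqrt(5))

    print(fib_closed_form(10))    # 55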

The fastest way, then, under normal circumstances, to find the i-th Fibonacci number is to use a matrix of a special form and square it recursively. It turns out that

            [ 1  1 ] ^ i     [ F(i+1)  F(i)   ]
            [ 1  0 ]      =  [ F(i)    F(i-1) ]

The Fibonacci number is present in the upper right corner and in the lower left corner of the result matrix. For example,

            [ 1  1 ] ^ 6     [ 13  8 ]
            [ 1  0 ]      =  [  8  5 ]

Here F(6) = 8 appears in the upper right and lower left corners.

The initial matrix is of a constant size (2x2), and when you multiply the matrix by itself you get another 2x2 matrix. Only four numbers are involved, which keeps the algorithm simple. The running time for this algorithm is O(lg n), which is very fast, and this is the algorithm that is mostly used to calculate F(i).
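A sketch of this method in Python, using repeated squaring of the 2x2 matrix (the helper names are illustrative):

    def mat_mult(A, B):
        # multiply two 2x2 matrices represented as ((a, b), (c, d))
        return (
            (A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]),
            (A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]),
        )

    def mat_pow(A, i):
        # raise a 2x2 matrix to the i-th power by repeated squaring: O(lg i) multiplications
        result = ((1, 0), (0, 1))            # identity matrix
        while i > 0:
            if i % 2 == 1:
                result = mat_mult(result, A)
            A = mat_mult(A, A)
            i //= 2
        return result

    def fib_matrix(i):
        # F(i) sits in the upper right (and lower left) corner of [[1, 1], [1, 0]]^i
        return mat_pow(((1, 1), (1, 0)), i)[0][1]

    print(fib_matrix(6))    # 8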

If you have read this far, you should follow me on twitter.

Also feel free to follow my projects on github.