Algorithm basics – recognize patterns, avoid nested loops, and divide and conquer often

Algorithms. Now there’s a huge topic. Where do you start? Good question. There is the theory, the usual searching and sorting, the math, the complexity, the scary details to be precise about. So you start studying, memorizing, understanding, and the purpose of it all maybe escapes you or gets lost along the way. No need for it to be so. So, what’s the point? Algorithms teach us to think. Algorithms win us performance, and performance is like currency we can use to trade for other stuff (as Prof. Charles Leiserson explains in his MIT lecture I’ve already blogged about). Even if you don’t work in some algorithm-heavy programming, like for Google or something, you can benefit from learning some basic patterns and antipatterns like: quadratic is bad – avoid nested loops; sometimes it’s better to sort first so you can search faster later; etc.

What on Earth am I talking about? As always, in order to reason we need to measure. How do we measure a computer program’s performance in a way that shows us the speed of the software independently of the computer it is running on? We concentrate on the rate of growth, not the speed itself. We don’t care so much about the time (or maybe memory space) it takes to execute some piece of code for an input of size N, as much as how fast the running time grows as the input size N grows. Something is slow when it doesn’t scale well, and might need to scale. So we choose an instruction to follow, count how many times it gets executed, and create a function that calculates that number depending on the size of the input, such as C1*n^2 + C2*n + C3 for a nice traditional nested loop. Then we ignore the constants, and try to fit that function and algorithm into a family of algorithms. Algorithms in a family share an upper bound, a line they will never cross for a sufficiently large N. That’s Big O notation, simplified. Big O notation gives us a function which is an upper bound on the worst-case behaviour of our algorithm. There are other notations for other bounds as well. So, mostly we are interested in the worst case, but sometimes it is also important to know the lower bound, that is, to know that a certain operation can’t be done with less than a certain amount of work.
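To make that concrete, here is a tiny sketch of my own (the function name is made up, not from any particular source) of “choose an instruction and count how many times it runs”. For a classic nested loop the count grows like n*n, which is why, once we drop the constants, it lands in the quadratic family:

```python
def count_inner_steps(n):
    # The "chosen instruction" is the one inside the inner loop.
    count = 0
    for i in range(n):        # outer loop runs n times
        for j in range(n):    # inner loop runs n times for each i
            count += 1        # count how often the chosen instruction executes
    return count

for n in (10, 100, 1000):
    print(n, count_inner_steps(n))   # 100, 10000, 1000000 -> grows like n^2
```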

As the Wikipedia page explains:

“Suppose an algorithm is being developed to operate on a set of n elements. Its developers are interested in finding a function T(n) that will express how long the algorithm will take to run (in some arbitrary measurement of time) in terms of the number of elements in the input set. The algorithm works by first calling a subroutine to sort the elements in the set and then perform its own operations. The sort has a known time complexity of O(n²), and after the subroutine runs the algorithm must take an additional 55n³ + 2n + 10 time before it terminates. Thus the overall time complexity of the algorithm can be expressed as

T(n) = O(n²) + 55n³ + 2n + 10.

This can perhaps be most easily read by replacing O(n²) with “some function that grows asymptotically slower than n²”. Again, this usage disregards some of the formal meaning of the “=” and “+” symbols, but it does allow one to use the big O notation as a kind of convenient placeholder.”

Let’s demystify the very basics, then. If I get something wrong, please don’t be too harsh with me. I’m only just learning, and putting it in writing for myself, and maybe others, so I finally get to learn it well and can stop having to study it from scratch over and over again :)

O(N²)

If you have to remember just one thing, make it this one – quadratic is bad. That pretty much means avoid nested loops. I’m serious, quadratic (or worse, cubic, etc.) running times grow like crazy – the same thing that takes seconds with an N log N growth rate takes weeks with n^2. That can become important if we do some heavy calculations. But even in less extreme conditions, it’s nice to be careful. As I’ve mentioned before in this post and also in a previous post, performance is like currency: if we save up enough of it we can trade it for features, usability and what not. But we have to save it up first. So, if you can avoid doing something n times for each of m times, do.
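As a small illustration (my own made-up example, not from any library): checking a list for duplicates with two nested loops costs roughly N² comparisons, while sorting first and then comparing neighbours costs about N log N – the “sort first so you can search faster later” idea from the start of the post.

```python
def has_duplicate_nested(items):
    # The pattern to avoid: for each element, scan all the later ones (~N^2 comparisons).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicate_sort_first(items):
    # Sort first (N log N), then one linear pass comparing neighbours.
    ordered = sorted(items)
    for a, b in zip(ordered, ordered[1:]):
        if a == b:
            return True
    return False
```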

As with most things, algorithms are about pattern recognition. We learn to look at a piece of code, at an algorithm, and figure out what type of behaviour we can expect from it.


O(log N) and O(N log N)

So, quadratic is bad, logarithmic is good. Just look at the graph :) For a big enough value, it grows very slowly. Log N and N log N usually occur when we use a divide and conquer approach. A typical example is binary search, which is what we normally use when we open a dictionary: we look at a page and figure out if we need to search the pages before or after it. And again. And again. We don’t just look at every page until we find what we’re looking for. Basically, each step makes the problem smaller by more than just one element. Clever. In order to do that, the collection must already be sorted, but if we search more than once, it’s worth it.
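Here is roughly what that dictionary trick looks like in code – a minimal binary search sketch (the function name and parameters are my own):

```python
def binary_search(sorted_items, target):
    # Returns the index of target in sorted_items, or -1 if it isn't there.
    # Each pass through the loop halves the remaining range: O(log N) steps.
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1    # look at the "pages after"
        else:
            high = mid - 1   # look at the "pages before"
    return -1
```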

That’s a key concept in algorithms: knowing what is worth what. Choosing trade-offs. That also means deciding which instructions cost us more, and which instruction to keep an eye on.

O(N) and O(1)

After the quadratic and the logarithmic growth rates we also have the linear and the constant. A constant running time doesn’t depend on the input at all, and a linear one depends on it linearly. Those don’t worry us at all.
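For instance (another made-up sketch): scanning a list for a name takes linear time in the worst case, while looking the same name up in a dictionary (hash table) built in advance takes constant time on average, no matter how long the list is.

```python
names = ["ada", "grace", "alan", "edsger"]
positions = {name: i for i, name in enumerate(names)}

def find_linear(name):
    # O(N): in the worst case we walk through the whole list.
    for i, candidate in enumerate(names):
        if candidate == name:
            return i
    return -1

def find_constant(name):
    # O(1) on average: a hash table lookup doesn't care how many names there are.
    return positions.get(name, -1)
```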

O(2^N)

That exists, too – exponential growth, which typically shows up in brute-force solutions that try every combination. Clearly not very good performance at all.
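A classic textbook way to bump into it is the naive recursive Fibonacci (sketched below), which also previews the recursion topic:

```python
def fib(n):
    # Naive recursion: every call spawns two more calls,
    # so the total work is bounded by O(2^N).
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# fib(30) is already sluggish in plain Python, and the wait keeps growing
# by a constant factor with every +1 to n.
```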

Analyzing recursion

In the next post on the subject of algorithms, I’ll be looking at recursion. How do we look at a function that calls itself again and again and figure out a function that describes its growth rate? Luckily we have some clever mathematical discoveries to help us with that.

BTW

Two blog posts similar to this one:

http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/ (suggested on Twitter by @flerro and @lucacanducci)

http://nerdgerl.wordpress.com/2009/11/18/an-intro-to-big-oh-notation-with-java/ (check out the numbers in the nice table)

