The Big-Oh, Part 1

30-second version

  • Big-Oh describes how efficient your code is for the data it might operate on
  • It’s written O(n), and the stuff inside the parentheses will change depending on the algorithm
  • It’s pronounced “Order n”
  • We don’t know in advance how much data your algorithm will process, so we always refer to the size of the data as ‘n’
  • We don’t care about things that take constant time, like retrieving an item from an array or printing something to the screen
  • We always assume the worst-case scenario: whatever arrangement of the data will result in the slowest running of the code
  • A loop that iterates an array looking at each item exactly once is “linear time,” or O(n)

Computer science isn’t really about computers, it’s about computing. It just so happens that computers are really good at computing, so we use them as tools for our computing needs.

The most important concept in computer science isn’t writing code, it’s efficient algorithms. A programmer works to write code to solve a problem. A computer scientist works to create a repeatable method for solving the problem in the most efficient way.

You, of course, want to do both: You want to write code to solve a problem as efficiently as possible.

What is efficiency? Well, it can mean different things to different people, or in different situations. Sometimes it means solving the problem as fast as possible. We call this time efficiency. Sometimes it means solving the problem using as little memory or disk storage as possible. We call this space efficiency. These are the most common efficiency goals, but there can be others. For example, efficiency might mean solving the problem using as little electrical energy as possible, or generating as little heat as possible, or spending as little money as possible.

But in our world today, storage is plentiful and cheap, electricity is readily available (unless you’re writing code to run on a deep-space probe), and we’ve pretty much solved the heat problem. By far, the most common goal is to solve a problem as fast as possible.

Time efficiency is the most common goal of a programmer who’s looking to make his code better.

If you want to make something faster, you have to be able to measure how fast it is.

When NASCAR driver Danica Patrick wants her race car to go faster, her team has to measure its speed.

When Yankees pitcher Aroldis Chapman wants to throw a faster baseball, his coach needs to measure its speed too.

But they have an advantage that you don’t have: A predictable environment. Danica always knows how far she has to drive, whether it’s a single-lap time trial or a 500-mile race. Aroldis throws his baseball exactly 60 feet 6 inches. Your code might execute on a small amount of data or a large amount. It might run on a slow computer or a fast one. You can’t predict your exact time because you don’t know the parameters of the test like Danica or Aroldis do.

So while Danica has a speedometer and Aroldis has a radar gun, you need a more abstract tool. Your tool needs to measure the speed of your code even if it doesn’t know anything about the amount of data it will process or the hardware it will run on.

Fortunately for you, this tool exists, and we call it Big-Oh.

Big-Oh is a notation for expressing the complexity of an algorithm. We usually use it for time complexity, which is the opposite of time efficiency. The higher the time complexity, the lower the time efficiency. In other words, when we use the word complexity in this way, a more complex algorithm is slower than a less complex one.

The Three Big Ideas of Big-Oh

Big Idea #1: There’s always n data

In Big-Oh, we take away the idea of a specific size for the data set, and just look at how the processing time grows with that size. We always say that the data is size n. It might be an array with n items, or a database with n rows, or a text file with n lines.

Sometimes it’s helpful to make up a number when you’re thinking about Big-Oh. You might imagine an array with 100 items in it, just to help you think clearly.

The fundamental idea of Big-Oh is that we can express the time complexity of an algorithm as a formula based on the size of the data, which is n.

For example, suppose you wanted to print out all the values in your array. You would iterate the array, and print out each value:

int[] values = …;    // imagine an array with 100 items in it
for (int i = 0; i < values.length; i++)
    System.out.println(values[i]);    // print each value

How long will this take for our array of 100 items? Well, we can take some measurements and find out: After some quick tests on a computer I have here, it looks like fetching a value from the array takes 3 microseconds, and printing it out takes 155 microseconds. The overhead of the loop adds another 2 microseconds. That’s 160 microseconds for every item in the array (OK, I lied. I didn’t test that, I made those numbers up, but you get the idea….).

Since there are 100 items, it will take 100 x 160 = 16000 microseconds, or 16 milliseconds, to print out the items in our array.

What if the array gets bigger? What if there are 1,000 items? We can still figure it out: 1,000 x 160 = 160,000 microseconds, or 160 milliseconds.

You’re probably already a step ahead of me by now. We can write a little math to express the time it will take to print out this array in terms of the size of the array, which is n: 160 x n.

“But wait!,” you say… “What if it’s on a different computer!? What if it takes 4 microseconds to fetch a value instead of 3!? Professor Jake, your whole system is a scam!!”

Big Idea #2: Things that take constant time don’t matter

Remember that we said that we need a tool for measuring time complexity regardless of the size of data or the speed of the computer. In Big-Oh, we take the things that take a constant number of microseconds, like fetching a value from an array, printing it out, or the overhead of the loop, and we simply ignore them.

You see, what matters about our code above isn’t that it takes 160 microseconds to print out a value, it’s that the time it takes to print out the array is exactly proportional to the size of the array. An array of 1000 items takes 10 times as long as an array of 100 items.

Can you predict how long an array of 5000 items will take?

If you said “5 times as long as an array of 1000,” or “50 times as long as an array of 100,” then you’re getting the idea! The constant 160 microseconds doesn’t matter. What matters is that it will happen 5000 times.

So, in Big-Oh notation, we would say that the complexity of this algorithm is O(n). The O() part is just our nerdy way of writing “Big-Oh.” The n is the important part: It means this code’s time complexity can be expressed as some constant (that we don’t care about) times n.

We can make a graph of this algorithm’s performance, with the number of items in the array along the x-axis and the time taken along the y-axis.

Another way to think about Big-Oh is that if we did a bunch of experiments and made that chart for our code, Big-Oh would describe the shape of the line on the chart. In this case, the chart shows a straight line. Because it’s a straight line, we say the time to print the items is linearly proportional to the number of items in the array. Code that is O(n) has a running time that increases linearly as the amount of data increases. Later, we’ll see some other kinds of lines on the graph, and the Big-Oh that’s associated with them.
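
If you’d like to produce that chart yourself, here’s a minimal sketch of the experiment. The class name LinearTiming is mine, and I’m summing the items instead of printing them, because console output is so slow it would swamp the measurement; the work per item is still constant:

// A rough timing experiment: double n, and the time should roughly double.
// (Real measurements are noisy: JIT warmup, caching, and so on, but the
// trend should be clearly linear.)
public class LinearTiming {
    public static void main(String[] args) {
        for (int n = 100_000; n <= 1_600_000; n *= 2) {
            int[] values = new int[n];      // an array of n items

            long start = System.nanoTime();
            long sum = 0;
            for (int i = 0; i < values.length; i++)
                sum += values[i];           // constant-time work per item
            long elapsed = System.nanoTime() - start;

            System.out.println("n = " + n + "  time = " + elapsed + " ns  (sum = " + sum + ")");
        }
    }
}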

In a moment, I’m going to give you some code and ask you to figure out what the Big-Oh would be. But first, let’s talk about talking: We write Big-Oh as O(n), but how do we say it? The ‘O’ in O(n) stands for “order”, so we would say “this algorithm’s time complexity is order n.” When talking about Big-Oh, we’ll always use the word “order” when we see the O() part. Go ahead and try it. Say this out loud: “The time complexity is order n.”

Anyway, your turn! Here’s some code: Can you figure out the time complexity of the loop?

String s = "hello, world";
for (int i = s.length() - 1; i >= 0; i--)
    System.out.print(s.charAt(i));
System.out.println();

Seriously… try it out. Write down an answer before you read on.

Do you have your answer? OK… I’ll trust that you do. It’s the honor system here, after all….

Let’s take a look at the code: It takes a string, and iterates the string, but it does it backwards. It prints out each character in the string. Once it’s done, it prints out a newline.

It looks like this code will print out the string backwards, like this: dlrow ,olleh

But what about its time complexity? What is its Big-Oh?

If we examine the code, we’ll see that there’s some loop overhead, which is constant time. The loop itself will run through the whole string, so it’ll do it n times (where n is the number of characters in the string). Inside the loop, we fetch a character from the string and print it out, which should take constant time. Once the loop is done, it prints out a newline, which also takes constant time.

So the time is ((loop overhead) + (fetch and print)) x n + (print newline).

Since loop overhead, fetching and printing, and printing a newline are all constant time operations, we ignore them, and we’re just left with n. So the complexity here is, again, O(n). The time it takes to print a string in reverse is linearly proportional to the number of characters in the string.
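
By the way, Java’s standard library can reverse a string for you, and that’s still O(n), because under the hood it still has to touch every character:

// The library route; still O(n), since every character gets visited.
String reversed = new StringBuilder(s).reverse().toString();
System.out.println(reversed);    // dlrow ,olleh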

Programmer Pro Tip: EVERY algorithm that iterates an entire array or list from beginning to end exactly once, doing constant-time work on each item, is O(n).
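
For example, here’s a hypothetical findMax method (the name is mine, it’s not from any library). It makes exactly one pass over the array, doing constant-time work on each item, so by the pro tip it’s order n:

// One complete pass over the array, constant work per item: O(n).
static int findMax(int[] values) {
    int max = values[0];                 // assumes the array isn't empty
    for (int i = 1; i < values.length; i++) {
        if (values[i] > max)
            max = values[i];
    }
    return max;
}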

Big Idea #3: Assume the worst

Here’s our final big idea: This code searches an array of integers to see if there’s a 42 in the array:

static boolean contains42(int[] numbers) {
    for (int i = 0; i < numbers.length; i++) {
        if (numbers[i] == 42)
            return true;    // found a 42: stop early
    }
    return false;           // no 42 anywhere in the array
}

At first glance, this looks like another example of an O(n) algorithm, because it has a single loop that iterates an array exactly once.

But there’s a little trick hiding in there: The statement return true; will cause the loop to end early. We know now that we can ignore constant time things like the loop overhead and the time to fetch an element and compare it to 42, and just focus on how many times the loop will execute.

But now we can’t predict that either! What if the very first item in the array is 42? Then the loop will only execute once, and it won’t look at all n items, it will only look at 1. Does that make it O(1)? What if there isn’t a 42 in the array at all? Then it will look at all the items before returning false. How are we supposed to know the Big-Oh if we can’t predict how many pieces of data it will look at!? How!?

The answer is simple: Assume the worst.

What is the worst-case (time-wise) scenario for this algorithm? In this case, the thing that would take the longest is if there’s no 42 anywhere in the array, so the program has to look at every item in the array before finally returning false.

For the purposes of Big-Oh analysis, we always assume the worst case scenario for the algorithm, and then use that in our analysis. So in this case, since the worst case leaves us with a loop that iterates the whole array, doing some constant time operations that we’ll ignore, we can safely say that this algorithm is also O(n), because the time it will take to search for the 42 is linearly proportional to the size of the data.
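
To make the extremes concrete, here are two made-up inputs for the search above:

// Hypothetical inputs for the 42-search, showing the two extremes.
int[] bestCase  = {42, 7, 7, 7, 7};   // 42 comes first: the loop looks at 1 item
int[] worstCase = {7, 7, 7, 7, 7};    // no 42 at all: the loop looks at all n items
// Big-Oh analysis uses the worst case, so the search is O(n).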

Next time, we’ll look at some more complex algorithms and their associated Big-Oh. We’ll see some things that are really terribly complex, and find some things that are even better than O(n). We’ll even use the word logarithm in a sentence, and you’ll understand what it means!

But first, one final thing: What if I asked you to figure out the time complexity of this code:

System.out.println(42);

Here there’s no loop and no array; there’s just one call to the println function and a single integer piece of data. We do have data, but there’s only one item, its size can’t change, and there’s only one operation, which takes constant time, so we should ignore it.

Technically, we could say this is O(n), since it will take n operations (here, n = 1) to go over the one piece of data, but remember what we said about ignoring constants? In this case, not only is the operation constant-time, but the amount of data is constant too, so we can ignore that as well!

In this case, we say the time complexity is O(1), because it’s going to execute in constant time, no matter what the data is (printing a 1, a 42, or a 65,535 will all take the same amount of time).
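
Array indexing is another classic constant-time operation: fetching an item by its index takes one step whether the array holds ten items or a million. A quick made-up example:

// Fetching by index is a single step, no matter how big the array is: O(1).
int[] values = new int[1_000_000];
int x = values[500_000];    // same cost as fetching values[5] from a 10-item array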

Homework

What!? Homework!? (I told you I’m a professor, of course there’s homework!)

Open up the code for a project you worked on recently and find a loop that you think is O(n) time complexity. In any reasonable project, there are probably several. Go identify one.

There’s more where that came from!

Subscribe to get part 2 of my series on Big-Oh and more. I’ll send you awesome smart stuff to take your programming skills to the next level and beyond.



