Time Complexity/Big O Notation

Scaling Algorithms Instead of Applications

Tim Roberts
JavaScript Scene

--

There comes a time in every developer’s journey when they will be asked about the time complexity of their algorithm. It might be during a whiteboard interview (which you should never do) or while talking to other developers about a certain way of doing things. And while premature optimization leads to bike-shedding, knowing and understanding Big O notation can guide the way we as developers think about and shape our applications.

To understand what I mean, first we need to set the rules for how we will grade the time complexity of an algorithm and have a basic definition of what an algorithm is.

When we say ‘algorithm’, all we mean is a defined set of steps that our worker will follow in order to produce an outcome. For instance, the algorithm for making coffee at my mentor’s house is as follows:

  1. Get a cup from the cupboard
  2. Get a K-cup pod of the coffee I want
  3. Ensure that the water in the Keurig is at or above the minimum level
  4. Place the cup on the grate
  5. Place the K-cup inside of the Keurig
  6. Press ‘Start’
  7. Wait till the cup is full
  8. Enjoy

While there might be smaller steps, like how to get a cup from the cupboard or how to place the K-cup inside of the machine, the algorithm above outlines each step in the process, and anyone following these steps can make a cup of coffee.

What if we had a list of people that we wanted to make cups of coffee for? This is where time complexity comes into the picture.

Suppose we had n people who wanted cups of coffee. Since the steps and time involved in making a cup of coffee will be the same each time, we say that making n cups of coffee will take n units of time to complete. So 5 cups of coffee will take 5 units of time, or in Big O notation, O(5). If we wanted to make 100 cups of coffee, it would take O(100). It is common practice, however, to express Big O notation asymptotically, or ‘as the input grows to infinity’. In this way, our coffee-making algorithm is O(n).
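To make that concrete, here is a rough sketch in code (the function names here are just for illustration, they aren’t part of the recipe above):

// One cup always takes the same fixed set of steps: O(1) per cup
const makeCupOfCoffee = (person) => `coffee for ${person}`;

// Given n people, we repeat that fixed work n times, so this is O(n)
const makeCoffeeFor = (people) => people.map(makeCupOfCoffee);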

Another rule for grading our algorithm in terms of time complexity is that we usually use the worst case. That means when we say something is O(n), we are saying that the longest this algorithm will take is the amount of time it takes to perform the action against n elements.

Now that we have our rules and vocabulary defined, let’s see how this all looks in code.

First, let’s look at O(1). This means that regardless of the amount or size of input, it will always take 1 unit of time to perform the algorithm. Examples of algorithms that are O(1) include accessing an item in an array or hash/object:

// Given n sized list/object, it will take 1
// unit of time to return the value at that index
const valueAt = (key, obj) => obj[key];

Since arrays are just indexed objects, we can access both objects and arrays through this algorithm; in both cases we are just retrieving a value from a place in memory.
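For example (the sample values here are just for illustration):

// Both lookups are a single step, no matter how big the collection is
valueAt(2, ['red', 'green', 'blue']); // 'blue'
valueAt('name', { name: 'Tim', cups: 2 }); // 'Tim'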

Let’s say that instead of knowing where the item is held in the list, we needed to go through the entire list until we found what we needed. Once again, we are looking at worst case here, so let’s assume that the item we need is at the very end of the list or not in the list at all. What would the time complexity be of that algorithm?

// Given an n sized unsorted list, it will
// take up to n units of time to find a value
const indexOf = (val, list) => {
  for (let i = 0; i < list.length; i++) {
    if (list[i] === val) {
      return i;
    }
  }
  // the value wasn't found anywhere in the list
  return -1;
};

We see that this algorithm will take longer because we are looping over something instead of performing a single operation. Our best case is that the value is at index 0 and we can return early. Our worst case is that the item isn’t in the array at all or is at the very end, in which case it will take n units of time to get the answer. Because of that, we say that the above algorithm is O(n), or linear: the time it takes grows in proportion to the number of elements in the list.

Next let’s look at an example of an algorithm that is O(n²), selection sort. This is one of the worst sorts time-complexity wise, even for its best case. Let’s look and see why:
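Here is a minimal selection sort sketch (an illustrative implementation):

// For every index, scan the rest of the list for the smallest
// remaining value and swap it into place: a loop inside a loop
const selectionSort = (list) => {
  for (let i = 0; i < list.length; i++) {
    let minIndex = i;
    for (let j = i + 1; j < list.length; j++) {
      if (list[j] < list[minIndex]) {
        minIndex = j;
      }
    }
    // swap the values in place
    [list[i], list[minIndex]] = [list[minIndex], list[i]];
  }
  return list;
};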

Aside from changing values in place, what makes this algorithm so bad is that on every pass over the list we have to iterate over the rest of the list from that point on. As our list grows, the number of iterations we have to do grows with the square of the list’s length, which is quadratic rather than linear.

Luckily for us, JavaScript arrays come with a native sort method, which we can use to sort a list without having to worry too much about time complexity. But how do we keep the list sorted? How do we append an item to a sorted list?

We could newList = [...list, item].sort() our way to a solution, but that means that for every insert we are iterating over the whole array. We know that once we find where the item goes, we can trust that the rest of the array is correct as well.
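As a sketch (assuming a list of numbers, since the default sort compares values as strings):

// Re-sorting touches the whole array on every single insert
const insertAndResort = (item, list) => [...list, item].sort((a, b) => a - b);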

Iterating over the whole list on each insertion is at best O(n), but I wonder if we can create an algorithm that matches that worst case of O(n) while having a better best case.
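Something along these lines captures the idea (sortedInsert is an illustrative name, and this is a sketch rather than a definitive implementation):

// Walk the sorted list until we find where the new item belongs,
// then keep the untouched tail of the list as-is
const sortedInsert = (item, list) => {
  let result = [];
  let inserted = false;
  for (let i = 0; i < list.length; i++) {
    if (list[i] < item) {
      // still smaller than our item, keep walking
      result.push(list[i]);
    } else {
      // found the spot: previous results, the item, then the rest
      result = [...result, item, ...list.slice(i)];
      inserted = true;
      break;
    }
  }
  // if we never found a larger value, the item goes at the end
  return inserted ? result : [...result, item];
};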

Here we loop over each item and check whether it is smaller than the item we were given. If it is, we push that value to the new array and continue on to the next loop. If it isn’t, we know that the item we were given belongs next, so we set our result to be the previous results, the item, and then the rest of the list we were given. Sometimes, however, the item passed should go at the end of the list. To handle that case, we set a flag before we break and check it at our return.

The time complexity of the above algorithm, at its worst, is O(n) because if the item is greater than anything else in the list, we have to iterate over every item to figure that out. Best case, however, it’s basically O(1) because we can break from our loop at the first iteration since the item we were passed should be at the front of the list.

But I wonder: is there an even faster way? Just by bailing early, we moved from a best case of O(n) to a best case of O(1). Since there is no way we can beat 1, is there a way we can improve our worst case without hurting our best case too much?

What if, instead of having to search over every list item, we could gain some knowledge about where we should look next on each iteration, cutting down the possible iterations? Instead of walking the whole array, since it’s sorted, we can tell our worker: “Hey, based on the answer you get at this iteration, here is a sub-list to look at next time.”

What we are describing is binary search, and it lowers our time complexity tremendously. We tell the worker to look in the middle of the list and compare the item we passed against the item at that middle index. Then, based on the result, we search either the upper or the lower half of the list during the next iteration of the loop. We do this each step of the way, cutting our list in half and comparing the middle against the item, until our max index and our min index cross. Once they do, we know that the min index is exactly where the item we were given needs to go.
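A sketch of that binary search insert might look like this (again illustrative, with min and max exposed as extra arguments):

// Compare against the middle of the sorted list, then only keep
// searching the half that could still contain the insert point
const sortedInsertBinary = (item, list, min = 0, max = list.length - 1) => {
  while (min <= max) {
    const mid = Math.floor((min + max) / 2);
    if (list[mid] < item) {
      // the item belongs somewhere in the upper half
      min = mid + 1;
    } else {
      // the item belongs somewhere in the lower half
      max = mid - 1;
    }
  }
  // once min and max cross, min is exactly where the item belongs
  return [...list.slice(0, min), item, ...list.slice(min)];
};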


While we had to expand the number of arguments and add more logic, we changed the search portion of our algorithm from O(n) to O(log n), or logarithmic instead of linear. Because the input is a sorted list, we can cut down the number of iterations our worker has to take during the search, decreasing the time complexity.

Below we can see how different types of time complexity compare to each other as n becomes larger (chart credit: http://callmenick.com/).

We can see that while linear time complexity, or O(n), can be more performant than logarithmic time complexity, or O(log n), for small inputs, once the size of our input grows, O(log n) really starts to shine. For a list of a million items, a linear scan may have to touch every element, while a binary search needs at most about 20 comparisons.

But Does It Matter?

Here’s the rub about time complexity: it’s easy to micro-optimize, and in my experience when working with the DOM, it is hardly ever the bottleneck to fix. If you have a helper package like ramda or lodash, use their insert/sort/etc. functions. Even if you resort to [...list, item].sort(), the 100 or so iterations you are doing are probably not the bottleneck.
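For instance, lodash ships a binary-search-based sortedIndex helper (the per-method require path below assumes a standard npm install of lodash):

const sortedIndex = require('lodash/sortedIndex');

const list = [10, 20, 40, 50];
// sortedIndex uses a binary search to find the insert point
const i = sortedIndex(list, 30); // 2
const newList = [...list.slice(0, i), 30, ...list.slice(i)];
// newList is [10, 20, 30, 40, 50]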

So why is it important to know? Why go through all the trouble of understanding this weird math-y thing if it’s probably not going to be an issue in your code’s performance?

If you want to solve interesting problems, time complexity is just about your only starting point.
