Big-O notation sounds like a scary concept. I think it’s the mathy word ‘notation’ that makes it seem that way, but it’s actually not that difficult to wrap your mind around.
Defining Big-O Notation
Big-O notation is used to classify algorithms by how they respond (e.g., the time it takes to process) to changes in input size.
In other words, Big-O notation describes how long an algorithm will take to run based on how many inputs (or things, or items in an array, or words to sort, etc.) it’s given. You can compare the Big-O notation of one algorithm to another to determine which one has a faster processing time and would be better to use in a given situation. A fancier explanation: Big-O notation helps you analyze algorithms for efficiency.
Big-O notation helps you determine the processing speed of an algorithm (which is really just a series of steps) when it has A LOT of inputs and therefore probably A LOT of steps. If you know you only need to look through a list of 10 words, you probably won’t be too concerned about Big-O notation. But if you have a dictionary with 1,000,000 words, an unknown number of words, or a seemingly infinite number of words, then Big-O notation matters.
How Big-O Notation Works
Let’s say we have three different equations, each representing the number of steps one of three different algorithms takes to run. Each algorithm iterates through an array with n items.
(1) T(n) = 3n³
(2) T(n) = 4n² + 12n – 5
(3) T(n) = 240n
If n is a relatively small number then all of the constants in these equations will affect the performance speed. For example if there are only 10 items in our array then…
(1) T(n) = 3,000
(2) T(n) = 515
(3) T(n) = 2,400
It’s clear that the second equation takes the fewest steps and is therefore the fastest when n = 10.
But what about when n = 100? Let’s see…
(1) T(n) = 3,000,000
(2) T(n) = 41,195
(3) T(n) = 24,000
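If you want to check these numbers yourself, here’s a quick Python sketch (mine, not from the article) that evaluates the three step-count equations; note that I use T(n) = 240n for the third equation, since that’s what matches the computed values above.

```python
# Evaluate the three step-count equations at different input sizes.

def t1(n):
    """Equation (1): T(n) = 3n^3"""
    return 3 * n**3

def t2(n):
    """Equation (2): T(n) = 4n^2 + 12n - 5"""
    return 4 * n**2 + 12 * n - 5

def t3(n):
    """Equation (3): T(n) = 240n"""
    return 240 * n

for n in (10, 100, 1000):
    print(f"n = {n:>5}: {t1(n):>13,} {t2(n):>11,} {t3(n):>9,}")
```

Running this shows the same pattern as the tables above: the cubic equation explodes as n grows, while the linear one grows, well, linearly.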
Well, that got out of hand quickly. Now it’s clear that what really matters in these equations are the exponents. Everything else (the coefficients and constants) has a negligible effect on the expression’s value when we think about really big numbers. This is where Big-O notation comes into play. Big-O notation ignores everything that is essentially negligible and captures what remains in order to describe the time complexity of an algorithm. So if we assign Big-O notation to each equation we get:
(1) T(n) = O(n³)
(2) T(n) = O(n²)
(3) T(n) = O(n)
We say that T(n) = O(n³), or that it has an order of n³ time complexity. What’s important to know is that the higher the exponent of n, the slower the algorithm.
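To make “order” concrete, here’s a small Python sketch (my own illustration, not from the article) that counts the steps an O(n) loop and an O(n²) nested loop actually take over the same list:

```python
def linear_steps(items):
    """O(n): one pass over the list -- one step per item."""
    steps = 0
    for _ in items:
        steps += 1
    return steps

def quadratic_steps(items):
    """O(n^2): a full pass over the list for every item in it."""
    steps = 0
    for _ in items:
        for _ in items:
            steps += 1
    return steps

data = list(range(100))
print(linear_steps(data))     # 100 steps
print(quadratic_steps(data))  # 10000 steps
```

With only 100 items the quadratic version already takes 10,000 steps; bump the list to 1,000,000 items and it would take a trillion.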
Examples (with pretty graphs)
Let’s put some of these examples on a graph so that you can see what happens as n, or the items in an array, gets bigger. Get ready for a flashback to your days with a good ol’ TI-89.
Take four different examples of Big-O notation:
O(1), O(log₂ n) (hint: this actually has fewer steps, or runs faster, than the number of inputs it’s given), O(n), and O(n²)
Here’s a graphical representation of these:
Blue: O(1) vs. Red: O(log₂ n) vs. Purple: O(n) vs. Green: O(n²)
As you can see, the algorithm with an order of n² time complexity gets slow very quickly when compared to the other options on the graph. Something that runs in logarithmic time or constant time would be much quicker.
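To show why logarithmic time is so fast, here’s a hedged Python sketch of binary search (the classic O(log₂ n) algorithm); the sorted list of numbers is just a stand-in for something like a sorted dictionary of words. Each comparison throws away half of the remaining list, so even a million items need at most about 20 comparisons.

```python
def binary_search(sorted_items, target):
    """O(log2 n): each comparison halves the remaining search range.

    Returns (index, comparisons) if found, (-1, comparisons) if not.
    """
    lo, hi = 0, len(sorted_items) - 1
    comparisons = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_items[mid] == target:
            return mid, comparisons
        elif sorted_items[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1, comparisons

items = list(range(1_000_000))
index, comparisons = binary_search(items, 999_999)
print(comparisons)  # at most 20, since log2(1,000,000) is about 19.9
```

Compare that to a linear scan, which would need up to 1,000,000 comparisons for the same list.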
Putting it all together
In summary, Big-O notation helps us understand the time complexity of an algorithm, and it’s especially useful when we’re writing algorithms that will handle a lot of data. Of course, it’s always a good idea to use the most efficient algorithm possible, and Big-O notation helps us determine which one that is.
Read here for other ways to determine the processing time.