
Big O is a notation for describing the worst-case performance of an algorithm as its input grows. This post walks through some real-world examples of big O notation.

## O(1) – Constant Time

A constant time algorithm will always take the same amount of time to run.

**Example:**

Finding a book in a library.

**Why:**

There are many ways to find a book in a library, but I’m going to stick with the simplest: ask the librarian. Assuming the librarian is familiar with their library, it should take about the same amount of time to find a book about mathematics as it would a book about poetry.

For another example, see indexing an array.
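The array-indexing example can be sketched in a few lines of Python (the shelf and book titles are my own made-up illustration):

```python
# A Python list behaves like an array: indexing takes one step
# whether the shelf holds 4 books or 4 million.
shelf = ["mathematics", "poetry", "history", "cooking"]

def book_at(shelf, position):
    # One direct lookup, regardless of len(shelf) -- O(1).
    return shelf[position]

print(book_at(shelf, 1))  # poetry
```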

## O(n) – Linear Time

A linear algorithm runs once for each of the n items.

**Example:**

Finding a CD in a stack of CDs

**Why:**

The simplest method of finding a CD in a stack of CDs is to just look at the one on top. If that CD is the one you’re looking for, you’ve found it. If it is not, look at the next one. Repeat until you run out of CDs. If the CD we’re looking for is the last one in the stack, we had to look at every CD to find it.

But what about all the times that you look for a CD and the CD you’re looking for is in the middle or on the top? Big O notation describes the worst-case running time of your algorithm. It assumes that every search will be for the last CD (or a CD that doesn’t exist). Big O is not a statistical model of how often a given condition will occur in an algorithm!
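The CD hunt above is an everyday linear scan. A minimal Python sketch (the stack and titles are my own illustration, not from the post):

```python
def find_cd(stack, title):
    # Look at CDs from the top of the stack down. In the worst case the
    # title is on the bottom, or missing entirely, so we touch all n
    # CDs -- O(n).
    for position, cd in enumerate(stack):
        if cd == title:
            return position
    return -1  # searched the whole stack and never found it

stack = ["OK Computer", "Kid A", "Amnesiac", "In Rainbows"]
print(find_cd(stack, "In Rainbows"))  # 3 -- worst case: bottom of the stack
print(find_cd(stack, "Pablo Honey")) # -1 -- also worst case: not there at all
```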

## O(log n) – Logarithmic Time

A logarithmic algorithm runs log₂(n) times.

**Example:**

Let’s play a guessing game. I’m going to pick a number between 0 and 100 and you have to guess it. You have to try to guess the number in as few guesses as possible. Every time you guess I will tell you if my number is higher or lower.

**Why:**

You could devise a number of strategies for playing this game, but one strategy is particularly good at this type of game: binary search. Each time we guess, we guess the number right in the middle of the remaining range. By doing this we halve the number of possibilities every time. Let’s play a little game…

The game above plays out the worst-case performance of an O(log n) algorithm like binary search. Instead of making 100 guesses at worst, we only make 7!
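A sketch of that middle-guessing strategy in Python (counting the guesses is my addition, to make the worst case visible):

```python
def guess_the_number(secret, low=0, high=100):
    # Always guess the middle of the remaining range, halving the
    # possibilities each time -- binary search, O(log n).
    guesses = 0
    while low <= high:
        guesses += 1
        middle = (low + high) // 2
        if middle == secret:
            return guesses
        elif middle < secret:
            low = middle + 1   # "higher"
        else:
            high = middle - 1  # "lower"

print(guess_the_number(50))  # 1 -- the very first middle guess
print(guess_the_number(42))  # 7 -- one of the worst cases
```

For any secret between 0 and 100 this never takes more than 7 guesses, since 101 possibilities fit within 2⁷.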

## O(n²) – Quadratic Time

A quadratic time algorithm runs n² times, or n times for each of the n items.

**Example:**

Check if there are any duplicates in a deck of cards.

**Why:**

The simplest way to check if there are any duplicate cards in a deck is to pick the first card from the deck and compare it to every other card in the deck. If there are no matches, take the second card from the deck and compare it to every other card. Continue until you have checked all the cards.

By the time we are done we have compared each of the 52 cards against the whole deck, for a total of 52 × 52 = 2,704 comparisons. There are ways to lessen this number, like not looking through cards that you have already compared. However, the most important thing to remember with big O notation is that it always describes the worst-case time, not the average.
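A Python sketch of that compare-everything pass (representing cards as (rank, suit) tuples is my own choice):

```python
def has_duplicates(deck):
    # Compare every card to every other card: roughly n * n
    # comparisons (52 * 52 for a full deck) -- O(n^2).
    for i in range(len(deck)):
        for j in range(len(deck)):
            if i != j and deck[i] == deck[j]:
                return True
    return False

deck = [(rank, suit) for suit in "SHDC" for rank in range(1, 14)]
print(has_duplicates(deck))             # False -- a fresh deck has no duplicates
print(has_duplicates(deck + deck[:1]))  # True -- we slipped in a copy of one card
```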

Here’s what the process looks like if you don’t re-check cards you’ve already compared.
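In code, not re-checking cards just means starting the inner loop one card past the outer one; a sketch along those lines (the (rank, suit) tuples are my own illustration):

```python
def has_duplicates_no_recheck(deck):
    # Start the inner loop at i + 1 so each pair of cards is compared
    # only once -- roughly half the comparisons of the naive pass.
    for i in range(len(deck)):
        for j in range(i + 1, len(deck)):
            if deck[i] == deck[j]:
                return True
    return False

deck = [(rank, suit) for suit in "SHDC" for rank in range(1, 14)]
print(has_duplicates_no_recheck(deck))             # False -- all 52 cards unique
print(has_duplicates_no_recheck(deck + deck[:1]))  # True -- duplicate found
```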

I’ll come back later and add some more big O notations, so check back. For now I’ll leave you with a question to ponder. What is the big O notation of the diagram above?

## Links

**Sorting** (this will be the topic of another WITH article, but for now…)

- Visual Representation of Sorting Algorithms in Javascript
- Visualizing Sorting Algorithms
- What different sorting algorithms sound like

## Comments

Very good explanation. Especially the examples and diagrams. Thanks a lot for the effort!!

The answer for the last one is n(n+1)/2. Is that right??

That’s a good question! That looks right to me mathematically, but big O isn’t usually that specific: it drops constant factors and lower-order terms, so n(n+1)/2 would just be written as O(n²). That’s why you never see an algorithm described as O(n(n+1)/2).

thanks for this post… very clear explanation, just what I needed. But what about those formulas? They still really confuse me…

Thanks! I’m not sure what formulas you are having trouble with?

formula to compute for the time…

These formulas don’t compute time in the traditional sense. You can’t take big O notation, for instance O(n) or linear time, and say, “I have 1,000 items so O(1,000) will take 3 minutes and 22 seconds.”

Instead it allows you to compare algorithms without doing those calculations. It’s kind of like an elaborate estimating method. So we can say things like:

An O(log n) algorithm will run faster than an O(n) algorithm for a sufficiently large value of n.
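To make that comparison concrete, here’s a tiny Python sketch counting worst-case steps (the counts are illustrative; big O compares growth, never clock time):

```python
import math

# Worst-case "steps" for each notation at a few input sizes.
# These are counts of operations, not seconds.
for n in (10, 1_000, 1_000_000):
    linear_steps = n                       # O(n): look at every item
    log_steps = math.ceil(math.log2(n))    # O(log n): halve the range each time
    print(f"n={n}: O(n) -> {linear_steps} steps, O(log n) -> {log_steps} steps")
```

The gap is what matters: at a million items the linear scan does a million steps while the logarithmic one does about twenty.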

That’s it. It can sometimes be difficult to see why this information is useful. (I know I struggled with it for a while, thinking big O was a way to measure the “clock time” of my algorithms, but it’s not; it’s just a really broad and general way to compare things while doing as little calculation as possible.)

For me, big O is most useful when *talking* about algorithms.

wow! cool… yeah, right now I clearly understand… I got your point. Thanks a lot. Good day always!

Excellent article!