I am learning about Big-O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the running time grows in proportion to the size of the input; the same goes for, say, quadratic time O(n^{2}), and even for algorithms, such as permutation generators, whose O(n!) running times grow factorially.
For example, the following function is O(n) because the algorithm grows in proportion to its input n:
#include <stdio.h>

void f(int n) {
    int i;
    for (i = 0; i < n; ++i)
        printf("%d", i);
}
Similarly, if there was a nested loop, the time would be O(n^{2}).
But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)?
I do know (maybe not in great detail) what Logarithm is, in the sense that: log_{10} 100 = 2, but I cannot understand how to identify a function with a logarithmic time.
I cannot understand how to identify a function with a log time.
The most common attributes of a logarithmic running-time function are that:

- the choice of the next element on which to perform some action is one of several possibilities, and
- only one will need to be chosen;

or

- the elements on which the action is performed are digits of n.
This is why, for example, looking up people in a phone book is O(log n). You don’t need to check every person in the phone book to find the right one; instead, you can simply divide-and-conquer by looking based on where their name falls alphabetically, and you only need to explore a small part of each section before you eventually find someone’s phone number.
Of course, a bigger phone book will still take you longer, but the time won’t grow in proportion to the increase in size.
We can expand the phone book example to compare other kinds of operations and their running time. We will assume our phone book has businesses (the “Yellow Pages”) which have unique names and people (the “White Pages”) which may not have unique names. A phone number is assigned to at most one person or business. We will also assume that it takes constant time to flip to a specific page.
Here are the running times of some operations we might perform on the phone book, from fastest to slowest:
O(1) (in the worst case): Given the page that a business’s name is on and the business name, find the phone number.
O(1) (in the average case): Given the page that a person’s name is on and their name, find the phone number.
O(log n): Given a person’s name, find the phone number by picking a random point about halfway through the part of the book you haven’t searched yet, then checking to see whether the person’s name is at that point. Then repeat the process about halfway through the part of the book where the person’s name lies. (This is a binary search for a person’s name.)
O(n): Find all people whose phone numbers contain the digit “5”.
O(n): Given a phone number, find the person or business with that number.
O(n log n): There was a mix-up at the printer’s office, and our phone book had all its pages inserted in a random order. Fix the ordering so that it’s correct by looking at the first name on each page and then putting that page in the appropriate spot in a new, empty phone book.
For the below examples, we’re now at the printer’s office. Phone books are waiting to be mailed to each resident or business, and there’s a sticker on each phone book identifying where it should be mailed to. Every person or business gets one phone book.
O(n log n): We want to personalize the phone book, so we’re going to find each person or business’s name in their designated copy, then circle their name in the book and write a short thank-you note for their patronage.
O(n^{2}): A mistake occurred at the office, and every entry in each of the phone books has an extra “0” at the end of the phone number. Take some white-out and remove each zero.
O(n · n!): We’re ready to load the phone books onto the shipping dock. Unfortunately, the robot that was supposed to load the books has gone haywire: it’s putting the books onto the truck in a random order! Even worse, it loads all the books onto the truck, then checks to see if they’re in the right order, and if not, it unloads them and starts over. (This is the dreaded bogosort.)
O(n^{n}): You fix the robot so that it’s loading things correctly. The next day, one of your coworkers plays a prank on you and wires the loading dock robot to the automated printing systems. Every time the robot goes to load an original book, the factory printer makes a duplicate run of all the phone books! Fortunately, the robot’s bug-detection systems are sophisticated enough that the robot doesn’t try printing even more copies when it encounters a duplicate book for loading, but it still has to load every original and duplicate book that’s been printed.
@cletus: Coincidental, I’m afraid. I picked it because phone books have a large N, people understand what they are and what they do, and because it’s versatile as an example. Plus I got to use robots in my explanation! A win all around. (Also, it looks like your answer was made before I was even a member on Stack Overflow to begin with!)
Feb 23, 2010 at 0:40
“A mistake occurred at the office, and every entry in each of the phone books has an extra “0” at the end of the phone number. Take some white-out and remove each zero.” <-- this is not order N squared. N is defined as the size of the input. The size of the input is the number of phone numbers, which is the number of numbers per book times the number of books. That’s still a linear-time operation.
Apr 10, 2010 at 17:13
@Billy: In this example, N is the number of people in a single book. Because every person in the phone book also gets their own copy of the book, there are N identical phone books, each with N people in it, which is O(N^2).
Apr 10, 2010 at 17:32
Isn’t O(1) the best case, rather than worst case as it is strangely highlighted as?
– Svip, May 26, 2013 at 8:29
How incredible is it that this answer has been accepted to the original question? In my opinion, this answer does not explain what O(log n) time really is: you only give a lot of examples (some of which have almost nothing to do with the question) without always explaining exactly why the complexities of those operations are correct. Incredible!
– nbro, Aug 12, 2015 at 13:55
O(log N) basically means time goes up linearly while n goes up exponentially. So if it takes 1 second to compute 10 elements, it will take 2 seconds to compute 100 elements, 3 seconds to compute 1,000 elements, and so on.
It is O(log n) when we do divide-and-conquer types of algorithms, e.g., binary search. Another example is quicksort, where each time we divide the array into two parts, it takes O(N) time to find a pivot element. Hence the total is N · O(log N), i.e. O(N log N).
Three lines of wisdom that beat all other essay answers… 🙂 Just in case somebody is missing it: in a programming context, the base of log is 2 (not 10), so O(log n) scales like 1 sec for 10 elements, 2 sec for 20, 3 for 40, etc.
– nawfal, May 23, 2014 at 10:32
Agreed, concise and clear, although the ending question from the OP was how to identify a logarithmic function, not quite “what is it”
– Adam, Jul 31, 2014 at 17:46
Yes, the logarithmic function is the inverse of the exponential function: log_a(x) is the inverse of a^x. A qualitative analysis of these functions with graphs would give more intuition.
Oct 16, 2014 at 1:02
This took me about 3 read-throughs to realize it wasn’t wrong. Time goes up linearly, while the number of elements is exponential. This means more elements during less time. This is mentally taxing for those who visualize log as the familiar log curve on a graph.
Feb 15, 2017 at 9:30
@nawfal That’s not quite right. When you use the notation O(log n), the base of the log is irrelevant because all log functions are proportional to each other. That is why you omit the base, which is not the case with exponentials (e.g. O(2^n) is not the same as O(10^n)). So even in the context of base2, the scaling behaviour you described is not generally correct – it depends on the constant factor.
– Mo B., Sep 29, 2020 at 15:35
Many good answers have already been posted to this question, but I believe we really are missing an important one – namely, the illustrated answer.
What does it mean to say that the height of a complete binary tree is O(log n)?
The following drawing depicts a binary tree. Notice how each level contains double the number of nodes compared to the level above (hence binary):
Binary search is an example with complexity O(log n). Let’s say that the nodes in the bottom level of the tree in figure 1 represent items in some sorted collection. Binary search is a divide-and-conquer algorithm, and the drawing shows how we will need (at most) 4 comparisons to find the record we are searching for in this 16-item dataset.
Assume we instead had a dataset with 32 elements. Continue the drawing above to find that we will now need 5 comparisons to find what we are searching for, as the tree has only grown one level deeper when we doubled the amount of data. As a result, the complexity of the algorithm can be described as a logarithmic order.
Plotting log(n) on a plain piece of paper will result in a graph where the rise of the curve decelerates as n increases:
“Notice how each level contains the double number of nodes compared to the level above (hence binary)” This is incorrect. What you’re describing is a balanced binary tree. A binary tree just means each node has at most two children.
– Oenotria, May 27, 2013 at 18:28
In fact, it’s a very special balanced binary tree, called a complete binary tree. I’ve edited the answer but need someone to approve it.
Dec 14, 2013 at 3:44
A complete binary tree doesn’t need to have its last level completely filled. I would say a “full binary tree” is more appropriate.
– Mr. AJ, Jul 29, 2014 at 1:21
Your answer responds more concretely to the OP’s original problem, so it is better than the currently accepted answer (IMO), but it is still very incomplete: you just give half an example and 2 images…
– nbro, Aug 12, 2015 at 14:04
This tree has 31 items in it, not 16. Why is it called a 16 item data set? Every node on it represents a number, otherwise it would be an inefficient binary tree 😛
Nov 23, 2016 at 21:36
A 1-node binary tree has height log2(1)+1 = 1, a 2-node tree has height log2(2)+1 = 2, a 4-node tree has height log2(4)+1 = 3, and so on. An n-node tree has height log2(n)+1, so adding nodes to the tree causes its average height to grow logarithmically.
Feb 21, 2010 at 22:40
One thing I’m seeing in most answers is that they essentially describe “O(something)” as meaning the running time of the algorithm grows in proportion to “something”. Given that you asked for the “exact meaning” of “O(log n)”, that’s not true. That’s the intuitive description of Big-Theta notation, not Big-O. O(log n) intuitively means the running time grows at most proportionally to “log n”: stackoverflow.com/questions/471199/…
Feb 22, 2010 at 10:42
I always remember divide and conquer as the example for O(log n)
Feb 23, 2010 at 13:23
It’s important to realize that it’s log base 2 (not base 10). This is because at each step in an algorithm, you remove half of your remaining choices. In computer science we almost always deal with log base 2 because we can ignore constants. However, there are some exceptions (e.g., quadtree run times are log base 4).
May 27, 2013 at 1:33
@Ethan: It doesn’t matter which base you are in, since base conversion is just multiplication by a constant. The formula is log_b(x) = log_d(x) / log_d(b), and log_d(b) is just a constant.
May 27, 2013 at 1:46
