# Big O notation

Today I will discuss a topic that is useful to know for your programs, but even more importantly, for your job interviews. According to Wikipedia, Big O notation in computer science is used to classify algorithms according to how their run time or space requirements grow as the input size grows. It is essentially how you talk about and explain how efficient your algorithms are as their input grows, also known as “time complexity”. For example, how much will an algorithm slow down if you gave it a list of 10 items versus a list of 10,000? Below I will talk about the common types of Big O notation, with examples.

# Constant Time “O(1)”

This means that the algorithm’s run time stays constant no matter the size of the input. A good example would be setting bookmarks in your browser: no matter the size of the page you are going to, the bookmark takes you directly to that page in a single step. Other great examples of constant time are basic math operations, accessing a hash by its key, pushing to or popping from a stack, and returning a value from a function. As you will see in the function below, it has O(1) complexity because no matter how many numbers are in the list, you are always just returning the number at the first index.
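The original code example appears to be missing, so here is a minimal sketch of what it describes (the name `getFirst` is hypothetical):

```javascript
// Returns the element at the first index of a list.
// Whether the list has 3 elements or 10,000, this is a
// single step, so it runs in constant time: O(1).
function getFirst(list) {
  return list[0];
}

console.log(getFirst([7, 8, 9])); // 7

// Same single step on a much larger list:
const big = Array.from({ length: 10000 }, (_, i) => i);
console.log(getFirst(big)); // 0
```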

# Linear Time “O(n)”

This means the run time increases as the input increases. You can think of this like working out: if one rep of an exercise takes 15 seconds, then doing 3 reps would take you 45 seconds. One of the most common linear-time operations you will see in programming is traversing an array, using methods like forEach, map, or any other method that loops through an array from start to finish, as in the example below.
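The example referenced above seems to be missing; a minimal sketch using forEach (the name `sumList` is hypothetical):

```javascript
// Sums a list by visiting every element exactly once with forEach.
// Doubling the length of the list doubles the work, so this is O(n).
function sumList(list) {
  let total = 0;
  list.forEach((num) => {
    total += num; // one operation per element
  });
  return total;
}

console.log(sumList([1, 2, 3])); // 6
```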

# Quadratic Time “O(n²)”

This means the run time grows with the square of the input size. Some examples of algorithms that have quadratic time are bubble sort, insertion sort, and selection sort. Below we will look at an example.
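The example that the next paragraph walks through appears to be missing from the post; here is a sketch consistent with it. The names `smallList` and `bigList` come from the text, while `countPairs` is a hypothetical name:

```javascript
// Two nested loops: for each element we loop over every element
// again, so an input of size n does n * n units of work — O(n²).
function countPairs(list) {
  let operations = 0;
  for (let i = 0; i < list.length; i++) {
    for (let j = 0; j < list.length; j++) {
      operations++; // one operation per (i, j) pair
    }
  }
  return operations;
}

const smallList = [1, 2, 3, 4]; // n = 4
const bigList = Array.from({ length: 1000 }, (_, i) => i); // n = 1000

console.log(countPairs(smallList)); // 16
console.log(countPairs(bigList)); // 1000000
```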

As we look at this example, we see the function has two nested loops that each increment after every iteration. If we call the function on smallList, we get 16 operations. You can see how quickly that adds up, so imagine what it will be on bigList or something even larger: an array with only one thousand elements ends up creating one million operations, so that gives you an idea.

# Logarithmic Time “O(log n)”

This means that as the input size increases, the running time grows in proportion to the logarithm of the input size, so the run time barely increases as you increase the input. You can think of this as taking a book full of names in alphabetical order and looking for the name Samuel: you open the book halfway and land on Mary. You say to yourself that S comes after M, so now you split the second half again, and so on until you find the name Samuel. You can see how this is much quicker than going through the book page by page from the start until you get to the name Samuel, or whatever other name you are looking for. The most common example of logarithmic time is a binary search operation; we also see the log n factor in merge sort, Timsort, and heap sort (which are O(n log n) overall). Below we will see an example.
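The example seems to be missing, so here is a sketch of the classic binary search described above (the names `binarySearch` and `names` are hypothetical):

```javascript
// Binary search over a sorted array: each comparison halves the
// remaining search space, so the work grows with log₂(n) — O(log n).
function binarySearch(sorted, target) {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2); // open the "book" halfway
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1; // target is in the back half
    else high = mid - 1; // target is in the front half
  }
  return -1; // not found
}

const names = ["Alice", "Mary", "Peter", "Samuel", "Zoe"];
console.log(binarySearch(names, "Samuel")); // 3
```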

Just to give you an idea of how fast logarithmic Big O is: if we have an n of one million, a binary search takes only about 20 operations (log₂ of 1,000,000), which is a big difference from quadratic Big O.

# Conclusion

So at the end of the day, you can see how important Big O can be. Do you necessarily need it to be able to code your project at first? No. But if you want to refactor, make it scalable, or design something great from scratch, knowing Big O can be a very useful tool and help make your app run quicker and smoother.