Big-O notation is commonly used to describe the time and space complexity of algorithms. While not applicable to everything, it is very useful for comparing different implementations of algorithms that achieve the same result, for example sorting algorithms.

It is written as O(f(x)), with f(x) being a function that describes the complexity. For example, in O(log n), n is the input size of, e.g., an array, and log n is the mathematical function that maps that input size to a number describing the complexity. You can also plot these functions on a graph:

(Graph showing the growth of n, log n, and n².)

The graph shows three different functions. Ideally, you want everything to happen in constant time (O(1)), but since that's not possible for a lot of applications, the general goal is to find an algorithm with a low gradient. Or in simpler words: an algorithm whose Big-O is low.

In the following, you will see a few common examples describing the time complexity of algorithms:

//Example #1
function firstEl(array) {
    return array[0];
}

This function has a time complexity of O(1), because an array lookup doesn't depend on the array size, meaning it will always take one unit of time, no matter how big the array is.

//Example #2
function printArrayAndReturnLastEl(array) {
    for (let i = 0; i < array.length; i++) {
        console.log(array[i]);
    }
    return array[array.length - 1];
}

The second example has a time complexity of O(n), because the runtime of the loop depends on the input size. You might wonder why it isn't written as O(n+1), since we defined the array lookup as taking one unit of time. That's because, for simplicity's sake, constants are ignoreded in the notation. Once the input size reaches the hundreds, a single extra lookup really doesn't matter.
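To illustrate the same idea (with a hypothetical helper), a function that traverses the array twice does roughly 2n units of work, but the constant 2 is dropped, so it still counts as O(n):

```javascript
// Hypothetical example: two sequential passes over the array.
// The work is roughly 2n, but constants are dropped, so it is still O(n).
function sumAndMax(array) {
    let sum = 0;
    for (let i = 0; i < array.length; i++) {
        sum += array[i]; // first pass: n steps
    }
    let max = -Infinity;
    for (let i = 0; i < array.length; i++) {
        if (array[i] > max) max = array[i]; // second pass: another n steps
    }
    return { sum, max };
}
```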

There are two general rules to follow:

  1. Constants should be ignored
    • 5n → O(n)
  2. Higher-order terms are prioritized over lower-order terms
    • O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(2ⁿ) < O(n!)
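To get a feel for this ordering, you can print a few sample values; for large n, the higher-order terms quickly dwarf the lower ones (a quick sketch):

```javascript
// Print how each complexity class grows as n increases.
// By n = 1024, n² is already over a million, while log n is only 10.
for (const n of [16, 256, 1024]) {
    console.log(`n=${n}: log n=${Math.log2(n)}, n log n=${n * Math.log2(n)}, n²=${n ** 2}`);
}
```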

Now that you know these rules, try to think about the Big-O of the following function:

function bar(array) {
    for (let i = 0; i < array.length; i++) {
        array[i] += 1;
    }

    for (let i = 0; i < array.length; i++) {
        for (let j = 0; j < array.length; j++) {
            if (array[i] + array[j] === 10) return "Found it.";
        }
    }
}

Once you've found it, you can scroll down.

O(n²)

That's because the first loop is n, and then there are two loops nested inside each other, which gives n². The total is n + n², but since n² is of a higher order, the n is dropped, leaving O(n²).

Most of the time, you also assume the worst-case scenario of your algorithm.

function foo(array) {
    for (let i = 0; i < array.length; i++) {
        if (array[i] === 5) return "Found the five";
    }
}

If this function gets a 5 as the first element, it only takes O(1), but if not, it loops over every element until it finds a 5. Since we assume the worst case, this is also defined as O(n).

The rules are the same for space complexity, but instead of looking at the runtime, it deals with the memory requirements. For example, how often items need to be copied around (which should be avoided!).
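As a sketch (with hypothetical helper names), the two functions below do the same job but differ in space complexity: the first allocates a whole new array, the second mutates its input in place.

```javascript
// O(n) extra space: builds a new array of the same size as the input.
function doubledCopy(array) {
    const result = [];
    for (let i = 0; i < array.length; i++) {
        result.push(array[i] * 2);
    }
    return result;
}

// O(1) extra space: reuses the input array; only the loop counter is extra.
function doubleInPlace(array) {
    for (let i = 0; i < array.length; i++) {
        array[i] *= 2;
    }
    return array;
}
```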

I hope you learned something new ;)