Big O Notation


Big O notation is a way to describe how the efficiency of an algorithm or program changes as the input size grows. Imagine you're solving a problem, like finding a specific item in a list. Big O notation helps you understand how the time or space it takes to solve this problem increases when the list gets larger. It is written as "O(something)," where "something" tells you how the problem's complexity grows with the input size. The most common complexities are constant, linear, quadratic, and logarithmic.
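
To see what "grows with the input size" means in practice, the short sketch below simply evaluates the four common growth functions for a few values of n. It tabulates the mathematical growth rates themselves, not measurements of any real program:

```python
import math

# How many "steps" each complexity class implies as n grows.
for n in (10, 100, 1000):
    print(f"n={n:>5}: O(1)=1  O(log n)={math.log2(n):.1f}  "
          f"O(n)={n}  O(n^2)={n**2}")
```

Running this shows the constant column never moves, the logarithmic column barely moves, the linear column keeps pace with n, and the quadratic column explodes.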

O(1) means the time or space it takes to solve the problem doesn't depend on the input: no matter how big the input is, the cost stays the same. An example of O(1) is ringing the school bell, where the action occurs only once regardless of the number of pupils. In contrast, O(n) means the time or space required increases linearly with the input size. For example, queuing for the school canteen demonstrates O(n): the total serving time doubles if the student population doubles.
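
Here is a minimal Python sketch of the two analogies above; the function names and the pupil list are made up for illustration:

```python
def ring_bell(pupils):
    """O(1): one action, no matter how many pupils there are."""
    print("Ring!")  # runs exactly once regardless of len(pupils)

def serve_canteen(pupils):
    """O(n): one serving per pupil, so the work grows linearly."""
    for pupil in pupils:
        print(f"Serving {pupil}")  # runs len(pupils) times

pupils = ["Asha", "Ben", "Chen", "Dana"]
ring_bell(pupils)      # 1 step whether there are 4 pupils or 4,000
serve_canteen(pupils)  # 4 steps here; doubles if the class doubles
```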

When analyzing efficiencies, we also discuss other complexities, such as O(n^2), which indicates that the time required quadruples when the input size doubles; the classic example is classroom introductions, where every pupil greets every other pupil. Moreover, O(log n) is very efficient, as a growing input size does not substantially affect the time an algorithm takes, similar to how quickly gossip spreads through a school.
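
The sketch below mirrors both growth rates: a nested loop for the classroom introductions (O(n^2)), and binary search, the textbook O(log n) algorithm, standing in for the gossip analogy. All names and data are illustrative:

```python
def introduce_everyone(pupils):
    """O(n^2): every pupil greets every other pupil, so doubling
    the class size roughly quadruples the number of greetings."""
    for a in pupils:
        for b in pupils:
            if a != b:
                print(f"{a} meets {b}")

def binary_search(sorted_items, target):
    """O(log n): each comparison halves the remaining search range,
    so even huge inputs need only a handful of steps."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1  # target is not in the list

introduce_everyone(["Asha", "Ben", "Chen"])       # 3 pupils -> 6 greetings
print(binary_search([1, 3, 5, 7, 9, 11, 13], 7))  # -> 3 (index of 7)
```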

Finally, it's crucial to consider the best case, average case, and worst case scenarios when evaluating time and space complexity. The best-case scenario refers to optimal performance requiring the least resources, while the average case represents expected performance across various inputs. Conversely, the worst-case scenario denotes the least efficient performance, requiring the most resources or time.
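
Linear search, the "find an item in a list" problem from the introduction, shows all three cases in a few lines; the list contents here are hypothetical:

```python
def linear_search(items, target):
    """Returns the index of target, or -1 if it is absent."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

items = ["pen", "book", "ruler", "glue"]
linear_search(items, "pen")     # best case: found at index 0, O(1)
linear_search(items, "ruler")   # average case: about n/2 comparisons
linear_search(items, "eraser")  # worst case: scans all n items, O(n)
```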
