O-notation (asymptotic notation)
It should be clear that trying to express runtimes in terms of an exact number of clock cycles is unwieldy:
· It gives us some potentially very nasty formulae for runtimes
· More importantly, it is architecture dependent
So instead, we try to find a shorthand measure of runtime, whereby we place an upper bound on the runtime as the input size grows very large (or tends to infinity, to use the mathematical expression).
For example, considering 34 + 12n + 23n², the first two terms become negligibly small compared to 23n² as n becomes very large.
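To see this dominance concretely, here is a small Python check (the function name f is our own) comparing the full expression against its leading term:

```python
# Ratio of the full runtime expression to its dominant term.
# As n grows, the ratio approaches 1: the lower-order terms vanish.
def f(n):
    return 34 + 12 * n + 23 * n ** 2

for n in (10, 100, 10_000):
    print(n, f(n) / (23 * n ** 2))
```

Running this shows the ratio shrinking towards 1 as n grows, which is exactly why we feel justified in keeping only the 23n² term.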
O-notation, also known as
asymptotic notation, is a way of expressing the time complexity or space
complexity of an algorithm. It gives us a rough idea of how the runtime or
memory usage of an algorithm scales with the size of the input.
The most common form is big O notation, which gives an upper bound on the time or space complexity of an algorithm. For example, if an algorithm has a time complexity of O(n), the runtime grows at most linearly with the size of the input. This is only an upper bound: the actual runtime of the algorithm may grow more slowly.
There are several other variations of O-notation, including omega notation (Ω) and theta notation (Θ), which give lower and tight bounds, respectively, on the time or space complexity of an algorithm.
O-notation is useful because it allows us to compare the time or space complexity of different algorithms while ignoring constant factors. For example, an algorithm with a time complexity of O(n²) will generally be slower than an algorithm with a time complexity of O(n) for large enough inputs, whatever the hidden constants are.
It is important to note that
O-notation only gives a rough estimate of the time or space complexity of an
algorithm, and the actual runtime or memory usage of an algorithm may differ
from the O-notation estimate.
To enable a more effective and concise way to compare and discuss runtimes, we use the following intuition.
· A function f(n) is said to be order of g(n), or O(g(n)), if for all values of n greater than c, f(n) < k * g(n)
o c and k are constants
· So given our previous example, the worst case runtime 34 + 12n + 23n² can be called O(n²), since above a large enough value of n (taking k = 24), 24n² is always greater
· Informally, we just pick the term which grows fastest relative to n, as this will always become dominant when n gets large enough.
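The claim above can be checked directly in Python. The constants k = 24 and c = 14 below are one valid choice for the definition, not the only one:

```python
# f(n) = 34 + 12n + 23n^2 is O(n^2): with k = 24, f(n) < k * n^2
# for every n above some constant c (here c = 14).
def f(n):
    return 34 + 12 * n + 23 * n ** 2

k, c = 24, 14
print(all(f(n) < k * n ** 2 for n in range(c + 1, 10_000)))  # True
print(f(c) < k * c ** 2)  # False: the bound only holds above c
```

Any larger k would also work, with a correspondingly smaller c; the definition only asks that some such pair of constants exists.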
Exercise
Think of an algorithm to check if two strings are anagrams of one another. Are some methods faster than others? Use O-notation to try to prove this.
One possible algorithm to check if two strings are anagrams of one another is as follows:
1. Sort the characters in each string.
2. Compare the sorted strings to see if they are equal.
This algorithm has a time complexity of O(n log n), where n is the length of the strings. This is because sorting the strings takes O(n log n), and comparing the sorted strings takes O(n).
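A minimal Python sketch of this sort-and-compare approach (the function name is our own):

```python
def is_anagram_sorted(a: str, b: str) -> bool:
    # sorted() is O(n log n); the list comparison is O(n).
    return sorted(a) == sorted(b)

print(is_anagram_sorted("listen", "silent"))  # True
print(is_anagram_sorted("hello", "world"))    # False
```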
Another possible algorithm is as follows:
1. Create a frequency map for each string, which counts the number of occurrences of each character in the string.
2. Compare the frequency maps to see if they are equal.
This algorithm has a time complexity of O(n), where n is the length of the strings. This is because creating the frequency maps is a single O(n) pass over each string (assuming constant-time map lookups), and comparing the maps takes at most O(n).
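A Python sketch of the frequency-map approach, using the standard library's collections.Counter (the function name is our own):

```python
from collections import Counter

def is_anagram_counts(a: str, b: str) -> bool:
    # Building each Counter is a single O(n) pass over the string;
    # comparing the two maps touches at most O(n) distinct keys.
    return Counter(a) == Counter(b)

print(is_anagram_counts("listen", "silent"))   # True
print(is_anagram_counts("listen", "listens"))  # False
```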
Therefore, the
second algorithm is faster than the first algorithm, in terms of time
complexity. However, the actual runtime of the algorithms may differ depending
on the specific implementation and the input data.
