Discussion
How hard is it to solve the following three problems?
- Given three integers a, b and c, find out if a + b = c.
- Given a set of integers S = {a, b, c, …} and an integer k, find out if all elements of S add up to k.
- Given a set of integers S = {a, b, c, …} and an integer k, find out if any subset of S adds up to k.
Can you think of a way of solving each problem?
Are some solutions likely to be faster than others? How do
we define ‘fast’?
The first problem is very simple to solve and can be done in
O(1) time, as it only requires you to check if a+b equals c.
The second problem can also be easily solved in O(n) time,
where n is the number of elements in the set S. You can simply iterate through
the elements of the set and add them up, then check if the sum equals k.
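The first two checks can be sketched in a few lines of Python (the function names here are illustrative, not from the original):

```python
def sums_to(a, b, c):
    """Problem 1: O(1) -- a single addition and comparison."""
    return a + b == c

def all_sum_to(s, k):
    """Problem 2: O(n) -- one pass over the n elements of the set."""
    return sum(s) == k

print(sums_to(2, 3, 5))          # True
print(all_sum_to({1, 2, 4}, 7))  # True
```

Note that `sum(s)` hides the loop, but it still visits every element once, so the work is linear in the size of S.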
The third problem is more challenging and is known as the
"subset sum problem." One way to solve it is to use a brute-force
approach, where you enumerate all possible subsets of S and check if any of
them add up to k. This solution would have a time complexity of O(2^n), where n
is the number of elements in the set S.
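A brute-force enumeration can be sketched like this (a minimal sketch; by convention it also counts the empty subset, which sums to 0):

```python
from itertools import chain, combinations

def subset_sum_bruteforce(s, k):
    """Problem 3: try every subset -- O(2^n) subsets in the worst case."""
    items = list(s)
    # Generate all subsets, from the empty set up to the full set.
    subsets = chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))
    return any(sum(sub) == k for sub in subsets)

print(subset_sum_bruteforce({3, 34, 4, 12, 5, 2}, 9))  # True (4 + 5)
```

Even for a modest n = 40, this means checking up to 2^40 (about a trillion) subsets, which is why the exponential bound matters in practice.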
There may be faster solutions to the third problem depending
on the specific constraints of the problem and the nature of the data. For
example, if the elements of the set S are all positive, you can use a dynamic
programming approach to solve the problem in O(n*k) time, where k is the target
sum.
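The dynamic-programming idea is to track which sums from 0 to k are reachable as we consider each element in turn. A sketch, assuming the elements are non-negative integers:

```python
def subset_sum_dp(s, k):
    """Pseudo-polynomial DP: O(n*k) time, O(k) space."""
    # reachable[t] is True if some subset of the elements seen so far sums to t.
    reachable = [False] * (k + 1)
    reachable[0] = True  # the empty subset sums to 0
    for x in s:
        # Iterate downwards so each element is used at most once.
        for t in range(k, x - 1, -1):
            if reachable[t - x]:
                reachable[t] = True
    return reachable[k]

print(subset_sum_dp([3, 34, 4, 12, 5, 2], 9))  # True
```

Note that O(n*k) is "pseudo-polynomial": it is polynomial in the numeric value of k, not in the number of bits needed to write k down, so it is only fast when the target sum is small.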
In general, we define an algorithm to be "fast" if
it can solve a problem in a reasonable amount of time, given the size of the
input data. The time complexity of an algorithm gives us a way to measure how
the running time of the algorithm scales with the size of the input data. An
algorithm with a lower time complexity will generally be faster than one with a
higher time complexity, for large inputs.
Runtimes
The runtime of our program depends on two things:
● The number of data items to be processed
  ○ In the previous example, the more items there are, the longer the program loops
  ○ Runtimes must be expressed as a function of the input size (usually denoted by n)
● The values of the data items themselves
  ○ Note that some sections of code only execute when data items meet certain conditions
  ○ Also, we may terminate early if we’ve found the answer without processing all the data
  ○ Consequently, we tend to analyse the worst-case runtime for any n data items
So the worst-case runtime of the program on the previous page might be something like 34 + 12n + 23n² (measured in clock cycles).
The runtime of an
algorithm is the amount of time it takes for the algorithm to complete its
task, given a specific input size. The time complexity of an algorithm is a
measure of how the runtime of the algorithm scales with the size of the input.
For example,
consider an algorithm that sorts a list of numbers. The time complexity of the
algorithm tells us how the running time of the algorithm increases as the size
of the input list grows. If the time complexity of the algorithm is O(n), this
means that the runtime of the algorithm increases linearly with the size of the
input. For example, if the input size doubles, the runtime of the algorithm
will also approximately double.
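One toy way to see this scaling, without the noise of wall-clock timing, is to count loop iterations directly as the input doubles (an illustrative sketch, not from the original):

```python
def count_ops_linear(n):
    """Count the loop iterations of a simple linear-time sum over n items."""
    ops = 0
    total = 0
    for x in range(n):
        total += x
        ops += 1  # one unit of work per item
    return ops

print(count_ops_linear(1000))  # 1000
print(count_ops_linear(2000))  # 2000 -- doubling n doubles the work
```

For an O(n²) algorithm the same experiment would show the count quadrupling each time n doubles.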
In general, we want
algorithms to have low time complexity, so that they can solve problems
efficiently even for large inputs. Algorithms with higher time complexity may
take too long to complete for very large inputs, making them impractical to use
in many situations.
