Friday, 27 January 2023

Polynomial Time


The working definition that computer scientists use is that an efficient algorithm must terminate within polynomial time. There are exceptions to this rule, which we will discuss later in the course, but they are quite rare.

In mathematics, a polynomial function is one of the form:

a + bn + cn^2 + dn^3 + …    (where n is a variable and a, b, c, d, … are all constants)

That is, the sum of a number of terms, each consisting of some power of n multiplied by a constant. More compactly, a polynomial function is O(n^k), where k is a constant.
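As a quick sketch (the function name here is my own), a polynomial given by its list of coefficients can be evaluated directly as this sum of terms:

```python
def poly(n, coeffs):
    """Evaluate a + b*n + c*n^2 + ... for the coefficient list [a, b, c, ...]."""
    return sum(coef * n**k for k, coef in enumerate(coeffs))

# 3 + 2n + n^2 evaluated at n = 4: 3 + 8 + 16
print(poly(4, [3, 2, 1]))  # → 27
```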

In computer science, polynomial time is a measure of the efficiency of an algorithm, where the running time of the algorithm is a polynomial function of the size of the input.

A polynomial function is a mathematical function in which every exponent of the variable is a non-negative integer. For example, the polynomial function x^2 + 2x + 1 has a degree of 2, because the highest exponent of x is 2.

An algorithm with polynomial time complexity is generally considered to be efficient, as its running time grows at a manageable rate as the input gets large. However, among polynomial algorithms, those with higher degree will generally be slower than those with lower degree for large input sizes.

An example of an algorithm with polynomial time complexity is insertion sort, which has a time complexity of O(n^2), where n is the number of elements to be sorted. This means that the running time of the algorithm is a polynomial function of the input size, with a degree of 2. (Note that the O(n^2 2^n) bound often quoted for the travelling salesman problem is not polynomial, because of the 2^n factor.)
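A minimal sketch of such a degree-2 algorithm: insertion sort performs at most about n^2/2 comparisons in the worst case, so its running time is O(n^2).

```python
def insertion_sort(xs):
    """Sort a list in place. The nested loops give a worst case of O(n^2)."""
    for i in range(1, len(xs)):
        key = xs[i]
        j = i - 1
        # Shift larger elements right until key's position is found.
        while j >= 0 and xs[j] > key:
            xs[j + 1] = xs[j]
            j -= 1
        xs[j + 1] = key
    return xs

print(insertion_sort([5, 2, 4, 1, 3]))  # → [1, 2, 3, 4, 5]
```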

In general, polynomial time algorithms are well suited to problems where the input size may be large, since their running time grows at a manageable rate. For very small inputs, however, an asymptotically slower algorithm with smaller constant factors can sometimes be faster in practice.

So algorithms with time complexities such as O(n), O(n^2), O(log n), O(n log n), O(n^3) etc. are all considered efficient.

In fact, this is a very generous definition, since it means that O(n^10023) is considered theoretically efficient; but since running times like this tend not to appear in practice, we don’t worry about that too much.

Examples of non-polynomial upper bounds are O(2^n), O(n!), O(n^n) and, strictly speaking, O(n^(log n)), since the power to which n is raised grows with the input size.
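To see why this distinction matters, a quick sketch comparing a cubic (polynomial) bound with an exponential one as n grows:

```python
# n^3 grows polynomially; 2^n overtakes it dramatically once n is moderate.
for n in [10, 20, 40]:
    print(f"n={n}: n^3={n**3}, 2^n={2**n}")
```

At n = 10 the two are comparable (1000 vs 1024), but by n = 40 the exponential term is over ten million times larger.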

Algorithms with runtimes having no polynomial upper bound are to be treated with caution.

The first problem (checking whether a + b = c) had an efficient solution, with a time complexity of O(1). This means that the running time of the algorithm does not depend on the size of the input: it always takes constant time.
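A minimal sketch of this constant-time check (the function name is my own):

```python
def sums_to(a, b, c):
    """O(1): a single addition and comparison, regardless of input values."""
    return a + b == c

print(sums_to(2, 3, 5))  # → True
print(sums_to(2, 3, 6))  # → False
```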

The second problem (checking if all elements of a set S add up to k) had an efficient solution, with a time complexity of O(n), where n is the number of elements in the set S. This means that the running time of the algorithm increases linearly with the size of the input.
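A sketch of the linear-time version: one pass over the n elements is all that is needed.

```python
def set_sums_to(S, k):
    """O(n): sum all n elements of S once, then compare with k."""
    return sum(S) == k

print(set_sums_to({1, 2, 3}, 6))  # → True
```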

The third problem (checking if any subset of a set S adds up to k) did not have an efficient solution, as it requires checking all possible subsets of S, which has a time complexity of O(2^n), where n is the number of elements in the set S. This means that the running time of the algorithm increases exponentially with the size of the input, making it impractical for large inputs.
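The brute force approach for the third problem can be sketched as follows; it enumerates all 2^n subsets (including the empty one), which is why the running time is exponential.

```python
from itertools import combinations

def subset_sums_to(S, k):
    """Brute force: try all 2^n subsets of S, so O(2^n) in the worst case."""
    xs = list(S)
    for r in range(len(xs) + 1):          # subset sizes 0..n
        for subset in combinations(xs, r):
            if sum(subset) == k:
                return True
    return False

print(subset_sums_to({3, 34, 4, 12, 5, 2}, 9))  # → True (4 + 5 = 9)
```

Even for n = 40 this already means checking around a trillion subsets, which is why the approach is impractical for large inputs.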
