Complexity in Computer Science and its Varied Types

What is Complexity?

Complexity is used to characterize something with many parts where those parts interact with each other in multiple ways.
or
Complexity is that property of a model which makes it difficult to formulate its overall behavior in a given language, even when given reasonably complete information about its atomic components and their inter-relations.

Varied meanings of complexity

In several scientific fields, "complexity" has a precise meaning:

  • In computational complexity theory, the amount of resources required for the execution of algorithms is studied. The most popular types of computational complexity are:

Time Complexity

Space Complexity

Time complexity:

    - How much time it takes to compute
    - Measured by a function T(N), where N is the size of the input and T(N) is the time complexity function
    - Order of magnitude: how rapidly T(N) grows as N grows
    - For example: O(N), O(log N), O(N²), O(2^N)
      (a small sketch comparing these growth rates follows the list of varied meanings below)

Space complexity:

    - How much memory it takes to compute
    - Measured by a function S(N)
  • In algorithmic information theory, the Kolmogorov complexity (also called descriptive complexity, algorithmic complexity or algorithmic entropy) of a string is the length of the shortest binary program that outputs that string. Different kinds of Kolmogorov complexity are studied:

    - Uniform complexity
    - Prefix complexity
    - Monotone complexity
    - Time-bounded Kolmogorov complexity
    - Space-bounded Kolmogorov complexity
    
  • In information processing, complexity is a measure of the total number of properties transmitted by an object and detected by an observer. Such a collection of properties is often referred to as a state.

  • In physical systems, complexity is a measure of the probability of the state vector of the system. This should not be confused with entropy; it is a distinct mathematical measure, one in which two distinct states are never conflated and considered equal, as is done for the notion of entropy in statistical mechanics.

  • In mathematics, Krohn–Rhodes complexity is an important topic in the study of finite semigroups and automata.

  • In software engineering, programming complexity is a measure of the interactions of the various elements of the software.
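Returning to the growth rates listed under time complexity above, the following Python sketch (an illustration only, not tied to any particular algorithm; the step-count functions are hypothetical) tabulates how quickly each order of magnitude grows, even for small N:

    import math

    # Hypothetical step-count functions, one per growth rate named above.
    growth_rates = {
        "O(log N)": lambda n: math.log2(n),
        "O(N)":     lambda n: n,
        "O(N^2)":   lambda n: n * n,
        "O(2^N)":   lambda n: 2 ** n,
    }

    for n in (10, 20, 30):
        print(n, {name: f(n) for name, f in growth_rates.items()})

Even at N = 30 the exponential count dwarfs the polynomial ones, which is why the order of magnitude, rather than the exact formula for T(N), is what matters.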

Complexity of an Algorithm

Complexity of an algorithm is a measure of the amount of time and/or space required by an algorithm for an input of a given size (n).

What affects the run time of an algorithm?

(a) the computer used (the hardware platform)
(b) the representation of abstract data types (ADTs)
(c) the efficiency of the compiler
(d) the competence of the implementer (programming skills)
(e) the complexity of the underlying algorithm
(f) the size of the input

Time for an algorithm to run, t(n)

The running time is a function of the input. However, we will characterize it by the size of the input, n. We will try to estimate the WORST CASE, sometimes the BEST CASE, and very rarely the AVERAGE CASE.

What do we measure?

In analyzing an algorithm, rather than a piece of code, we will try to predict the number of times "the principal activity" of that algorithm is performed. This describes the approach we take to studying the performance of an algorithm.

For example, if we are analyzing a sorting algorithm we might count the number of comparisons performed, and if it is an algorithm to find some optimal solution, the number of times it evaluates a solution. If it is a graph coloring algorithm we might count the number of times we check that a colored node is compatible with its neighbors.
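As a sketch of this counting approach (hypothetical Python code for illustration; the function name is not from any library), here is a selection sort that counts its principal activity, the element comparisons. For an input of size n it always performs n*(n-1)/2 of them, i.e. O(n²):

    def selection_sort_counting(items):
        # Selection sort that counts its "principal activity": comparisons.
        a = list(items)
        comparisons = 0
        n = len(a)
        for i in range(n - 1):
            smallest = i
            for j in range(i + 1, n):
                comparisons += 1          # the principal activity
                if a[j] < a[smallest]:
                    smallest = j
            a[i], a[smallest] = a[smallest], a[i]
        return a, comparisons

For example, selection_sort_counting([5, 2, 8, 1]) performs 4*3/2 = 6 comparisons regardless of the input order.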

Worst Case ... the worst-case runtime complexity of the algorithm is the function defined by the maximum number of steps taken on any instance of size n.

Best Case ... the best-case runtime complexity of the algorithm is the function defined by the minimum number of steps taken on any instance of size n.

Average Case ... the average-case runtime complexity of the algorithm is the function defined by the average number of steps taken over all instances of size n.

Example. Let us consider an algorithm of sequential searching in an array of size n.

Its worst-case runtime complexity is O(n)
Its best-case runtime complexity is O(1)
Its average case runtime complexity is O(n/2)=O(n)
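A minimal Python sketch of this sequential search (the function name and the returned step count are just for illustration) makes the three cases concrete:

    def sequential_search(a, target):
        # Scan left to right, counting how many elements are inspected.
        steps = 0
        for i, value in enumerate(a):
            steps += 1
            if value == target:
                return i, steps   # best case: target is first, 1 step -> O(1)
        return -1, steps          # worst case: target absent, n steps -> O(n)

On average, a successful search inspects about n/2 elements, which is still O(n).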

The Growth rate of t(n)

Suppose the worst case time for algorithm A is

        t(n) = 60*n*n + 5*n + 1

for input of size n.

Different machine and compiler combinations change the running time only by constant factors, so it is safe to treat t(n) as if it were

        t(n) = n*n + 5*n/60 + 1/60

That is, we ignore the coefficient applied to the most significant (dominating) term in t(n). This only affects the "units" in which we measure worst-case time; it does not affect how the worst-case time grows with n (the input size). Under these assumptions we can say ...

                   "t(n) grows like n*n as n increases"
                                          or 
                                t(n) = O(n*n)

which reads "t(n) is of the order n squared" or as "t(n) is big-oh n squared"
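A quick numeric check (a throwaway Python sketch, not part of the algorithm itself) shows why the n*n term dominates: the ratio t(n)/(n*n) settles towards the constant 60 as n grows, which is exactly what t(n) = O(n*n) expresses.

    def t(n):
        # worst-case time for algorithm A, as given above
        return 60 * n * n + 5 * n + 1

    for n in (10, 100, 1000, 10000):
        print(n, t(n) / (n * n))   # approaches 60 as n grows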

Applications of Complexity

Computational complexity theory is the study of the complexity of problems—that is, the difficulty of solving them. Problems can be classified by complexity class according to the time it takes for an algorithm—usually a computer program—to solve them as a function of the problem size. Some problems are difficult to solve, while others are easy.

For example, some difficult problems need algorithms that take an amount of time exponential in the size of the problem to solve. Take the traveling salesman problem: it can be solved in time O(n^2 2^n) (where n is the size of the network to visit, say the number of cities the traveling salesman must visit exactly once). As the size of the network of cities grows, the time needed to find the route grows (more than) exponentially.
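As a concrete illustration of that O(n^2 2^n) bound, here is a sketch of the Held-Karp dynamic programming solution to the traveling salesman problem (assuming Python and a dist matrix of pairwise distances; the function name is just for illustration). It fills a table indexed by (subset of visited cities, last city), so it touches about 2^n * n states and does O(n) work per state:

    from itertools import combinations

    def held_karp(dist):
        # Exact TSP tour cost in O(n^2 * 2^n) time: dp[(mask, last)] is the
        # cheapest way to leave city 0, visit exactly the cities in `mask`,
        # and stop at city `last`.
        n = len(dist)
        dp = {(1 << k, k): dist[0][k] for k in range(1, n)}
        for size in range(2, n):
            for subset in combinations(range(1, n), size):
                mask = sum(1 << k for k in subset)
                for last in subset:
                    dp[(mask, last)] = min(
                        dp[(mask ^ (1 << last), k)] + dist[k][last]
                        for k in subset if k != last
                    )
        full = (1 << n) - 2          # every city except the starting city 0
        return min(dp[(full, k)] + dist[k][0] for k in range(1, n))

Even this exact algorithm becomes impractical beyond a few dozen cities, since adding one more city roughly doubles the running time.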

Even though a problem may be computationally solvable in principle, in actual practice it may not be that simple. Such problems might require large amounts of time or an inordinate amount of space. Computational complexity may be approached from many different angles.

Computational complexity can be investigated on the basis of time, memory or other resources used to solve the problem. Time and space are two of the most important and popular considerations when problems of complexity are analyzed.
