Computational Complexity Review
- Algorithms are fundamental to Machine Learning, so we care about how algorithms behave and how their resource requirements grow with input size.
- Use Computational / Algorithmic Complexity to express how an algorithm’s resource requirements scale with the problem size.
- Use x to denote problem size.
- Suppose f(x) is the running time of the algorithm expressed as a function of the problem size.
- Use O(f(x)) to denote the upper bound (worst case) running time of the algorithm.
- Use Ω(f(x)) to denote the lower bound (best case) running time of the algorithm.
- Use Θ(f(x)) to denote the tight bound running time of the algorithm.
- Θ(f(x)) exists iff the running time is both O(f(x)) and Ω(f(x)).
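The three bounds have standard formal definitions; as a sketch (these definitions are standard but not stated in the notes themselves):

```latex
% f is bounded above by g, up to a constant factor, for large enough x:
f(x) = O(g(x))      \iff \exists\, c > 0,\ x_0 : f(x) \le c\, g(x) \ \ \forall x \ge x_0
% f is bounded below by g, up to a constant factor, for large enough x:
f(x) = \Omega(g(x)) \iff \exists\, c > 0,\ x_0 : f(x) \ge c\, g(x) \ \ \forall x \ge x_0
% Tight bound: both hold simultaneously:
f(x) = \Theta(g(x)) \iff f(x) = O(g(x)) \text{ and } f(x) = \Omega(g(x))
```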
- These bounds are asymptotic (they describe the behaviour as x → ∞).
- Remove constants and lower-order terms; only the dominant component of the bound matters.
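To see why only the dominant term matters, here is a minimal sketch using a hypothetical step-count function f(x) = 3x² + 5x + 7 (the coefficients are made up for illustration):

```python
# Hypothetical step-count function: f(x) = 3x^2 + 5x + 7 steps
# for an input of size x (coefficients chosen for illustration).
def steps(x):
    return 3 * x**2 + 5 * x + 7

# As x grows, the x^2 term dominates: f(x) / x^2 -> 3,
# so f(x) is Theta(x^2) once constants and lower-order terms
# are stripped away.
for x in [10, 100, 10_000]:
    print(x, steps(x) / x**2)
```

The ratio converging to the leading constant (3) is exactly why we write Θ(x²) rather than Θ(3x² + 5x + 7).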
- Expressions that grow slowly are preferred
- E.g. Polynomial or smaller
- Growth worse than polynomial is generally considered intractable (e.g., the best known algorithms for NP-Hard problems).
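As a concrete sketch of worse-than-polynomial growth, here is a brute-force solver for subset sum (a classic NP-hard problem, used here only as an example):

```python
from itertools import combinations

# Brute-force subset sum: does any subset of nums sum to target?
# It enumerates all 2^n subsets, so the running time is
# O(2^n * n) -- worse than any polynomial in n.
def subset_sum(nums, target):
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return True
    return False

print(subset_sum([3, 9, 8, 4], 12))  # True (3 + 9 = 12)
print(subset_sum([3, 9, 8, 4], 2))   # False
```

Doubling the list length roughly squares the number of subsets to check, which is why this approach stops being practical very quickly.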
- Algorithms with all kinds of complexities are used in ML
- Asymptotic bounds are useful, but they often don’t give the full picture in practice, especially for the speed of algorithms on smaller inputs.
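This last point can be sketched with two hypothetical cost models (the constants are invented for illustration): algorithm A is O(n²) with a small constant, algorithm B is O(n log n) with a large constant.

```python
import math

# Hypothetical cost models (constants are made up for illustration):
def cost_a(n):
    # A simple quadratic algorithm with low per-step overhead.
    return 2 * n * n

def cost_b(n):
    # A more complex linearithmic algorithm with high overhead.
    return 100 * n * math.log2(n) if n > 1 else 100

# For small n, the asymptotically "worse" algorithm is cheaper:
print(cost_a(16), cost_b(16))        # 512 vs 6400.0
# For large n, the asymptotic bound eventually dominates:
print(cost_a(10_000), cost_b(10_000))
```

This is why real libraries often switch strategies by input size (e.g., sorting routines that fall back to insertion sort on small subarrays): the asymptotic winner is not the practical winner at every n.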