It might seem like a simple question: when we want to compute mathematical constants or functions to a certain precision, like $\pi$, $\sin(x)$, $e^x$, or $\mathrm{erf}(x)$, how much time does it actually take? In computer science, especially in theoretical fields, this question surprisingly often goes unaddressed. Algorithm descriptions frequently gloss over the computational cost of these “calculator functions,” assuming they are trivial. For instance, one might see a step like “Generate two Gaussian random variables $Z$ and $Z'$ and check if $\sin(Z \cdot Z') > 1/\pi$,” without delving into the time required for these operations.
However, from a rigorous perspective, understanding the time complexity is crucial. We naturally expect that these computations should be efficient. A common intuition, as shared by an expert colleague, is that these “calculator functions” likely take time at most polynomial in $t$, where $t$ is the desired number of bits of precision. But what polynomial exactly? And where can we find reliable references to confirm this and understand the specifics?
This quest for clarity can be surprisingly challenging. One might assume that numerical analysis textbooks would readily provide these answers. Yet, often, numerical analysis focuses on achieving fixed precision, such as 32 or 64 bits, rather than arbitrary precision. It seems that the computational time for reaching arbitrary precision, which is essential in symbolic computation and certain theoretical analyses, is less emphasized in standard texts.
However, systems like Maple, with commands like `Digits := 5000; erf(1.0);`, demonstrate that arbitrary-precision calculations are indeed possible and practically implemented. This capability implies that algorithms for arbitrary precision must exist and be reasonably efficient.
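For readers without Maple, the same experiment is easy to reproduce with a free arbitrary-precision library. Here is a minimal sketch using Python's `mpmath` package (my choice of tool, not something implied by the Maple example), computing $\mathrm{erf}(1)$ to 5000 decimal digits:

```python
# Minimal sketch: arbitrary-precision erf(1), analogous to Maple's
#   Digits := 5000; erf(1.0);
# Assumes the third-party mpmath library (pip install mpmath).
from mpmath import mp, erf, nstr

mp.dps = 5000              # working precision: 5000 decimal digits
value = erf(1)             # erf evaluated at that precision
print(nstr(value, 50))     # show only the first 50 digits
```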
Seeking answers, the keyword “unrestricted algorithm” proved to be a breakthrough, leading to the paper “An unrestricted algorithm for the exponential function” by Clenshaw and Olver (1980). This paper dives deep into the time complexity of $e^x$, expressing it in terms of multiple parameters and suggesting a complexity around $\tilde{O}(t^2)$ for constant $|x|$. It’s quite detailed work for what seems like a basic function!
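To make the “polynomial in $t$” intuition concrete, here is a deliberately naive sketch (it is not the Clenshaw–Olver algorithm) that sums the Taylor series of $e^x$ in fixed-point integer arithmetic with $t$ bits after the binary point. For constant $|x|$ one needs roughly $O(t/\log t)$ terms, and each term costs arithmetic on $O(t)$-bit integers, which is where a bound on the order of $t^2$ comes from with schoolbook multiplication:

```python
# Naive illustration only (NOT the Clenshaw-Olver algorithm): sum the Taylor
# series of e^x in fixed-point arithmetic, keeping t bits after the binary
# point. Floor divisions lose up to one ulp per term, so the result can be
# off by O(t) ulps; a careful implementation would carry guard bits.

def exp_fixed_point(x_num: int, x_den: int, t: int) -> int:
    """Return an approximation of floor(2**t * exp(x_num / x_den)).

    Assumes x_num >= 0 and x_den > 0 (no attempt to handle negative x).
    """
    scale = 1 << t          # fixed-point scale factor 2^t
    term = scale            # k = 0 term of the series, scaled by 2^t
    total = 0
    k = 0
    while term != 0:        # stop once the current term underflows
        total += term
        k += 1
        term = term * x_num // (x_den * k)   # next term: multiply by x/k
    return total

# Example: e^1 to 64 fractional bits, compared with the double-precision value.
import math
print(exp_fixed_point(1, 1, 64) / 2**64, math.exp(1))
```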
For the error function, $\mathrm{erf}(x)$, the paper “The functions erf and erfc computed with arbitrary precision” by Chevillard (2009) emerged. This paper appears more accessible and suggests a time complexity around $\tilde{O}(t^{3/2})$. It’s noteworthy that research into the precise time complexity of even fundamental functions like $\mathrm{erf}(x)$ was still active relatively recently.
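Again purely as an illustration (this is not Chevillard's algorithm), the textbook Maclaurin series $\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \sum_{n \ge 0} \frac{(-1)^n x^{2n+1}}{n!\,(2n+1)}$ can be summed with mpmath's arbitrary-precision floats; it converges for all $x$, although a serious implementation switches methods for large $|x|$ and carries out a real error analysis:

```python
# Sketch: erf(x) from its Maclaurin series using mpmath's arbitrary-precision
# floats. NOT Chevillard's algorithm: no rigorous error analysis and no
# special handling of large |x|, where this series becomes inefficient.
from mpmath import mp, mpf, sqrt, pi, fabs, erf

def erf_series(x, digits=50):
    mp.dps = digits + 10                      # a few guard digits (a crude choice)
    x = mpf(x)
    eps = mpf(10) ** (-(digits + 5))
    term = x                                  # n = 0 term of the series: x
    total = mpf(0)
    n = 0
    while fabs(term) > eps:
        total += term
        n += 1
        # ratio of consecutive terms: -x^2 (2n-1) / (n (2n+1))
        term *= -x * x * (2*n - 1) / (n * (2*n + 1))
    return 2 / sqrt(pi) * total

print(erf_series(1, 50))
print(erf(1))   # mpmath's built-in erf, for comparison
```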
One of the key motivations for understanding the time complexity of $\mathrm{erf}(x)$ comes from the need to generate Gaussian random variables with high precision. Specifically, question #5 in the original query asks about the time and random bits needed to generate a discrete random variable $X$ that is very close to a standard Gaussian $Z \sim N(0,1)$. The efficiency of computing $\mathrm{erf}(x)$ directly impacts the efficiency of generating such Gaussian approximations.
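The link is the inverse-CDF method: if $U$ is uniform on $(0,1)$, then $\Phi^{-1}(U) = \sqrt{2}\,\mathrm{erfinv}(2U - 1)$ is a standard Gaussian, so a $t$-bit-accurate sample essentially costs $t$ random bits plus one arbitrary-precision inverse-erf evaluation. A rough sketch follows (illustrative only; it makes no claim about the optimal number of random bits or the exact error guarantee the question asks for):

```python
# Sketch: inverse-CDF sampling of an approximately standard Gaussian variate
# at high precision, via mpmath's erfinv. Illustrative only; the choice of
# working precision and the error analysis are deliberately crude.
import secrets
from mpmath import mp, mpf, sqrt, erfinv

def approx_gaussian(t_bits=128):
    mp.dps = t_bits                               # crude: t_bits decimal digits of precision
    u_int = secrets.randbits(t_bits)              # t_bits uniformly random bits...
    u = (mpf(u_int) + mpf(1)/2) / mpf(2)**t_bits  # ...read as a dyadic uniform in (0,1)
    return sqrt(2) * erfinv(2*u - 1)              # inverse Gaussian CDF applied to u

print(approx_gaussian(128))
```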
While the polynomial time complexity for these functions seems plausible and is often assumed in theoretical computer science, pinpointing the exact polynomial and finding definitive references requires deeper investigation. The works by Clenshaw-Olver and Chevillard provide valuable insights, but the journey to fully understand “What Is The Time” for these fundamental computations to arbitrary precision highlights a gap between practical computation and readily accessible theoretical analysis in this domain.
References
- Clenshaw, C. W., & Olver, F. W. J. (1980). An unrestricted algorithm for the exponential function. SIAM Journal on Numerical Analysis, 17(2), 310-331.
- Chevillard, S. (2009). The functions erf and erfc computed with arbitrary precision. Research report; journal version published as “The functions erf and erfc computed with arbitrary precision and explicit error bounds”, Information and Computation (2012).