Approximating $\sqrt{e}$ Using Everything But $\sqrt{e}$
14.1.2023
A fun way to spend an hour is to approximate some number in a silly way. Say we want to approximate $\sqrt{e}$,
a nice irrational number, but we only know the value of $e^x$ for $x \in [0, 1/2)$. Let's suppose also that we can
only store one value of $e^x$ at a time (so, for instance, we can't just calculate $e^{1/4} \cdot e^{1/4}$).
Maybe the simplest way to do this is to just evaluate $e^x$ for $x = 0.4999999$ or something like that. And while
that would probably get a pretty decent approximation, it's no fun. Instead, we can be a little more creative :)
Notice that

$$\int_0^{1/2} e^x \, dx = e^{1/2} - 1 = \sqrt{e} - 1.$$

So if we can get a decimal approximation of this integral, we can get $\sqrt{e}$. So basically we just need to
calculate the area under the curve $y = e^x$ between $x = 0$ and $x = 1/2$.
So let's approximate the area under this curve with a Monte Carlo method. We know that $e^x < 3$ for $x \in [0, 1/2)$, so let's
confine our view of the curve to a 0.5 by 3 box. Now we can do something fun:
we can just throw darts at our box. And if we're bad
enough at playing darts, we might expect that our darts land randomly and uniformly distributed within our box. That
means that, after we throw a lot of darts, we can count them all up and notice the ratio between those that landed in
the shaded area and the total thrown. In other words, as we throw more and more darts,

$$\frac{\text{shaded}}{\text{total}} \longrightarrow \frac{\sqrt{e} - 1}{0.5 \cdot 3} = \frac{2(\sqrt{e} - 1)}{3}.$$

So then

$$\sqrt{e} \approx \frac{3}{2} \cdot \frac{\text{shaded}}{\text{total}} + 1.$$
So now all we need to do is throw a bunch of darts at the box and count how many land in the shaded region! But, although I'm
bad enough at darts to have a decently uniform distribution on the box, my hand starts cramping after a few million throws. So
we'll just hire a Python script to do all the work for us¹:
import math
import random

SAMPLE = 10000000

def approximate_sqrt_e():
    shaded = 0
    for _ in range(SAMPLE):
        # Throw a dart uniformly into the 0.5-by-3 box.
        x = random.uniform(0.0, 0.5)
        y = random.uniform(0.0, 3.0)
        if y < math.e ** x:
            shaded += 1
    # Box area is 1.5, so sqrt(e) - 1 ≈ 1.5 * shaded / SAMPLE.
    return 3 * shaded / (2 * SAMPLE) + 1
The full code can be found here. With ten million throws, this approximation is usually
accurate to 3 decimal places (not too bad!). This method has a bunch of limitations, like being constrained to floating-point
representation and Python's pseudo-random number generation.
The most popular way of approximating $e$, especially in setting world records
for the longest approximation, is through approximating
the Taylor series expansion $e = \sum_{n=0}^{\infty} \frac{1}{n!}$ with some added optimizations. Even with a quick unoptimized script on my
Raspberry Pi, this method can match the true value of $e$ to as many decimal places as a Python float can store with just the first 20 terms of
the Taylor series and within a few seconds. And indeed we can apply this to $\sqrt{e}$ by either applying a square root algorithm to the previously calculated
$e$ or by calculating $\sqrt{e} = e^{1/2} = \sum_{n=0}^{\infty} \frac{(1/2)^n}{n!}$ directly. Either way, this would converge much, much faster to $\sqrt{e}$ than the nondeterministic
Monte Carlo method above. And so for all practical purposes, it's probably not useful at all to calculate this constant probabilistically, but
who cares, it was fun :)
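For comparison, here's a minimal sketch of that series approach (the function name `sqrt_e_series` is mine; 20 terms is just the cutoff mentioned above):

```python
import math

def sqrt_e_series(terms: int = 20) -> float:
    """Approximate sqrt(e) by summing the first `terms` terms of
    e^(1/2) = sum over n >= 0 of (1/2)^n / n!."""
    total = 0.0
    term = 1.0  # the n = 0 term: (1/2)^0 / 0! = 1
    for n in range(terms):
        total += term
        # Next term: multiply (1/2)^n / n! by (1/2) / (n + 1).
        term *= 0.5 / (n + 1)
    return total

print(sqrt_e_series())    # ≈ 1.6487212707...
print(math.sqrt(math.e))  # agrees to double precision
```

Unlike the dart-throwing estimate, every run gives the same answer, and each extra term shrinks the error by a factor of $\frac{1}{2(n+1)}$, which is why 20 terms already exhaust a Python float.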
Footnotes
- Okay so technically the `random.uniform` function in Python includes both arguments in the range, which
in particular means that it's possible for $1/2$ to be chosen as a value for $x$. But nothing would really change if there were
a (more appropriate) function that excluded its endpoints in the context of this problem.