I was working on a project, got distracted, and somehow ended up making a tutorial on how to implement logBASE in pure Nim. (The link is here)
The end result is this:
from std/math import pow, almostEqual, E

proc ln(n: float64): float64 =
  var next: float64 = 1.0
  while not almostEqual(result, next, 1):
    result = next
    next = result - ((pow(E, result) - n) / pow(E, result))

echo ln(5)
proc log(n, base: float64): float64 =
  var next: float64 = 1.0
  while not almostEqual(result, next, 1):
    result = next
    let top = pow(base, result) - n
    let bottom = pow(base, result) * ln(base)
    next = result - (top / bottom)

echo log(5, 10)
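For context (my own note, not part of the original tutorial): the top/bottom update above is Newton's method applied to f(x) = baseˣ − n, and differentiating baseˣ is what brings the ln(base) factor into the denominator:

```latex
f(x) = b^x - n, \qquad f'(x) = b^x \ln b
x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{b^{x_k} - n}{b^{x_k} \ln b}
```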
I made this while trying to create logarithm functions for kinds of decimals/fractions other than floats, after realizing that std/math just uses C bindings.
I hope this helps someone in some way :)
import std/monotimes
from std/math import pow, almostEqual, E, log2
from std/fenv import epsilon

proc simpleNewton(n: float64): float64 =
  var next: float64 = 1.0
  while not almostEqual(result, next, 1):
    result = next
    next = result - 1 + n/pow(2, result)

proc binarySearch(n: float64): float64 =
  var x = n
  var c = 1.0
  while c > epsilon(float64):
    if x >= 2:
      result += c
      x /= 2
    elif x <= 1/2:
      result -= c
      x *= 2
    else:
      x = x*x
      c /= 2

var t0 = high(int)
template timeit(name, code) =
  block:
    let s = getMonoTime()
    var ans = 0.0
    for i in 1..1000:
      ans += code(i.float)
    let e = getMonoTime()
    let t = e.ticks - s.ticks
    t0 = min(t0, t)
    # echo t
    echo name, ": time=", t, "\t(x", t/t0, ")\tans=", ans

timeit "math.log2 ", log2
timeit "binarySearch", binarySearch
timeit "simpleNewton", simpleNewton
# nim r -d:release play_log.nim
# math.log : time=28050 (x1.0) ans=8529.398004204777
# binarySearch: time=762358 (x27.17853832442068) ans=8529.398004204777
# simpleNewton: time=16559248 (x590.34752228164) ans=8529.398004204781
There is no need to compute pow(E, result) twice.
Yes, that's a small thing that, if fixed, would probably improve efficiency; I just didn't get my thought process there while making the tutorial. Thanks for the feedback :)
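For illustration, here is a sketch of the same Newton iteration with pow(E, result) hoisted into a local so it is evaluated once per step (the name lnHoisted is my own, to avoid clashing with the original ln):

```nim
from std/math import pow, almostEqual, E

proc lnHoisted(n: float64): float64 =
  ## Same fixed-point loop as `ln` above, but pow(E, result)
  ## is computed once per iteration and reused.
  var next: float64 = 1.0
  while not almostEqual(result, next, 1):
    result = next
    let expR = pow(E, result)   # evaluated once, used twice
    next = result - (expR - n) / expR

echo lnHoisted(5)  # ln(5) ≈ 1.6094379124341003
```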
pow should not be used here, let alone inside a while loop. Also, implementations of pow often call log internally.
Could you please explain the first part more? And the second part is inevitable when I can't read or find C source code very well. The main reason I created this was to do things outside of C.
For 2, 4 and 5: the goal of this tutorial is not to calculate logBASE fast; I'd just prefer that it not be slow. I prefer it to be clean, and my method makes more sense to me in general than any of the binary stuff. I can't teach others what I can't wrap my own head around, although I do agree I should eventually learn this kind of stuff. Also, for number 5, afaik that is a log2 proc, not a logBASE, and I'm wondering if it could give the same speed and accuracy for other bases?
First and foremost, thanks for the effort of making tutorials.
I will explain the binary search method in this post.
To get an intuition for how the algorithm works, let's consider only x >= 1 first. I assume readers already know the following equalities.
log2(x) = 1 + log2(x/2) ---- (1)
log2(x) = 0.5 * log2(x^2) ---- (2)
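Both equalities are instances of log2(a*b) = log2(a) + log2(b); here is a quick numeric check against std/math's log2 (my own addition, just for verification):

```nim
from std/math import log2

let x = 3.2
doAssert abs(log2(x) - (1 + log2(x/2))) < 1e-12   # (1)
doAssert abs(log2(x) - 0.5 * log2(x*x)) < 1e-12   # (2)
echo "ok"
```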
The core of the algorithm is how to exploit this recursive structure of log2(x).
The process is:
1. if x >= 2, apply (1): replace x with x/2 and add the current coefficient to the answer
2. otherwise (i.e. 1 <= x < 2), apply (2): replace x with x*x and halve the coefficient
Let's see an example
log2(3.2)
= 1 + log2(1.6) by (1)
= 1 + 0.5 * log2(2.56) by (2)
= 1 + 0.5 * (1 + log2(1.28)) by (1)
= 1 + 0.5 * (1 + 0.5 * log2(1.6384)) by (2)
= 1 + 0.5 * (1 + 0.5 * 0.5 * log2(2.68435456)) by (2)
= 1 + 0.5 * (1 + 0.5 * 0.5 * (1 + log2(1.34217728))) by (1)
= (1 + 0.5 + 0.5^3) + 0.5^3 * log2(1.34217728) expand
= 1.625 + 0.125 * log2(1.34217728) simplify
Look at how x changes: 3.2 -> 1.6 -> 2.56 -> 1.28 -> 1.6384 -> 2.68435456 -> 1.34217728 -> ... We have controlled the value of x to oscillate between 1 and 2. Since 1 <= x < 2, we have 0 <= log2(x) < 1, so we can say that
1.625 + 0.125 * 0 <= log2(3.2) < 1.625 + 0.125 * 1
1.625 <= log2(3.2) < 1.75
And notice that during the above process, each time we apply (2) another coefficient of 0.5 multiplies log2(...), so the width of the bound on log2(3.2) keeps halving. If we want the bound to be smaller than B, we can continue the above process until the coefficient is smaller than B.
Coding the above process:
const B = 0.0001
# This equality holds during the process
# log2(3.2) = y + c * log2(x)
var x = 3.2
var c = 1.0
var y = 0.0
while c > B:
  if x >= 2:
    x /= 2
    y += c
  else:
    x *= x
    c /= 2
echo "so, ", y, " <= log2(3.2) < ", y + B
# so, 1.677978515625 <= log2(3.2) < 1.678078515625
I hope readers can follow up to this point. If you understand the case x >= 1, it is similar for 0 < x < 1. We make use of another equality
log2(x) = -1 + log2(x*2) ---- (3)
The process now becomes:
1. if x <= 0.5, apply (3): replace x with x*2 and subtract c from the answer
2. otherwise (i.e. 0.5 <= x < 1), apply (2): replace x with x*x, so the process keeps x in 0.5 <= x < 1, hence -1 <= log2(x) < 0, and so y - c <= log2(x) < y
Combining both cases and making the bound the smallest possible (i.e. epsilon for floating point), we get
func log2(n: float): float =
  var x = n
  var c = 1.0
  while c >= epsilon(float64):
    if x >= 2:
      x /= 2
      result += c
    elif x <= 0.5:
      x *= 2
      result -= c
    else:
      x *= x
      c /= 2
For other bases, precompute the coefficient, or just use log(n, b) = log2(n)/log2(b):

const inv_log2_10 = 1/log2(10)
func log10(n: float): float = inv_log2_10 * log2(n)
func log(n, b: float): float = log2(n)/log2(b)
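As a quick sanity check with values whose logs are exact (my own addition; the tolerance is because the binary search stops at epsilon):

```nim
from std/fenv import epsilon

func log2(n: float): float =
  # binary-search log2 from above
  var x = n
  var c = 1.0
  while c >= epsilon(float64):
    if x >= 2:
      x /= 2
      result += c
    elif x <= 0.5:
      x *= 2
      result -= c
    else:
      x *= x
      c /= 2

func log(n, b: float): float = log2(n)/log2(b)

doAssert abs(log2(8.0) - 3.0) < 1e-12     # log2(8) = 3
doAssert abs(log2(0.25) + 2.0) < 1e-12    # log2(0.25) = -2
doAssert abs(log(100.0, 10.0) - 2.0) < 1e-12
echo "ok"
```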
It is also possible to modify log2 for other bases, like the following:
func log(n, b: float): float =
  let inv_b = 1/b
  var x = n
  var c = 1.0
  while c >= epsilon(float64):
    if x >= b:
      x *= inv_b
      result += c
    elif x <= inv_b:
      x *= b
      result -= c
    else:
      x *= x
      c /= 2
My feeling is that it may be numerically unstable for some extreme values of the base, but in theory floating-point multiplication does not lose significance (or loses it very slowly), so x should keep enough significance during the whole process, and it should be fine.
Last, the binary search method gains 1 bit of accuracy per iteration; the accuracy is arbitrary, and it is suitable for decimals/fractions (not only floating point). But notice that, due to squaring x, the size of x grows exponentially in a naive implementation.
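To see that growth concretely, here is a small illustration with Nim's std/rationals (my own addition, not from the post): each squaring doubles the number of digits in the numerator and denominator.

```nim
import std/rationals

var x = 3 // 2      # exact fraction 3/2
for i in 1..5:
  x = x * x         # five squarings: (3/2)^32
echo x              # 1853020188851841/4294967296
# 3^32 already has 16 digits; a few more squarings would overflow int64,
# and with big integers the size doubles every iteration.
```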