Hello all,
When compiling (with the development version of Nim) a module which uses the "round" function from the "math" module, more precisely the overload of "round" with two arguments (the second one being the number of digits after the decimal point), I got a deprecation warning. The recommended way to round to some decimal position is now to use "format".
I looked at "math.nim" to better understand the reason for this deprecation: the comment says that the function is not reliable because there is, in general, no way to represent the rounded value exactly as a float. I was aware of this, of course, but I don’t see how using "format" could be better. As I don’t want a string but a float, I would have to convert the exact string representation back to a float, losing precision in the process.
I have done some comparisons to check whether, for some value of "x", I could get a difference between x.round(2), x.formatFloat(ffDecimal, 2).parseFloat() and the expected result (as a float). I failed to find one but, of course, I cannot be sure that no difference will ever exist.
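For reference, a comparison along these lines can be sketched as follows (the proc names are mine; the multiply-round-divide proc reproduces the deprecated behaviour inline so the code compiles without the warning):

```nim
import std/[math, strutils, random]

# Multiply, round, divide -- what the deprecated round(x, places) did.
proc roundPlaces(x: float, places: int): float =
  let mult = pow(10.0, places.float)
  round(x * mult) / mult

# Round by formatting to a decimal string, then parsing back to a float.
proc roundViaString(x: float, places: int): float =
  x.formatFloat(ffDecimal, places).parseFloat()

when isMainModule:
  var rng = initRand(42)
  var mismatches = 0
  for _ in 1 .. 1_000_000:
    let x = rng.rand(1000.0)
    if roundPlaces(x, 2) != roundViaString(x, 2):
      inc mismatches
      echo x, " -> ", roundPlaces(x, 2), " vs ", roundViaString(x, 2)
  echo mismatches, " mismatches in 1_000_000 random values"
```

Note that the two approaches can only disagree when a tie falls differently: Nim's round (like C's) rounds halves away from zero, while the decimal-string route depends on how the exact binary value of x prints at the requested precision.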
So, I would like to know if there is an example, or a theoretical argument, which shows that the way "round" works (multiplying, rounding, then dividing) may give less precise results than formatting and then parsing the string back to a float. Because, to round a float to two digits after the decimal point, for instance, it seems rather overkill to convert to a string (with rounding) and then convert back to a float, when one can simply multiply by 100, round to an integer, then divide by 100.
We decided that this variant of round is almost never what you should use. The stdlib needs to avoid procs that trick you into writing bugs. If you really need it, use this code:
proc myRound*[T: float32|float64](x: T, places: int): T =
  if places == 0:
    result = round(x)
  else:
    var mult = pow(10.0, places.T)
    result = round(x * mult) / mult
(That is the stdlib's implementation.)
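As a sanity check, here is that proc exercised on exactly representable ties (reproduced self-contained; note that Nim's round, like C's, rounds halves away from zero):

```nim
import std/math

proc myRound*[T: float32|float64](x: T, places: int): T =
  if places == 0:
    result = round(x)
  else:
    var mult = pow(10.0, places.T)
    result = round(x * mult) / mult

# 1.25 and 12.5 are exactly representable, so the tie is genuine:
echo myRound(1.25, 1)   # 12.5 rounds away from zero to 13 -> 1.3
echo myRound(2.5, 0)    # 2.5 rounds away from zero -> 3.0
```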
Yes, I agree that this function is not generally what is needed. Most of the time, we want a string, and "format" is what should be used. I didn’t want to discuss the decision; I was just curious to know whether there exist situations where it actually gives a wrong result.
Now, in my case, this is not a big deal. I need rounding only to 0, 1 and 2 decimals, in three places, so I changed the code to be explicit: round(x), round(10 * x) / 10, round(100 * x) / 100.
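The explicit forms above can be checked on a value whose scaled products are exactly representable (my own example value, picked so every intermediate is exact except the tie at one decimal):

```nim
import std/math

let x = 1.25
echo round(x)              # fraction .25 is below the tie -> 1.0
echo round(10 * x) / 10    # 12.5 is a tie, rounds away from zero -> 1.3
echo round(100 * x) / 100  # 125.0 is already an integer -> 1.25
```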