import typetraits
var i = 1
var f = 1.01
echo i.type.name # int
echo f.type.name # float
echo 1.type.name # int
echo 1.01.type.name # float
echo 1 + 1.01 # ok
echo i + f # compile-time error
# Same for:
# var f2: float = i + f
Why isn't it possible to compile this? Is there any rationale for, or description of, this behavior in the manual?
echo i.float + f
echo i + f.int
For 1 + 1.01 it only works because 1 is an integer literal, which can still be typed as a float literal at that point; an int variable like i is already committed to int.
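A minimal sketch of that distinction (variable names are just illustrative):

```nim
var i = 1            # i is committed to int
echo 1 + 1.01        # compiles: the literal 1 is typed as a float here
# echo i + 1.01      # type mismatch: i is an int variable, not a literal
echo i.float + 1.01  # compiles once the conversion is spelled out
```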
Alternatively, a converter makes the conversion implicit again (note that float-to-int conversion truncates, so this silently loses the fractional part):
converter floatToInt(f: float): int = f.int
echo i + f # now compiles: f is implicitly converted to int
b) @def: In C/C++/C#/D we have rules for this: use the "more advanced" data type (which would be float in this case). I don't see how such a default would break anything (not that you implied it would), especially since 1 + 1.01 already works that way. Also, in the case of "var f2: float = ..." the result type was specified explicitly, so there shouldn't be any ambiguity.
I write numerical code daily, and I cannot overstate how much I appreciate the way Nim handles this. Forcing the conversion to be explicit is a huge gain in readability!
I cannot enumerate all the bugs we have had in our C/C++ code because of automatic conversions between integers and floating-point numbers. In the C++ code I'm working on right now there are many gems like the following:
int detectorIdsSize;
int odNumber;
int sizeMPI;
// ...
int step = static_cast<int>(floor(static_cast<double>(detectorIdsSize * odNumber / sizeMPI))) + 1;
(this is just the first result I got from running grep "static_cast.*double.*" *.cpp in the source directory!). Obviously, whoever wrote this code was very frightened of C++'s automatic conversion rules!