Hello,
Something that confuses me when writing game code is using float32 types correctly (so the code compiles) while avoiding too many explicit 'to float32' conversions. Another important point is preventing implicit conversions between float32 and float64 during arithmetic, because as I understand it these have a performance cost. I have a few examples to share that surprised me, and I will ask you to guess which of them require an up-conversion to float.
var a: float32 = 0
a += 1.0
var a = 2'f32
if a <= 2.0:
  echo "true"
# involving consts
const a = 2.0
var b = a / 2'f32
...which by the way is more efficient than writing:
const a = 2'f32
var b = a / 2'f32
type
  Vector2 = object
    x, y: float32

var a = Vector2(x: 2.0, y: 1.0)
proc test(t: float32) = discard
test(2.0)
# however
var a = 2.0
test(a.float32) # requires explicit conversion
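One way to sidestep that explicit conversion, I think, is to give the variable a float32 type up front so the literal adapts at compile time and the call site needs no conversion at all. A minimal sketch (the `test` proc mirrors the one above):

```nim
proc test(t: float32) = discard

# declare as float32 from the start: the float literal
# adapts at compile time, so no runtime conversion is needed
var a: float32 = 2.0
test(a)   # types already match
```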
# this does a roundtrip:
var a = 2.0
var b = 1.0'f32
var c: float32 = a + b / 2
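If the goal is to avoid the roundtrip entirely, one option (a sketch, not the only way) is to keep every operand float32 so the arithmetic never touches float64:

```nim
var a = 2.0'f32
var b = 1.0'f32
# the integer literal 2 adapts to float32, so the whole
# expression stays in float32 and nothing widens to float64
var c: float32 = a + b / 2
```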
Can anything be done to make these rules more intuitive? It seems that integer literals can be coerced to any float type without hidden gotchas, but float literals cannot.
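For what it's worth, in my reading the asymmetry only bites when the expected type is unknown: with an explicit annotation both kinds of literal adapt, but an unannotated float literal is inferred as float (float64). A small check of that understanding:

```nim
var x: float32 = 2     # integer literal adapts to float32
var y: float32 = 2.0   # float literal also adapts when the target type is known
var z = 2.0            # no annotation: inferred as float, i.e. float64
```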
A bonus one:
var a = 1
var b = 2
var c: float32 = a / b
# compiles, but you'd better write:
var d = a.float32 / b.float32
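Note that `/` on integers already returns a float (float64), which is how the float32 assignment above can hide a narrowing; converting the operands first keeps the division itself in float32. A sketch:

```nim
var a = 1
var b = 2
var c = a / b                   # integer `/` yields float, i.e. float64
var d = a.float32 / b.float32   # division performed entirely in float32
```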
Well, it's mostly a matter of avoiding the default float type on your platform (which is float64 on a 64-bit machine). The same thing happens if for some reason you decide you don't want to just use int and instead try to make everything int16, for example.
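To illustrate the int16 analogy (a sketch, assuming the usual defaults): literals adapt to the sized type, but a variable that was inferred as int needs an explicit conversion.

```nim
var i: int16 = 0
i += 1          # integer literal adapts to int16: compiles as-is
var j = 1       # inferred as int, the platform default
i += j.int16    # mixing with an int variable needs an explicit conversion
```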
Why do you want to use float32 and not just float, by the way? It would make your life a lot easier.