I have implemented an alternative Colors library for Arturo, roughly following the path of std/colors, but with support for an alpha channel + conversion helpers (HSL/HSV <-> RGB) + palette-generation functions.
Everything works fine, except when I try to build the project for JS.
My VColor type is defined like this:
type
  VColor* = distinct int
and then I define a complete list of colors as constants.
This is where the compiler complains:
const
  clAcidGreen* = VColor(0xB0BF1AFF) # Error: 4042850303 can't be converted to VColor
  clAlgaeGreen* = VColor(0x64E986FF)
  clAliceBlue* = VColor(0xF0F8FFFF)
If I set clAcidGreen to VColor(0xB0BF1A), then the next error occurs with clAliceBlue.
What is going on? What am I missing? (I guess it has to do with the bit range of ints for JS?)
P.S. The complete code is here: https://github.com/arturo-lang/arturo/blob/master/src/helpers/colors.nim - if you think this whole library makes any sense for Nim, I could obviously create a PR to incorporate some of its methods in Nim's stdlib
OK, so, after posting the question (as usual lol), I tried to be more specific about what kind of int this VColor is. And since we just need 32 bits (4 components x 8 bits), I went for uint32.
I guess... problem solved. :)
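For reference, here's a minimal sketch of what the fixed declarations look like with uint32 as the base type (same constants as in the question; presumably close to what the linked colors.nim now does):

type
  VColor* = distinct uint32    # 4 components x 8 bits = 32 bits (RRGGBBAA)

const
  clAcidGreen*  = VColor(0xB0BF1AFF'u32)
  clAlgaeGreen* = VColor(0x64E986FF'u32)
  clAliceBlue*  = VColor(0xF0F8FFFF'u32)

The 'u32 suffix makes the literal itself a uint32, so the conversion to the distinct type never has to go through a signed int.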
(I still don't understand what an int ends up standing for in JS though. The documentation states that an int's "bitwidth depends on architecture, but is always the same as a pointer." So I'm guessing that either a JS int is not 32-bit at all, which I doubt, or it's not an unsigned 32-bit int?)
Basically, JS itself coerces numbers (normally 64-bit floats) to signed 32-bit integers when you use integer bitwise operations. This is why int is defined as int32 on the JS backend, and 0xB0BF1AFF is too big for an int32.
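A quick sanity check of the numbers (this snippet runs on the C backend, where int is 64-bit; only the literal values matter here):

echo int32.high                            # 2147483647
echo 0xB0BF1AFF                            # 2965314303 -- needs the 32nd bit
echo 0xF0F8FFFF                            # 4042850303 -- needs the 32nd bit
doAssert 0xB0BF1AFF > int32.high           # too big for a signed 32-bit int
doAssert 0xF0F8FFFF <= int64(uint32.high)  # but fits in an unsigned 32-bit value

Both constants use the top bit of the 32-bit word, so they fit in a uint32 but not in an int32.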
If you need 64-bit integers in JS, there are a few approaches, such as using JS BigInts, or implementing your own 64-bit integer type based on two int32s or some kind of TypedArray. Note that using int64 will not cause any compilation errors as is; however, high values will have float semantics, and some code may break.
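A hedged sketch of the "two 32-bit halves" idea; the names here (U64Pair, the + proc) are purely illustrative, not from any existing library:

type
  U64Pair = object
    hi, lo: uint32               # represented value = hi * 2^32 + lo

proc `+`(a, b: U64Pair): U64Pair =
  # unsigned 32-bit addition wraps around, so a wrapped low word signals a carry
  let lo = a.lo + b.lo
  let carry = if lo < a.lo: 1'u32 else: 0'u32
  U64Pair(hi: a.hi + b.hi + carry, lo: lo)

when isMainModule:
  let c = U64Pair(hi: 0, lo: 0xFFFF_FFFF'u32) + U64Pair(hi: 0, lo: 1'u32)
  doAssert c.hi == 1'u32 and c.lo == 0'u32

Multiplication, division, etc. take quite a bit more work, which is why BigInt (where available) is usually the less painful route.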
Numbers are a bit different in JavaScript. Essentially all numbers are float64, but for performance reasons integers are often stored internally as int32. I'm guessing that Nim defines int as int32 for the JavaScript target and then tries to make sure it never triggers anything that would force the JavaScript engine to upcast it to a float.
Either way, it's definitely NOT an unsigned 32-bit int.
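To make that concrete, reinterpreting the same 32 bits as signed flips the value negative, which is exactly why a constant like 0xF0F8FFFF can't live in a signed 32-bit int (just a sketch, runnable on the C backend):

echo 0xF0F8FFFF'u32                # 4042850303 as an unsigned 32-bit value
echo cast[int32](0xF0F8FFFF'u32)   # -252116993 when the same bits are read as signed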