Sorry, I don't know why these hex literals don't produce an error.
type IntType = range[0..1024]
var v2: IntType = 0x1024 # error
var v: int = 0xffffffffffffffffabcd'i8 # OK, range is not checked
var v1 = 0xffffffff_ffffabcd # OK, not checked either
var v2 = 1111111111111111111111111111111111111111111 # error
var a: int64 = 9223372036854775808 # Error: number 9223372036854775808 out of valid range
var b: int64 = 0xFFFFFFFF_FFFFFFFF_1111 # no error, why?
Well, maybe you can copy them, save them to a .nim file, and compile it.
var v: int = 0xffffffffffffffffabcd'i8
var v1 = 0xffffffff_ffffffff_ffffabcd
var v2 = 1111111111111111111111111111111111111111111
@Angluca: It's not about whether I can save it to a file and compile it; I asked because your post was extremely vague. You need to be precise about what you were expecting and how those expectations didn't match the actual result.
For example, 0x1024 is a hexadecimal literal whose value, 4132, is greater than the decimal number 1024. So, obviously, it does not fit in the range 0..1024, and the compiler rejects it. There is an error here, so I'm not even sure what problem you are reporting.
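A minimal snippet (assuming a current Nim compiler; the variable names are just for illustration) that makes this concrete:

type IntType = range[0..1024]

echo 0x1024                  # prints 4132
var ok: IntType = 1024       # fine: 1024 is the upper bound of the range
# var bad: IntType = 0x1024  # rejected at compile time: 4132 is outside 0..1024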
The one example that seems to exhibit a bug is 0xffffffffffffffffabcd'i8 not raising an error; we'll get back to that in a minute.
@nimluckybull: Both are correct. Decimal constants are required to match the range between T.low and T.high. Hexadecimal, octal, and binary constants, however, can have leading one bits so that you can write negative numbers as bit patterns. Hence, 0xffff'i16 == -1'i16 (for example).
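A small illustration of that rule (the variable names are only for the example, and this assumes a current Nim compiler):

echo 0xffff'i16 == -1'i16    # true: the hex literal is read as a bit pattern
var x: int16 = 0xFFFF'i16    # OK: leading one bits give the negative value -1
# var y: int16 = 65535'i16   # rejected: a decimal constant must lie in int16.low..int16.high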
The one bug that I am seeing is that 0xffffffffffffffffabcd'i8 should be an invalid constant (but it would be correct with 'i16 instead).
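For contrast, a hedged sketch of the behavior one would expect under that rule (the names are only for illustration):

var ok: int8 = 0xFF'i8                       # fine: exactly 8 bits, read as the bit pattern for -1
# var bad: int8 = 0xffffffffffffffffabcd'i8  # should be rejected: far more bits than int8 can hold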