The initial problem is:
const data: array[4, byte] = [0xFD, 0x10, 0x20, 0x30]
# Error: type mismatch: got (Array constructor[0..3, int]) but expected 'array[0..3, byte]'
Because typing 'u8 or 'u32 suffixes by hand is no fun for large const arrays, I thought I'd use some sort of automation. Enter the first option, a template:
import sequtils

template seqOf[T, N](buff: openArray[T]): untyped =
  map[T, N](buff, proc (x: T): N = x.N)

const dataSeq = seqOf[int, byte]([0xEF, 0x7F, 0x80, 0xFF, 0x10A])
Everything works fine (with proper data), and as a bonus I get an error when overflowing with 0x10A.
But then I read some posts saying that const seqs are not so good and that const arrays should be used instead (it shouldn't make a difference in my case, as I'm using them as global variables anyway). Enter the second option, a macro:
import macros

macro arrayOfU8(data: openArray[int]): untyped =
  result = newNimNode(nnkStmtList)
  var arrayStmt = newNimNode(nnkBracket)
  for i in 0 ..< data.len:
    var u8Node = newNimNode(nnkUInt8Lit)
    u8Node.intVal = intVal(data[i])
    arrayStmt.add(u8Node)
  result.add(arrayStmt)

const dataArray = [0xEF, 0x7F, 0x80, 0xFF, 0x10A].arrayOfU8()
So I got my array declared, but 0x10A yields no error and instead gets silently truncated to its low byte, 0x0A. While I don't care that much about having the error triggered, it would have been nice to catch errors early, as in the seq case.
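For what it's worth, here is a minimal sketch (my addition, not from the original post) of how the macro could restore that early error by range-checking each literal with error from the macros module:

import macros

macro arrayOfU8Checked(data: openArray[int]): untyped =
  # Build a bracket expression of uint8 literals, rejecting values
  # that do not fit in a byte instead of silently truncating them.
  result = newNimNode(nnkBracket)
  for i in 0 ..< data.len:
    let v = intVal(data[i])
    if v < 0 or v > 0xFF:
      # report the error at the offending literal's source position
      error("value " & $v & " does not fit in a byte", data[i])
    var u8Node = newNimNode(nnkUInt8Lit)
    u8Node.intVal = v
    result.add(u8Node)

const ok = [0xEF, 0x7F, 0x80, 0xFF].arrayOfU8Checked()   # compiles
# const bad = [0x10A].arrayOfU8Checked()                 # fails: value 266 does not fit in a byte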
Is there any other approach to my initial problem? Thank you.
You can write:
const data = [0xFD.byte, 0x10, 0x20, 0x30]
or
const data = [0xFD.uint32, 0x10, 0x20, 0x30]
The array then infers its size and element type automatically from the first element.
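As a quick check (my addition, not from the thread), the inferred type can be asserted at compile time:

const data = [0xFD.byte, 0x10, 0x20, 0x30]
static: doAssert data is array[4, byte]  # size and element type were inferred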
I was just considering this situation last night when playing with opengl.
OderWat - that's the solution I'm using, but it does get a bit awkward sometimes:
import opengl

const x: array[4, GLVectorf2] =
  [
    [1.0.GLfloat, 1.0],
    [1.0.GLfloat, 1.0],
    [1.0.GLfloat, 1.0],
    [1.0.GLfloat, 1.0]
  ]
I wondered if I could use a converter to automatically downgrade float64 to GLfloat, but this didn't change anything (maybe I'm using it wrong?):
import opengl

converter toGLFloat(i: float): GLfloat = i.GLfloat

const x: array[4, GLVectorf2] =
  [ # Error: got array constructor for float64 but expected GLfloat
    [1.0, 1.0],
    [1.0, 1.0],
    [1.0, 1.0],
    [1.0, 1.0]
  ]
@coffeepot your example with the converter does not work because of the way type inference works: it proceeds from the inside out.
It starts at the innermost literal, 1.0, and says: that's a float (equivalent to float64). Then it continues with the array [1.0, ...]; all further elements of an array constructor are assumed to have the same type as the first one, and if they don't, the compiler looks for an available conversion. So [[1.0, 1.0], ...] is an array of arrays of float64. That is not the expected type array[4, GLVectorf2], so the compiler looks for a converter from array[4, array[2, float64]] to array[4, GLVectorf2]. If you provided that converter it would work, but only for this exact case, so I don't recommend doing it.
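To illustrate, a minimal sketch of the whole-array converter Krux02 describes (assuming GLVectorf2 is an array of two GLfloats, as in the opengl wrapper; shown for illustration, not as a recommendation):

import opengl

# Hypothetical converter covering only this exact shape.
converter toGLVectorf2Array(a: array[4, array[2, float64]]): array[4, GLVectorf2] =
  for i in 0 .. 3:
    for j in 0 .. 1:
      result[i][j] = a[i][j].GLfloat

let x: array[4, GLVectorf2] = [[1.0, 1.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0]]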
Thanks for your reply Krux02, that makes a lot of sense. It might not seem like a good idea to have loads of converters for OpenGL types (GLVectorf2, GLVectorf3, etc.), but it is quite tempting!
In a way, it's kind of good that you have to write a converter for the whole sub-array, as automatically converting float64 -> float32 without warning seems a bit dodgy.