I am still relatively new to doing this even in C, and since I am learning Nim I don't know the correct way to put a pointer into a pointer using cast in Nim.
Here is what I'm working on.
import ../obj_parser, streams

type
  BinWriter* = ref object of RootObj
    obj: ref obj_data

proc readObjFile*(self: BinWriter, name: string) =
  self.obj = getObjFile(name)

proc createBinary*(self: BinWriter, filename: string) =
  var s = newFileStream(filename, fmWrite)
  if s != nil:
    var total: int = 0
    total += (self.obj.vert.len * sizeof(float32))
    total += (self.obj.face.len * sizeof(uint32))
    total += (self.obj.nrml.len * sizeof(float32))
    total += (self.obj.tex.len * sizeof(float32))
    var all: pointer = nil
    var vert = alloc(self.obj.vert.len * sizeof(float32))
    for i in 0 .. <self.obj.vert.len:
      vert = cast[pointer](cast[int](self.obj.vert) + i)
      echo("sizeof vert: ", repr(vert))
    var face = alloc(self.obj.face.len * sizeof(uint32))
    for i in 0 .. <self.obj.face.len:
      face = cast[pointer](cast[int](self.obj.face) + i)
      echo("sizeof face: ", repr(face))
    var nrml = alloc(self.obj.nrml.len * sizeof(float32))
    for i in 0 .. <self.obj.nrml.len:
      nrml = cast[pointer](cast[int](self.obj.nrml) + i)
      echo("sizeof nrml: ", repr(nrml))
    var tex = alloc(self.obj.tex.len * sizeof(float32))
    for i in 0 .. <self.obj.tex.len:
      tex = cast[pointer](cast[int](self.obj.tex) + i)
      echo("sizeof tex: ", repr(tex))
    #all = cast[all](all + vert + face + nrml + tex) don't know what to do here???
    dealloc(tex)
    dealloc(nrml)
    dealloc(face)
    dealloc(vert)
    #dealloc(all)

var binary = BinWriter()
binary.readObjFile("../u.obj")
binary.createBinary("u.bin")
I checked around but saw no explanation or example of how to do this, but I'll keep looking.
Edit: I noticed I have other errors in this, but since I'm confused about a cast inside a cast I'll leave them. Something like the following does not make sense to me so far, and according to nimsuggest it is an error. I'm still trying to understand it better.
all = cast[ptr float32](cast[int](all) + vert))
The pointer type doesn't have a + proc out of the box (to prevent common pointer-arithmetic mistakes, IIRC). So you need to cast each pointer to an int, do the arithmetic on those, then cast the result back into a pointer.
Off the top of my head:
all = cast[pointer](
  cast[int](vert) +
  cast[int](face) +
  cast[int](nrml) +
  cast[int](tex)
)
You could define a + operator for pointers yourself to make this easier if you use it a lot. Also, you should consider using create and resize as alternatives to alloc and dealloc: the same thing, but they're type-safe and result in cleaner code.
All that said, using an {.unchecked.} array might make your life easier in general, depending on how much you're accessing individual items in these arrays. You can see an example of it here
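To illustrate the suggestion above, here is a minimal sketch of combining a type-safe allocation (create instead of alloc) with an unchecked array, so elements can be indexed without any casts. Note this is written for current Nim, where the old {.unchecked.} pragma is spelled UncheckedArray[T]; the names FloatBuf, n, and buf are just illustration.

```nim
# Sketch: allocate n float32s with create (type-safe alloc) and view the
# buffer through an UncheckedArray so we can index it directly.
type FloatBuf = ptr UncheckedArray[float32]

let n = 4
var buf = cast[FloatBuf](create(float32, n))  # typed allocation, no raw alloc
for i in 0 ..< n:
  buf[i] = float32(i) * 0.5                   # direct indexing, no casts

echo buf[3]          # 1.5
dealloc(buf)
```

The compiler skips bounds checks on an UncheckedArray, so you keep the raw-buffer semantics of alloc while the element type is tracked for you.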
This is what I came up with for the + operator:
proc `+`(a: pointer, p: pointer): pointer =
  result = cast[pointer](cast[int](a) + 1 * sizeof(p))
I believe a person has to multiply sizeof(p) by 1. I could make it use generics too, I guess, but I don't know.
And thank you filwit for your post, it was perfect.
I will look into create, resize, and the {.unchecked.} pragma.
You must not use signed integers in pointer arithmetic, because a memory address can't be negative and can be higher than 0x8000_0000 on 32-bit platforms or 0x8000_0000_0000_0000 on 64-bit platforms. So:
var newp = cast[pointer](cast[uint](oldp) + 1u)
I fail to see what the "all" should be. If I add four random memory addresses I just get a meaningless value, but that's probably the error you're talking about.
You need to add the size of the type the pointer points at. That's the reason you can't create a "+"-like function without more information: adding the size of the pointer type itself will only work for pointers to pointers, and for types which happen to have the same size as a pointer.
But when you have a type like var a: ptr int16, you can find the size of the elements with a[].sizeof. Therefore you can create the plus and minus functions for them.
var a: ptr int16
var t = @[1.int16, 2.int16, 3.int16]

proc `+`[T](a: ptr T, b: int): ptr T =
  if b >= 0:
    cast[ptr T](cast[uint](a) + cast[uint](b * a[].sizeof))
  else:
    cast[ptr T](cast[uint](a) - cast[uint](-1 * b * a[].sizeof))

template `-`[T](a: ptr T, b: int): ptr T = `+`(a, -b)

a = t[0].addr
echo a[]
a = a + 1
echo a[]
a = a + 1
echo a[]
a = a + -1
echo a[]
a = a - 1
echo a[]
There may be more elegant solutions for that code; this was the first example that came to mind. And there are unchecked arrays too.
How do I subtract two pointers, like in C, to get the distance between them in some array?
Consider this example:
byte *filebase, *file_p, *file_end;
byte dlightdata[MAX_MAP_LIGHTING];

byte *GetFileSpace (int size)
{
    byte *buf;

    file_p = (byte *)(((long)file_p + 3) & ~3);
    buf = file_p;
    file_p += size;
    //intcount += (size + 3) & ~3;  // failed attempt, the size is file_p+3
    if (file_p > file_end)
        Error ("GetFileSpace: overrun");
    printf("BUF:: %p\n", buf);
    return buf;
}

// . . . LightThread() . . .
out = GetFileSpace (lightmapsize);
f->lightofs = out - filebase;  // int f->lightofs;
// . . .

void LightWorld (void)
{
    file_p = dlightdata;
    filebase = file_p;
    file_end = filebase + MAX_MAP_LIGHTING;
    LightThread();
    lightdatasize = file_p - filebase;
}
You can cast the addresses of your array elements to int and subtract to get the byte distance; after dividing by sizeof(elementType) you have the index distance.
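A minimal sketch of that byte-distance/index-distance idea in Nim (the names data, byteDist, and indexDist are just illustration; seq elements are stored contiguously, so the subtraction is meaningful here):

```nim
# Cast both element addresses to int, subtract for the byte distance,
# then divide by the element size for the index distance.
var data = @[10'i16, 20, 30, 40]
let byteDist  = cast[int](addr data[2]) - cast[int](addr data[0])
let indexDist = byteDist div sizeof(int16)
echo byteDist    # 4  (two int16 elements apart)
echo indexDist   # 2
```

This is the Nim equivalent of `out - filebase` in the C example, except that C scales by the element size for you, while in Nim you do the division yourself.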
I think the posts of the other people explained that in more detail. See also the post of Mr Behrends in http://forum.nim-lang.org/t/1188#7366