Hi,
what is the granularity of the GC, actually?
I have some structures and was wondering whether it makes any sense at all to reduce their size, or whether the GC's allocation granularity is coarser than that anyway.
Another thing I was wondering about: is it possible that the GC takes only the significant bits of a ref into account? Then the remaining bits could be used to encode some extra information. I know this is hacky, but I currently have a problem with my implementation of the Neko VM: the original code does exactly this (it uses bit 0 to indicate an integer value and can store up to 31/63 bits without an extra memory allocation). I have tried a myriad of alternative solutions, and the best I have come up with so far is to use a separate stack and a separate accumulator for each value type, but that gets quite complex...
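For reference, the trick looks roughly like this - a minimal sketch in Nim, assuming a 64-bit target; `NekoValue`, `fromInt`, `isInt` and `toInt` are made-up names for illustration, not the actual Neko or Nimrod API:

```nim
type
  NekoValue = distinct int  # bit 0 = 0: a real ref, bit 0 = 1: an inline integer

proc fromInt(x: int): NekoValue =
  ## Store an integer inline: shift left and set bit 0,
  ## so no heap allocation is needed (63 usable bits here).
  NekoValue(cast[int]((cast[uint](x) shl 1) or 1))

proc isInt(v: NekoValue): bool =
  (int(v) and 1) == 1

proc toInt(v: NekoValue): int =
  ## (x shl 1) or 1 equals 2*x + 1 in two's complement,
  ## so this recovers x for positive and negative values alike.
  (int(v) - 1) div 2

when isMainModule:
  let v = fromInt(-42)
  assert isInt(v)
  assert toInt(v) == -42
```

Whether the GC simply skips such odd, misaligned "refs" while scanning or gets confused by them is exactly what I would like to know.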
To 1.) I meant memory granularity, not time granularity.
To 2.) I have to out myself as being ignorant of certain things - I have never used IRC or anything like that, so I will first have to ask my kids how to (oh no - this will be a total loss of face). But thanks, yes - I will try to make the GC forgiving.
> Another thing I was wondering about: is it possible that the GC takes only the significant bits of a ref into account? Then the remaining bits could be used to encode some extra information.
BTW the new thing is to use float64 as the underlying object representation and encode pointers and integers as special NaN values: current CPUs only ever produce a single NaN bit pattern, so all the other bit patterns representing NaN are free to abuse. LuaJIT does this, for instance. Of course this is even more work to implement and to make it play nice with Nimrod's GC.
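To make that concrete, here is a minimal NaN-tagging sketch in Nim, assuming a 64-bit platform where heap pointers fit into the low 48 bits; `Boxed`, the tag constants and the helper names are made up for illustration and are not taken from LuaJIT, Neko or Nimrod:

```nim
type
  Boxed = distinct uint64  # either a real float64 bit pattern or a tagged NaN

const
  TagMask = 0xFFFF_0000_0000_0000'u64
  TagInt  = 0x7FF9_0000_0000_0000'u64  # NaN payloads the CPU never produces on its own,
  TagPtr  = 0x7FFA_0000_0000_0000'u64  # so they are free to carry our own tags
  Payload = 0x0000_FFFF_FFFF_FFFF'u64  # low 48 bits hold the pointer

proc boxFloat(f: float64): Boxed = Boxed(cast[uint64](f))
proc boxInt(i: int32): Boxed = Boxed(TagInt or uint64(cast[uint32](i)))
proc boxPtr(p: pointer): Boxed = Boxed(TagPtr or (cast[uint64](p) and Payload))

proc isInt(v: Boxed): bool = (uint64(v) and TagMask) == TagInt
proc isPtr(v: Boxed): bool = (uint64(v) and TagMask) == TagPtr
proc isFloat(v: Boxed): bool = not isInt(v) and not isPtr(v)  # includes the real NaN

proc asFloat(v: Boxed): float64 = cast[float64](uint64(v))
proc asInt(v: Boxed): int32 = cast[int32](uint32(uint64(v) and 0xFFFF_FFFF'u64))
proc asPtr(v: Boxed): pointer = cast[pointer](uint64(v) and Payload)

when isMainModule:
  assert asInt(boxInt(-7)) == -7
  assert asFloat(boxFloat(3.5)) == 3.5
  assert isFloat(boxFloat(3.5)) and not isFloat(boxInt(0))
```

The catch is the one already mentioned: a pointer stored this way is no longer a plain ref, so the GC cannot trace it on its own; that is the part that needs extra care to play nice with Nimrod's GC.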