It seems to me like we can completely emulate object variants with static enums.
Furthermore, I would guess that it's more memory-efficient since the "kind" is used at compile-time only.
What are the differences, limitations, and tradeoffs between the two versions of the following?
type
  NodeKind = enum   # the different node types
    nkInt,          # a leaf with an integer value
    nkFloat         # a leaf with a float value

  # Version 1 with object variant
  Node = ref NodeObj
  NodeObj = object
    case kind: NodeKind   # the ``kind`` field is the discriminator
    of nkInt: intVal: int
    of nkFloat: floatVal: float

  # Version 2 with conditional fields
  Node2[NK] = ref NodeObj2[NK]
  NodeObj2[NK: static[NodeKind]] = object
    when NK == nkInt:
      intVal: int
    elif NK == nkFloat:
      floatVal: float
You can store values of type Node all in one single seq.
For Node2: I really think there is no way to store values of type Node2[nkInt] and Node2[nkFloat] in the same seq.
So version 1 (Node) is more dynamic. But I am not really sure, I have never used object variants...
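For illustration, a minimal sketch (assuming the Node type from the question) of why version 1 is more dynamic: mixed nodes can live in one seq and be dispatched on kind at runtime.

let nodes = @[Node(kind: nkInt, intVal: 1), Node(kind: nkFloat, floatVal: 2.5)]
for n in nodes:
  case n.kind               # runtime dispatch on the discriminator
  of nkInt:   echo "int: ", n.intVal
  of nkFloat: echo "float: ", n.floatVal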
Yes, Stefan_Salewski is right:
type
  NodeKind = enum   # the different node types
    nkInt,          # a leaf with an integer value
    nkFloat         # a leaf with a float value

  # Version 1 with object variant
  Node = ref NodeObj
  NodeObj = object
    case kind: NodeKind   # the ``kind`` field is the discriminator
    of nkInt: intVal: int
    of nkFloat: floatVal: float

  # Version 2 with conditional fields
  Node2[NK] = ref NodeObj2[NK]
  NodeObj2[NK: static[NodeKind]] = object
    when NK == nkInt:
      intVal: int
    elif NK == nkFloat:
      floatVal: float

# Compiles fine
let a = @[NodeObj(kind: nkInt), NodeObj(kind: nkFloat)]
# Doesn't compile
let b = @[NodeObj2[nkInt](), NodeObj2[nkFloat]()]
Also this compiles (and changes the active branch at runtime):

type
  NodeKind = enum
    nkInt, nkFloat

  Node = ref NodeObj
  NodeObj = object
    case kind: NodeKind
    of nkInt: intVal: int
    of nkFloat: floatVal: float

let a: Node = Node(kind: nkInt, intVal: 2)
echo a.intVal
# Note: newer Nim versions may reject this branch switch at runtime
# unless field checks are disabled.
a.kind = nkFloat
a.floatVal = 2.0
echo a.floatVal
@Udiknedormin don't object variants use a fat pointer anyway?
In any case, I tried version 2 and it produces lots of errors that are very hard to track down.
See issue #6331
In this code the type signature should be
template shape*[B,T](t: Tensor[B,T])
instead of
template shape*(t: Tensor)
for proper compilation.
type
  Backend* = enum
    Cpu,
    Cuda

  Tensor*[B: static[Backend]; T] = object
    shape: seq[int]
    strides: seq[int]
    offset: int
    when B == Backend.Cpu:
      data: seq[T]
    else:
      data_ptr: ptr T

template shape*(t: Tensor): seq[int] =
  t.shape
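For reference, a minimal sketch of the corrected signature with the generic parameters spelled out (same simplified Tensor type as above):

# Making [B, T] explicit lets the template instantiate correctly for each backend.
template shape*[B, T](t: Tensor[B, T]): seq[int] =
  t.shape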
Unfortunately, with the original signature the error message points to the "when B ==" line instead of the template line, which is a pain when refactoring a huge codebase.
@mratsim No, as far as I know variant types are tagged unions. Their minimal size is the size of the biggest variant plus the size of the kind, but they are usually a bit bigger for alignment reasons. For example, when sizeof(kind) == 1 and sizeof(biggestVariant) == 8, it usually results in sizeof(whole) == 16. But that is regardless of how many variants are needed.
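A quick way to see the size difference, assuming the NodeObj / NodeObj2 definitions from earlier in the thread are in scope (exact numbers are platform-dependent):

# On a typical 64-bit target the variant pads an 8-byte payload plus a
# 1-byte kind up to 16 bytes, while the static-enum version stores only
# the selected field.
echo sizeof(NodeObj)           # e.g. 16
echo sizeof(NodeObj2[nkInt])   # e.g. 8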
I'm afraid you just encountered one of the "surprises" static has to offer. There are more with distinct and a few other "cool" features. I'm afraid you have to get used to the fact that things just do not work properly, just like I did. :(
For now, feel free to use this workaround for the static enum (marker types plus a concept):
type
  BackendCpu* = object
  BackendCuda* = object

  Backend* = concept c
    c is BackendCpu or c is BackendCuda

  Tensor*[B: Backend; T] = object
    shape: seq[int]
    strides: seq[int]
    offset: int
    when B is BackendCpu:
      data: seq[T]
    else:
      data_ptr: ptr T
The only difference is that you can't EVER get a non-static Backend value (without the overhead of a vtable, that is).
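An untested usage sketch of the workaround above (in the same module as the type, since the fields are not exported; the values are placeholders):

# Construct a CPU-backed tensor; `when B is BackendCpu` selects the
# `data: seq[T]` field at compile time.
var t = Tensor[BackendCpu, float](shape: @[2, 3], strides: @[3, 1],
                                  offset: 0, data: newSeq[float](6))
echo t.shape.len   # 2 dimensions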
In the end, I created two different types Tensor and CudaTensor.
I believe it will provide the best extensibility and ease of use, both now and in the future (for example, adding new fields to track the GPU device).
Now I just pray for VTable to land ASAP so I can have a collection of floats, tensors and CudaTensors.
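A rough sketch of what that two-type design can look like (illustrative definitions, not the actual Arraymancer ones):

type
  # CPU-backed tensor
  Tensor*[T] = object
    shape, strides: seq[int]
    offset: int
    data: seq[T]

  # GPU-backed tensor; extra fields (such as a device id) can be added
  # later without touching Tensor
  CudaTensor*[T] = object
    shape, strides: seq[int]
    offset: int
    data_ptr: ptr T

# Operations shared by both can still be written once against a typeclass:
template rank*(t: Tensor or CudaTensor): int = t.shape.len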
@LeuGim True. ^^" Initially I was trying to use a compile-time container and test whether it contains a certain type, which is why it ended up written in this bizarre way...
By the way: it may make a difference when vtable pointers come into play. I'm not sure whether or-types would have them too. :-/