Since pointers are unsafe, you should be using refs. I think https://gist.github.com/gradha/a855001d1e878a07452b should work (untested). You can also look at the diff to see what I changed.
With regard to DRY and the compiler checks, I believe this could be eased with macros, but it is not trivial. Hopefully somebody can provide nice helpers.
I thought I could get away with exposing base objects for plugins to implement, until I read online that dispatch trees (whose details I am not very familiar with) require all types and methods to be available at compile time, which rules out arbitrary additional objects and implementations being added by a plugin author in separate compilation units.
Er, but this is only a limitation of the current implementation. Supporting methods across DLL boundaries is not particularly hard. That doesn't require a whole new language feature!
I am not a fan of the inefficient tuple-of-closures solution as I could be storing a lot of these objects.
Shrug. So essentially you want to squeeze performance out of an inherently slow software architecture.
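(For reference, the tuple-of-closures approach mentioned above looks roughly like the following minimal sketch; Drawable, Circle and toDrawable are made-up names, not anyone's actual code. The cost being discussed is that every interface value carries a separate closure, i.e. an environment plus a proc pointer, per operation.)

type
  Drawable = tuple
    draw: proc () {.closure.}
    area: proc (): float {.closure.}

  Circle = object
    radius: float

proc toDrawable(c: ref Circle): Drawable =
  # each field closes over the concrete object; no inheritance is needed, but
  # every Drawable value stores an environment plus proc pointer per operation
  result.draw = proc () =
    echo "circle r=", c.radius
  result.area = proc (): float =
    3.14159 * c.radius * c.radius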
Gradha:
Thanks for the improvements! To add to them, I also got rid of the very unsafe cast[] and replaced it with a conversion as follows:
type Circle {.final.} = object of TObject
# ...
let circle = (ref Circle)(self)  # runtime-checked conversion down to the concrete type
# ...
(ref TObject)(circle)  # conversion back up to the base type
My additional concern with even attempting the already awkward macro direction is that I don't know how to put in debuginfo for sane line debugging. This is another reason I would have preferred this to be a library feature.
Araq: Er, but this is only a limitation of the current implementation. Supporting methods across DLL boundaries is not particularly hard. That doesn't require a whole new language feature!
Good to know about the first part, and I look forward to it. I disagree about the last part (if by "language feature" you mean interfaces rather than V-tables) -- more on this below.
Araq: Shrug. So essentially you want to squeeze performance out of an inherently slow software architecture.
I don't understand the relevance of this comment.
I am open to constructive comments, but V-tables are beside the point. My point is that interfaces are far more flexible and useful than strict object hierarchies, and I have multiple uses for them right now, a number of which require this to work across different compilation targets. If the compiler writers can implement this feature with dispatch trees or some other preferred way, that's fine; but I don't immediately see how I, as a library writer, can implement this more easily and efficiently than with V-tables. And my "library" implementation is not DRY at all -- it's actually much less DRY if you want both ref Circle and ref TObject (previously pointer) versions of the proc implementations. If you have suggestions that address these concerns (including the macro concern above), I would love to hear them. But if there's no clean generic solution, I do believe the language is lacking an important feature.
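(For readers without the gist handy, the hand-rolled V-table approach under discussion looks roughly like the minimal sketch below; it is not the gist's actual code, and Drawable, Circle, drawCircle and toDrawable are hypothetical names. TObject is used as in the snippet above; it is RootObj in later Nim. The DRY complaint shows up because every concrete type needs its own base-typed wrapper procs plus a table of them.)

type
  DrawableVTable = object
    draw: proc (self: ref TObject) {.nimcall.}

  Drawable = object   # "fat" interface value: object plus pointer to a shared table
    obj: ref TObject
    vtab: ptr DrawableVTable

  Circle = object of TObject
    radius: float

proc drawCircle(self: ref TObject) {.nimcall.} =
  let c = (ref Circle)(self)   # down-convert to the concrete type
  echo "circle r=", c.radius

var circleVTable = DrawableVTable(draw: drawCircle)   # one table per concrete type

proc toDrawable(c: ref Circle): Drawable =
  Drawable(obj: c, vtab: addr circleVTable)

proc draw(d: Drawable) =
  d.vtab.draw(d.obj)   # one indirection through the shared table

var c: ref Circle
new(c)
c.radius = 2.0
toDrawable(c).draw()   # prints: circle r=2.0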
This is entirely a subjective experience, but generating code indirectly by programmatically constructing an AST feels a bit awkward to me when building complex things. Of course, I agree that it comes with great power, but I value the readability of the template approach and prefer it where possible to avoid the indirection.
By line-debugging, I mean using the debugger and stepping line by line. That requires debuginfo metadata with correct/logical file and line numbers associating code with binary instructions. With the Nim line debugger this works automatically for templates. I haven't tried line-debugging something generated by a macro, and I don't immediately see how the compiler would determine the debuginfo automatically.
Araq said: > Shrug. So essentially you want to squeeze performance out of an inherently slow software architecture.
Hm, I don't know if this is a fair criticism, since multimethod performance and vtable performance seem to be pretty evenly matched (or, if you are feeling critical, vtables win). The big area multimethods win in is that they don't require any changes to data structures, since they are implemented through logic switches in the code.
It would be nice to allow methods to use a vtable strategy, where the user could designate a vtable structure to use, but I'm not sure if compiler support is strictly needed.
In the end, we end up with a set of 4+ macro-behaviors that must operate correctly in the face of corner cases, as well as act in ways the user expects.
Whether or not such a system is better implemented in the compiler is debatable. The compiler would give the whole system better integration with the language, whereas the macro solution would likely feel somewhat... rough (at least until runtime APIs and support progress somewhat).
There might be a way to increase the performance of multimethods via type enumeration and assembly. If each type a multimethod can be called on is enumerated, starting at 0, then a jump instruction using the enumeration could be used to jump directly to the correct code.
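(A rough sketch of that idea, with made-up Shape/collide names and TObject again used as the root: each participating type gets a dense enum tag, the two tags are flattened into one index, and the case over that index is something the C backend can turn into a jump table.)

type
  ShapeKind = enum skCircle, skRect   # one tag per participating type, starting at 0
  Shape = object of TObject
    kind: ShapeKind
  Circle = object of Shape
    radius: float
  Rect = object of Shape
    w, h: float

const nKinds = ord(high(ShapeKind)) + 1

proc collide(a, b: ref Shape) =
  # flatten both tags into one dense index; the case over it can compile to a jump table
  case ord(a.kind) * nKinds + ord(b.kind)
  of ord(skCircle) * nKinds + ord(skCircle): echo "circle/circle"
  of ord(skCircle) * nKinds + ord(skRect):   echo "circle/rect"
  of ord(skRect)   * nKinds + ord(skCircle): echo "rect/circle"
  of ord(skRect)   * nKinds + ord(skRect):   echo "rect/rect"
  else: discard   # unreachable for a dense enum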
Everybody thinks that, but this is not at all what typeclasses do. However, I'm beginning to think that maybe they should do what everybody thinks they do...
+1 for being able to define a typeclass that can be used for both run-time-resolved and compile-time-resolved polymorphism. (I have use-cases in mind, but I'm sure you can think of your own--basically any situation where you'd want to use the same concept/operations in both application-boundary and inner-loop code.)
I see typeclasses as C++ templates done right: a constraint-defining tool that helps write libraries decoupled from user code (but not in the clumsy C++ way). I see the reduction of dependencies as the key (and only) advantage. Ideally such libraries would not need any imports.
This is, however, only a theoretical view; I didn't get very far here.
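(As a small compile-time illustration of that "constraints tool" view, with hypothetical names: the library declares only the constraint it needs, and its generic procs compile against any user type that satisfies it, without the library importing or knowing the user's modules. This uses the simple built-in style of type class; user-defined typeclasses would allow richer constraints.)

type
  Number = int | float   # the constraint the library exposes

proc double[T: Number](x: T): T =
  # works for anything satisfying the constraint, resolved at compile time
  x + x

echo double(3)     # 6
echo double(1.5)   # 3.0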