Citing @Araq:
On the other hand, in Nim's future I would like to replace the VM by native code.
See Context.
It would interest me what the reasons for this undertaking are; isn't a register VM state of the art for compile-time evaluation? I've heard D is striving for one, and Jai has something akin to it.
The way I'd do this is https://github.com/timotheecour/Nim/issues/598, which would allow user code to be registered as vmops, which run natively:
proc fn(a: int): int {.vmhook.} =
  # this will be compiled to machine code
  # and run like a vmop
  ...
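For concreteness, here is a minimal sketch of how user code might call such a proc at compile time; {.vmhook.} is the pragma proposed in the linked issue (not an existing Nim feature), and parseDigits is a made-up example of an expensive pure function:

proc parseDigits(data: string): int {.vmhook.} =
  # under the proposal this body is compiled to machine code and
  # invoked from the VM like a built-in vmop
  for c in data:
    if c in {'0'..'9'}:
      result = result * 10 + ord(c) - ord('0')

const n = parseDigits("20211231")  # compile-time call, executed natively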
The 2nd ingredient is cling, which allows incremental C/C++ compilation based on clang (refs https://github.com/timotheecour/Nim/issues/705); it provides fast compile times (suitable for JIT use) and high-performance code generation (close to what you'd get from compiling via clang directly).
It would interest me what the reasons for this undertaking are; isn't a register VM state of the art for compile-time evaluation?
The register-based VM isn't bad because its performance is not JIT-like (though it could easily be faster...); it's bad because it is buggy and its remaining bugs are very expensive to fix. It's a subsystem of its own, trying to emulate precisely what Nim's native backends can do, and it needs to support symbolic evaluation while at the same time allowing low-level bit twiddling.
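To make that tension concrete, here is a small, purely illustrative piece of user-level code (not compiler internals) exercising both sides: a macro that manipulates NimNode ASTs inside the VM, and a const that requires exact unsigned bit arithmetic:

import std/macros

# symbolic evaluation: macros run inside the VM and operate on NimNode ASTs
macro twice(x: typed): untyped =
  result = quote do: `x` + `x`

# low-level bit twiddling: the VM must evaluate this exactly as the
# C backend would at run time
proc crc32Step(crc: uint32): uint32 =
  if (crc and 1'u32) != 0'u32: (crc shr 1) xor 0xEDB88320'u32
  else: crc shr 1

const stepped = crc32Step(0xFFFF_FFFF'u32)  # evaluated by the VM
echo twice(21), " ", stepped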
I consider the VM's performance to be good enough and its impact on compile-times is not important -- as IC matures, the VM's results are cached perfectly.
I consider the VM's performance to be good enough
For most cases maybe, but a 20_000x slowdown for some tasks (e.g. https://github.com/nitely/nim-regex/issues/104) justifies the need for {.vmhook.} (aka user-defined vmops).
That said, if at least some code needs the VM, then the VM still needs to emulate all/most of the native C backend semantics. So a question is whether the VM can be entirely replaced by a JIT, and whether that would end up slower (e.g. if the cost of compiling to machine code exceeds the cost of running VM code, for compile-time code that doesn't involve a lot of loops).
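As a rough illustration of that trade-off (illustrative code, not a benchmark): loop-heavy compile-time work is where interpretation cost dominates and native execution would pay off, while for one-shot trivial expressions the cost of JIT-compiling would likely exceed the cost of just interpreting them:

# loop-heavy CT work: many VM instructions, native execution would pay off
proc countPrimes(n: int): int =
  for i in 2 ..< n:
    block check:
      for j in 2 .. i div 2:
        if i mod j == 0: break check
      inc result

const heavy = countPrimes(2_000)

# trivial CT work: JIT-compiling this would cost more than interpreting it
const light = 3 * 7 + 1

echo heavy, " ", light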
Another way: an actual microVM like Firecracker running its own instance of the Nim compiler.
pros:
cons:
If enough build systems match the short compatibility list, this might be worth a try.
That said, if at least some code needs the VM, then the VM still needs to be feature complete (i.e. emulate all/most of the native C backend semantics). So a question is whether the VM can be entirely replaced by a JIT, and whether this would be slower in the end.
A JIT has the same problem as any VM that operates directly on packed data -- it's unclear how it can work with NimNodes effectively, and details like const x = [procA, procB, procC] must continue to work. In other words, if you translate proc names into machine addresses, you need a way to turn these addresses back into Nim symbols (PSym).
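A minimal version of that case, written against current Nim: the VM evaluates the const at compile time, but the elements must stay identifiable as Nim symbols (PSym) rather than become opaque machine addresses, so the code generator can still emit references to procA/procB/procC later:

proc procA(): string = "A"
proc procB(): string = "B"
proc procC(): string = "C"

# evaluated by the VM at compile time; the elements must remain symbols,
# not raw code addresses, for the backend to use afterwards
const handlers = [procA, procB, procC]

for h in handlers:
  echo h()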