Genuinely interested in the design decision behind nimscript.
Why is it interpreted? Is it not possible to compile the nimscript to a local binary and then execute it, or JIT compile it? Or is this too complex to implement for too little benefit?
Compiling to a separate program and executing it, or compiling to a DLL and then loading that, is possible; it's what Rust does with procedural macros. This has some problems though. For one, the behavior of stuff like floats will probably be as if you are running on the host platform, not the target platform (nimscript afaik works like this anyway). Performance will also probably be pretty bad on Windows, especially for the "compile to binary and execute" model, because Windows Defender may try to scan the binary (and CreateProcess is pretty slow in the first place).
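A minimal sketch of that host-vs-target point, using Nim's `when nimvm` switch (the proc name is just illustrative): whatever the compile-time VM evaluates runs on the machine doing the compilation, while the other branch ends up in the produced binary.

```nim
proc whereAmI(): string =
  when nimvm:
    "compile-time VM (host machine semantics)"
  else:
    "compiled code (target machine semantics)"

const evaluatedDuringCompilation = whereAmI()  # forced through the VM on the host
echo evaluatedDuringCompilation                # prints the VM branch's text
echo whereAmI()                                # runs in the binary, prints the other branch
```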
So theoretically nimscript could be a JIT, which would be pretty cool tbh. I think there's some work in the LLVM backend to do that using LLVM's JIT infrastructure.
Currently the Nim VM and Nimscript are about as fast as Python according to @Araq, and Python is already fast enough for many purposes. I think maintainability and debuggability significantly trump the advantages of a JIT.
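As a rough, unscientific way to get a feel for the VM's speed yourself, something like the sketch below can be compared against an equivalent Python script. The `--benchmarkVM:on` flag is what makes `cpuTime()` usable inside the VM; the function and numbers are just illustrative.

```nim
# build with: nim c -d:release --benchmarkVM:on vmbench.nim
import std/times

proc fib(n: int): int =
  if n < 2: n else: fib(n - 1) + fib(n - 2)

static:
  let start = cpuTime()        # available in the VM only with --benchmarkVM:on
  discard fib(25)
  echo "VM:       ", cpuTime() - start, " s"

let start = cpuTime()
discard fib(25)
echo "compiled: ", cpuTime() - start, " s"
```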
Where compile-time evaluation is slow is the type system (semantic checks, sigmatch); raw compute is decent. I tested it with compile-time bigint arithmetic: https://github.com/mratsim/constantine/blob/0944454/constantine/math/config/precompute.nim#L489-L525
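For a feel of what that kind of compile-time compute looks like, here is a much-simplified stand-in for the linked code: the real thing works on multi-word bigint limbs, while this sketch uses plain uint64 and made-up parameters, and just precomputes a small table of modular powers entirely in the VM.

```nim
# Modulus kept below 2^32 so products fit in uint64 without overflow.
proc powMod(base, exp, m: uint64): uint64 =
  result = 1
  var b = base mod m
  var e = exp
  while e > 0:
    if (e and 1) == 1:
      result = result * b mod m
    b = b * b mod m
    e = e shr 1

proc buildTable(): array[8, uint64] =
  for i in 0 ..< 8:
    result[i] = powMod(3'u64, uint64(i) * 1000, 1_000_000_007'u64)

const powers = buildTable()   # the whole computation runs in the compile-time VM
echo powers
```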
Nim's VM was designed to evaluate NimNode structures effectively; it was built for Nim's macro system, and the NimScript thing came much later. Back then I did look into using Lua instead of a custom VM, but the NimNode handling is much harder to do with stock interpreter technology, or JITs for that matter.
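For context, a tiny macro like this (illustrative, not from the compiler) shows what the VM has to support natively: the argument arrives as a NimNode tree, and the macro body, which the VM executes at compile time, inspects and builds such trees.

```nim
import std/macros

macro dump(e: untyped): untyped =
  ## The macro body runs inside the VM at compile time.
  ## `e` is a NimNode: the AST of whatever expression was passed in.
  let asText = e.toStrLit      # NimNode holding the expression's source text
  result = quote do:           # build a new AST to splice back into the program
    echo `asText`, " = ", `e`

dump(3 * 14)    # expands to: echo "3 * 14", " = ", 3 * 14
```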
That said, I now think there is an inherently better design but I don't want to spoil it because it'll be covered in my next book. Eventually.