You should use -d:release instead of --opt:speed. Where is your code so we can see what exactly is slow? If you're using my bigints, they're well known to be slower than gmp: https://github.com/cowboy-coders/nim-gmp
Nim is much slower than C++, D, Rust, and Java.
I would really like to see your comparisons, so I can avoid those cases in my own code. From what I have seen, the performance of these languages is generally very similar, with the exception of a few very special corner cases, and my own tests show similar results. (Of course, for Java, memory consumption and startup time are generally a weak point.)
Some weeks ago you asked about GTK3. Have you been able to do first tests with my wrappers? Let me know if something does not work as expected. (For varargs you may need the latest devel version of Nim; Araq recently tried to fix it. In a few days the GIO module should be available as well.)
Using -d:release certainly makes things a lot, lot better than --opt:speed. I will have to try and analyse why I was drawn to what was clearly the wrong thing to do.
Stefan, I haven't been able to tinker with Nim and GTK+3 since I asked: stuff came up, and I had to do work to earn money rather than continue having fun. I have to admit that the fact that I cannot do a "koch install $HOME/Built" is increasingly annoying, as I am trying to use the same filestore from both Debian Sid and Fedora Rawhide.
So optimizing for speed doesn't give you speed? That seems a bit like violating the Sale of Goods Act ;-)
The version to beat is generally C++ with TBB.
I think I learned very early that -d:release should be used for maximal performance, turning off all checks. But it seems that some people indeed still miss that fact. Personally I was not very happy with the term "-d:release" -- it is not very obvious that it means maximal performance, and one would not deactivate all checks for a release version in every case. --opt:speed relates to --opt:size; I guess that translates to -O3 or -Os for gcc. (My early tests with parallel processing were not that nice -- for code calculating the convex hull in 2D, the version with parallel/spawn statements was slower, but that was a bug related to an unnecessary data copy. It may be fixed already, and I understand that Nim is still work in progress...)
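For reference, a quick sketch of the relevant invocations (flag names as documented in the Nim compiler's user guide; exact optimization levels passed to the C backend may vary between versions):

```shell
# Default debug build: all runtime checks on, no optimization.
nim c program.nim

# Optimize for speed, but keep runtime checks and stack traces.
nim c --opt:speed program.nim

# Release build: defines the `release` symbol, disables runtime
# checks, and turns on the C compiler's optimizations (roughly -O3).
nim c -d:release program.nim

# Release build optimized for size instead (roughly -Os on gcc/clang).
nim c -d:release --opt:size program.nim
```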
Generally -- I have seen benchmarks where indeed C++ or D are some percent faster than Rust or Nim. But that really should be no surprise: much more manpower has gone into C++ and D already. And finally, the C backend may make a difference. For my tests the speed difference was not that great, but the size difference was a factor of two for clang vs gcc.
def: If you're using my bigints, they're well known to be slower than gmp
Curious if the worse efficiency here is related to GC overhead?
Curious if the worse efficiency here is related to GC overhead?
I reduced the GC overhead as much as I could with some tricks like TR macros. From what I can remember the reason for the slowness is that I implemented slower algorithms than GMP and have no assembler versions of the hotspots.
Might be a nice idea to make a high level GMP wrapper.
Russel: So optimizing for speed doesn't give you speed, that seems a bit like violating Sale of Good Act
It does optimize for speed. However, it does not eliminate safety checks or remove debugging information. That is a separate option because "I am willing to be exposed to buffer overflows" or "I do not need meaningful stack traces in case of an error" is something different from "please make this code as fast as you can given the other constraints I have specified".
def: From what I can remember the reason for the slowness is that I implemented slower algorithms than GMP and have no assembler versions of the hotspots.
That's pretty much it. It's not just the hand-written assembly: a ton of highly non-trivial math goes into the GMP/MPIR algorithms.
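To give a flavour of the kind of algorithmic work involved (this is not GMP's actual code, just an illustration): schoolbook multiplication of two n-digit numbers costs O(n^2), while Karatsuba's trick brings that down to roughly O(n^1.585), and GMP layers several such algorithms (Karatsuba, Toom-Cook, FFT-based) with carefully tuned crossover thresholds. A toy sketch of the Karatsuba idea in Python:

```python
def karatsuba(x: int, y: int) -> int:
    """Toy Karatsuba multiplication: 3 recursive multiplies instead of 4."""
    if x < 1024 or y < 1024:                  # small inputs: plain multiply
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> n, x & ((1 << n) - 1)   # split x = hi_x * 2^n + lo_x
    hi_y, lo_y = y >> n, y & ((1 << n) - 1)   # split y = hi_y * 2^n + lo_y
    a = karatsuba(hi_x, hi_y)                 # product of high parts
    b = karatsuba(lo_x, lo_y)                 # product of low parts
    c = karatsuba(hi_x + lo_x, hi_y + lo_y)   # product of the sums
    # (hi_x + lo_x)(hi_y + lo_y) = a + cross + b, so cross = c - a - b
    return (a << (2 * n)) + ((c - a - b) << n) + b

print(karatsuba(123456789, 987654321) == 123456789 * 987654321)  # True
```

Real bignum libraries only switch to such schemes above tuned size thresholds, because the extra additions and splits make them slower than the schoolbook method on small operands.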
Source: I share an office with one of the MPIR developers.