I remember watching a presentation by Araq, maybe pre-covid, where he compared the performance of creating a tree manually vs automatically, and manual memory management was about twice as fast as automatic. I was curious how that performance gap might have changed since then.
I tried searching for the source code of the benchmark but couldn't find it.
The reason I ask: my understanding from recent discussions here is that one should use ARC unless absolutely sure that it isn't enough.
Does that mean the runtime got much faster since then, or is this data structure an Achilles' heel for automatic memory management?
Benchmarks can be found here: https://github.com/Araq/fosdem2020
My educated guess (which could be totally wrong!) is that the numbers didn't change much since then.
Regardless of the numbers, my advice is always the same: use custom containers.
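To make that concrete, here is a minimal sketch of one way a custom container could look for this benchmark: tree nodes live in a seq-backed pool and are addressed by index, so there is one allocation for the whole tree and the GC never traverses nodes. All names here are mine, not taken from the fosdem2020 code.

```nim
type
  NodeId = int32          # 0 acts as the "no child" sentinel
  Node = object
    left, right: NodeId
  Pool = object
    nodes: seq[Node]      # all nodes of one tree live here

proc newNode(p: var Pool; left, right: NodeId): NodeId =
  p.nodes.add Node(left: left, right: right)
  NodeId(p.nodes.len)     # ids are 1-based so that 0 stays the sentinel

proc check(p: Pool; id: NodeId): int =
  ## Counts nodes, like the check pass in the binary-trees benchmark.
  if id == 0: return 0
  let n = p.nodes[id - 1]
  1 + p.check(n.left) + p.check(n.right)

proc bottomUp(p: var Pool; depth: int): NodeId =
  ## Builds a complete binary tree of the given depth.
  if depth <= 0:
    p.newNode(0, 0)
  else:
    let l = p.bottomUp(depth - 1)
    let r = p.bottomUp(depth - 1)
    p.newNode(l, r)

when isMainModule:
  var p = Pool()
  let root = p.bottomUp(4)
  echo p.check(root)      # a depth-4 tree has 2^5 - 1 = 31 nodes
```

Dropping the whole pool frees every node at once; nothing is destroyed per node, which is where the speedup over ref-based trees comes from.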
FWIW, here are my results on Nim 2.2.2, Windows 11 23H2, i7-4790K, with -d:danger, threads, orc, using 21 as the argument as per the readme,
using hyperfine, 10 runs:

manual: 38.532 s ± 0.374 s
pools: 16.484 s ± 0.331 s
gcs: SIGSEGV after the first line of output
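For anyone reproducing, the commands might look roughly like this. The flags are inferred from the description above, not copied from the repo's readme, so double-check against it:

```shell
# build one of the benchmark variants (flags assumed from the post)
nim c -d:danger --threads:on --mm:orc bintrees_manual.nim

# time it with hyperfine, 10 runs, tree depth 21
hyperfine --runs 10 './bintrees_manual 21'
```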
You need to remove the line
`=destroy`(tmp)
for the GC tests. (This code was written before we had scope-based destruction and is no longer correct with an explicit destroy.)
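To illustrate what changed: with scope-based destruction the compiler already injects `=destroy`(tmp) at the end of tmp's scope, so an explicit call left over from older code runs the destructor twice. A minimal sketch under Nim 2.x (the type and proc names are mine, not from the benchmark):

```nim
type Res = object
  p: pointer

proc `=destroy`(r: Res) =       # Nim 2.x destructor signature
  if r.p != nil:
    dealloc(r.p)

proc fixedVersion =
  var tmp = Res(p: alloc(16))
  discard tmp.p
  # no explicit `=destroy`(tmp): the compiler-injected destroy at
  # scope exit frees `p` exactly once

proc explicitVersion =
  var tmp = Res(p: alloc(16))
  `=destroy`(tmp)
  wasMoved(tmp)   # resets tmp so the injected destroy becomes a no-op

when isMainModule:
  fixedVersion()
  explicitVersion()
```

Without the `wasMoved` call, `explicitVersion` would free the same pointer twice, which is the kind of corruption behind the SIGSEGV above.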
Here is gcs again, without -d:danger:
```
stretch tree of depth 22 check:8388607
Traceback (most recent call last)
~\Desktop\nim\tree\fosdem2020\bintrees_gcs.nim(39) bintrees_gcs
~\Desktop\nim\tree\fosdem2020\bintrees_gcs.nim(33) main
~\.choosenim\toolchains\nim-2.2.2\lib\system\arc.nim(188) nimRawDispose
~\.choosenim\toolchains\nim-2.2.2\lib\system\alloc.nim(1165) dealloc
~\.choosenim\toolchains\nim-2.2.2\lib\system\alloc.nim(1052) rawDealloc
~\.choosenim\toolchains\nim-2.2.2\lib\system\alloc.nim(815) addToSharedFreeList
```