From what I understand, neither Nim nor Pony has a stop-the-world GC (generalizing over the multiple Nim GC options); however, Pony's GC is a lot more conservative in what gets collected, which works better in the concurrency-heavy code it is meant for. That doesn't mean Nim doesn't account for concurrent code in its memory management schemes, though. Both languages seem to have good performance in their memory management, at least based on Pony's claims. Nim also allows globals and the like that are disallowed by Pony's design, and the garbage collector has to deal with them as well.
More about Nim memory management and GC options here. ORC is planned to become the default over refc in the future, there are plenty of articles on the Nim blog etc that go into detail about ARC/ORC and their benefits.
Nim has multiple different GCs, but here I refer only to --gc:orc:
ORC's focus is on latency, memory consumption, and interop with C/C++ and custom memory management; it frees memory "immediately" since it is based on reference counting. The GC is thread-local, but the memory allocator is not, so sending an object to a different thread can be achieved without copy operations.
Pony's GC was designed for Pony's actor system; it is "actor-local", but afaik the memory allocator is shared, so sending an object to a different thread can be achieved without copy operations. Pony's ORCA updates reference counts via message passing and is much more complex. I'm personally not convinced this complexity is worth it; if it simply used atomic reference counts instead, it could use the hardware's message passing subsystem directly. I've never benchmarked it, though. Apart from that, it seems very well designed. Since reclamation is not "immediate", it is very likely that it uses more memory (a factor of 2) than a pure (atomic or not) RC solution would; the memory overhead also seems to depend on the number of GC messages that actors have to process. The report at https://www.imperial.ac.uk/media/imperial-college/faculty-of-engineering/computing/public/1718-ug-projects/Daniel-Slocombe-Reliable-Garbage-Collection-in-Pony.pdf contains no information about the memory overhead.
My guess: ORCA's throughput is better than ORC's, latency might be better or worse, and memory consumption is worse.
In summary: Both are awesome, somebody needs to run some benchmarks. (And benchmarking is hard.) :-)
@araq, what do you mean by "immediate"? I went through the memory management page (https://nim-lang.org/docs/gc.html) shared by @hlaaftana and it did not really answer that question. The ARC section says: "deterministic performance for hard realtime systems", but does not explain what "deterministic" means. Is it like C++ with unique and shared pointers? When does the deallocation happen? Exactly at the moment that the last reference goes out of scope (e.g. when a function returns)? Is ARC able to remove the RC overhead when the compiler can tell that a variable is referenced only once (e.g. when it is a local variable)? Those are the kinds of details I'm interested in when deciding whether I could use ARC for, say, an embedded application.
BTW, I think that document would benefit from reordering the list of memory management options from most recommended to least recommended. Maybe it should keep the current default option first for now, but after that it should list orc, arc and then maybe none, go, boehm, etc. It should also clearly indicate the ones that are no longer recommended and might even be deprecated in the future...
Additionally, the table that compares all the options should also be reordered, and I would suggest adding a "Recommended for" column that clearly spells out when each option should be used (and in a few of the cases probably just say that it should never be used).
I would also rename it from gc.html to memory_management.html or something like that (I really think that de-emphasizing the words "garbage collection" is important for the kind of people that are likely to read that particular document).
Is it like C++ with unique and shared pointers? When does the deallocation happen? Exactly at the moment that the last reference goes out of scope (e.g. when a function returns)?
With --gc:arc, yes, exactly at that moment. With --gc:orc too if the type is "acyclic".
Is ARC able to remove the RC overhead when the compiler can tell that a variable is referenced only once (e.g. when it is a local variable)?
Pretty much, yes: RC operations are elided. You can always ask the compiler via --expandArc:functionName what it did to your code. The compiler does not turn heap allocations into stack allocations, however. (This might not be desirable anyway, given that stack space is often limited.)
I reiterate, BTW: I think that document would benefit from reordering the list of memory management options from most recommended to least recommended (keeping the current default first for now, then orc, arc, and then maybe none, go, boehm, etc.), reordering the comparison table to match, adding a "Recommended for" column, and clearly flagging the options that are no longer recommended and might even be deprecated in the future.
Pony
Interesting language, it is so clean. Shame it is not under active development.