It was announced a while back (here) that, with the appearance of orc and arc, most of the other memory management options will eventually be deprecated and removed. Personally I am mostly fine with this, since there is no difference between how something is written using refc and using arc/orc.
However there are gc:none and gc:regions, which are pretty unique and IMO can be useful sometimes.
Personally I can understand gc:none being removed as a memory management option: its inability to free allocated memory renders most of the stdlib useless, it's not very friendly, and it forces you to write rather ugly code that relies a lot on casting, etc. On the other hand, regions offers a cleaner, less ugly syntax that allows you to allocate and free memory manually in a rather nimish way (IMO), plus it can make use of the stdlib.
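To illustrate what that gc:none style means in practice, here is a minimal hand-rolled sketch (Node, newNode and freeList are my own illustrative names, not from any library):

type
  Node = object
    value: int
    next: ptr Node

proc newNode(value: int): ptr Node =
  # raw allocation plus a cast; nothing is ever freed for you
  result = cast[ptr Node](alloc0(sizeof(Node)))
  result.value = value

proc freeList(head: ptr Node) =
  # every allocation must be paired with a manual dealloc
  var cur = head
  while cur != nil:
    let nxt = cur.next
    dealloc(cur)
    cur = nxt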
Regions has never been a popular option, maybe because it's mentioned almost nowhere and its only documentation is its source code. And even though arc is better suited for most use cases, regions can be useful in those situations where you need every bit of performance you can get (without sacrificing 80% of the stdlib like gc:none forces you to do).
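For reference, this is roughly what usage looks like; a minimal sketch assuming the withScratchRegion template from lib/system/gc_regions.nim (that source being, as said, the only documentation):

# compile with: nim c --gc:regions example.nim
proc buildReport(): int =
  var lines: seq[string]   # stdlib types keep working, unlike under gc:none
  for i in 1 .. 100:
    lines.add "entry " & $i
  result = lines.len

withScratchRegion:
  echo buildReport()
# everything allocated inside the block has been freed wholesale at this point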
To sum up, I'd like the idea of removing --gc:none and especially --gc:regions to be reconsidered, and I would also like to hear other reasons for and against this decision.
Well, the most expensive GC mode to maintain is --gc:refc, but the RFC is about unifying the ecosystem and not so much about cutting maintenance costs for us.
That said, neither --gc:none nor --gc:regions is a convincing switch for me; I see them as failed experiments and ARC as the winner. We can support them for the next 20 years if you really need them, but IMO it's fair to first demand good arguments for keeping them.
On Reddit someone posted this comment, which asks about controlling when, say, a large nested object gets destroyed. With {a,o}rc it seems we're giving up the potential for this fine-grained control. Is this a concern? How could this be mitigated?
i.e.
proc fast_please(n: int): int =
  let x = newBigTree(n)
  result = doSomething(x)
would inject an expensive =destroy which we might prefer to defer until some other point in the code.
I suppose we need to, as always, be conscious of what's going on behind the scenes, and structure the code so those trees won't get destroyed, maybe
proc fast_please(n: int): (int, Isolated[BigTree])
or however that "lifetime annotation" works.
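For example, a rough sketch of the idea without the isolation machinery, filling in toy definitions for BigTree, newBigTree and doSomething: returning the tree alongside the result keeps it alive, so the expensive =destroy runs wherever the caller lets it go out of scope (with std/isolation's isolate it could then also be moved to another thread):

type BigTree = ref object
  left, right: BigTree
  value: int

proc newBigTree(n: int): BigTree =
  if n <= 0: return nil
  BigTree(value: n, left: newBigTree(n - 1), right: newBigTree(n - 1))

proc doSomething(x: BigTree): int =
  if x == nil: 0 else: x.value

proc fast_please(n: int): (int, BigTree) =
  let x = newBigTree(n)
  # returning x defers the recursive =destroy to a point the caller picks
  (doSomething(x), x)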
but IMO it's fair to first demand good arguments for keeping them.

It's totally fair. I feel it wasn't very clear in my first message, so I will enumerate the arguments I feel are the most important ones. (As I said, I am mostly concerned about regions and think gc:none has some strong reasons to be deprecated, but some of these arguments are also valid for it.)
(I'll probably add more later, I can't right now.)
It is the most performant option, and no matter how fast arc/orc get, they will still be slower
I haven't seen this in benchmarks. Got one to share where it's true?
How could this be mitigated?
Give typeof(newBigTree(n)) a custom destructor or move x to a different thread / container.
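A minimal sketch of the destructor route, with a hypothetical BigTree whose payload gets parked instead of freed eagerly (graveyard and collectGraveyard are illustrative names of mine):

type BigTree = object
  nodes: seq[int]          # stands in for the expensive-to-free payload

var graveyard: seq[seq[int]]

proc `=destroy`(t: var BigTree) =
  # make destruction cheap: park the payload instead of freeing it now
  if t.nodes.len > 0:
    graveyard.add move(t.nodes)

proc collectGraveyard() =
  # pay the real freeing cost here, at a moment of your choosing
  graveyard.setLen 0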
adds debug checks so that you don't create invalid cross-region pointers...
Sounds like a return of the long-gone memory regions feature? I didn't follow Nim closely enough at the time to know why it was phased out, though.
In fact, this thing looks like a poor man's copying GC. I have the feeling that a copying GC with "region and collection hints" would do just as well in benchmarks while being much simpler to use (hints do not affect memory safety) and easier to scale to bigger programs.
I was actually wondering about the "memory regions" feature in the type system:
type P = (ptr object) ptr Region
AFAICT it was removed because no one found it useful?
--gc:regions was "phased out". Or better put: "wasn't developed further".
I still rely on --gc:regions for certain things, where neither the default gc nor --gc:arc/orc work.