When I first moved to Nim, it offered automatic memory management at near-C speed, or something like that, IIRC. Now it seems Nim has moved from its old GC to ARC/ORC, {.gcsafe.} was added, and we're talking about a (cleaner) version of Rust's borrow checking and lifetimes, which might be good, but is not "hands/brain-free AutoMem mode". I also see the nim-lang home page now says "deterministic and customizable with destructors and move semantics, inspired by C++ and Rust" instead of "Auto memory management with almost no performance penalty", or similar.
Is this no longer a goal? If not, what are the tradeoffs being made here? Better compile speed and easier, more productive Nim language development, with users who enjoy safety-ensured manual memory management still happy, while users who want AutoMem For Dummies can go elsewhere and likely pay the performance/predictability penalty?
Thanks
but is not "hands/brain-free AutoMem mode".
There is no such mode. Not even in Python:
d = {"a": 1, "b": 2, "c": 3}
for key in d:
    print("Visiting:", key)
    if key == "b":
        # Raises: RuntimeError: dictionary changed size during iteration
        del d["a"]
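For completeness, a minimal sketch of the usual workaround (assuming CPython 3 semantics): snapshot the keys before mutating, so the loop no longer touches the live dict's iterator:

```python
d = {"a": 1, "b": 2, "c": 3}

# Iterate over a snapshot of the keys; deleting from d is now safe
# because the loop walks the list copy, not the dict itself.
for key in list(d):
    print("Visiting:", key)
    if key == "b":
        del d["a"]

print(d)  # {'b': 2, 'c': 3}
```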
and users who want AutoMem For Dummies can go elsewhere, and likely pay the performance/predictability penalty?
Again, there is no such mode.
We want to detect "resize during iteration" and fix a couple of loopholes in our "safe" systems programming language. It's a clean evolution of the language, and the mechanism does not work like Rust's, but of course, no matter how often I say this, somebody will be afraid anyway. C# has a borrow-checking mechanism and most C# programmers don't even know it. That's what I'm after, so relax.
From what I understood from Araq's reply on the topic, and the idea being taken forward in Nimony, this will be an opt-in add-on to ARC/ORC for explicit compile-time safety. ORC is already neat and highly efficient. The new proposal is much better than how Rust has gone about it: it is meant to prove memory safety for deterministic bare-metal code, letting you tell the compiler that a block of code will not result in memory errors or races, with the guarantee checked at compile time. That's useful for auditability and for enforcing safety guarantees in hard real-time systems. It will not replace ARC/ORC; one can incrementally add "provable" compile-time safety. That's my read on it.
Regarding what's said on the website, it's basically saying Nim has better RAII and move semantics than what C++ and Rust offer, along with very high-performance reference counting.
I feel pretty relaxed already :), but maybe my explanation was more confusing than helpful. I was trying to get across a general idea in a self-deprecating, mildly humorous way. The idea was "ORC with no other pragmas/new semantics = AutoMem". If it can't currently, or ever, do certain things/make certain guarantees, that may be a different topic/OK, assuming it's still capable enough to be the default, supported way to write a typical Nim program.
What prompted this post:
P.S. I would also like it if the Nim compiler could catch most runtime errors, like your Python example, during compilation (if it can't already), or at least issue a special "possible runtime error" warning, but maybe that's not possible without new semantics, IDK.
Thanks
Nerd-sniped me again...
Yeah, it would be nice if "no mm flag specified" used ORC when threads are not used, and atomicArc + YRC when threads are used, for instance: --mm:auto as the default. Re: #3, sounds good; whatever it takes. I just always ask "why can't the compiler figure this out without me?" whenever I'm asked to do anything (lol), especially telling the compiler about memory management in code, which I'm sometimes not sure about.
While mostly over my head, YRC looks cool! I talked with The AI about YRC and other mm stuff to understand all of this better. Thanks for your responses and your work. Looking forward to Nim 3.
influenced via type annotations,
Indeed, this remains the most desired option, i.e. not a "global" flag or memory manager that forces atomics on every ref operation, but rather something that can be controlled locally. The vast majority of ref operations in efficient threaded programs do not require atomics: instead, you either share read-only data (no ref ops) or transfer a singly owned graph of objects to another thread, which then becomes the owner.
Shared mutable access that changes (a graph of) objects typically requires synchronization outside of the refcount itself, so it shouldn't really be targeted by the runtime.
those of us writing for performance and threading really don't want forced atomics (even lock free) sprinkled throughout the program.
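The "transfer a singly owned graph" discipline described above can be sketched in Python (whose refcounts are managed by the interpreter under the GIL, so this only illustrates the ownership-transfer pattern, not Nim's ARC internals): the producer hands the object graph off through a queue and keeps no reference, so exactly one thread owns the data at any time and no atomic refcounting on the graph itself would be needed.

```python
import queue
import threading

handoff = queue.Queue()

def producer():
    graph = {"root": [1, 2, 3]}   # singly owned object graph
    handoff.put(graph)
    # producer keeps no reference past this point;
    # the consumer is now the sole owner

def consumer(results):
    graph = handoff.get()         # ownership transferred here
    graph["root"].append(4)       # safe: sole owner may mutate freely
    results.append(graph)

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [{'root': [1, 2, 3, 4]}]
```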
those of us writing for performance and threading really don't want forced atomics (even lock free) sprinkled throughout the program.
I get that forcing refs to be atomic will degrade the performance of existing single-threaded code for no good reason. But if ref ops are used at a scale where this degradation is palpable, the code isn't written well for performance anyway, and if performance is of import, a rewrite is needed either way.
If you're serious about performance, you shouldn't be using refs anyway (or be very, very sparing with them).
That argument, unfortunately, always cuts both ways: If you are happy with ref you're happy not thinking too much about ownership and threading and so ref should remain thread-local, enforced by the compiler. The others can use UniquePtr/SharedPtr.
I agree, and I'd be content with such choice of ref handling too. I think
thread-local, enforced by the compiler
is important, as I've seen quite a few posts on this forum where people try to interact with a ref from another thread, get a segfault and become very confused (been there myself too).