I'm curious how safe Nim is.
I know that Nim supports both automatic memory management (AMM) and pointers, and I wonder how safe each is compared to Zig and Rust.
As far as I know, AMM should be safe except for concurrency, right? Since only a borrow checker can prevent concurrency bugs?
And how safe is Nim with AMM disabled with pointers? Is it no safer than C or C++?
Note that I've never worked with pointers in my life, but I'm still curious about security.
I think Nim should go forward with adding safety features for pointer management, like Zig's compile-time and runtime checks, and after that start branding itself as a safe language. This would surely make it more popular, especially since there are very few safe low-level languages.
Heyho, I appreciate you joining the debate on general feature design and more.
However, for specific topics that have very likely been discussed before, such as exceptions, it's generally advisable to search a bit first and look at the rationale for the design.
Otherwise you risk being the 50th person to bring up the point, unaware of the prior discussions around it, which can wear on the patience of the people likely to respond to this kind of topic.
By safety & security I mean avoiding the usual C and C++ exploits. Both Zig and Rust make strides toward making it harder for a developer to shoot themselves in the foot.
Also, I mean modern language design features that make it harder to write bugs.
Nim is memory safe. Nim also has a borrow checker. Its exceptions are statically tracked, making them on par with almighty Rust.
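For example, a minimal sketch of that static exception tracking (the proc names are invented; only parseInt is from the stdlib):

    import std/strutils

    # {.raises.} declares which exceptions a proc may raise; the compiler
    # checks it, so an unhandled ValueError would be a compile-time error.
    proc parsePort(s: string): int {.raises: [ValueError].} =
      parseInt(s)

    proc portOrDefault(s: string): int {.raises: [].} =
      # This proc promises to raise nothing, so it must handle ValueError.
      try:
        result = parsePort(s)
      except ValueError:
        result = 8080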
Feel free to read our documentation and maybe even real books.
If you use Nim with its default memory management system, you get Rust-style borrow checking with little effort, and you also get cycle collection, which Rust doesn't have, meaning you're less likely to leak memory as a novice. If you stick to managed ref types instead of raw pointers and don't do concurrency, Nim is as safe as Rust.
If you throw concurrency in the mix without properly synchronizing things, you can expect crashes. I don't see Nim as a "fearless concurrency" language.
> And how safe is Nim with AMM disabled with pointers? Is it no safer than C or C++?
You're going to leak if you use the stdlib with memory management disabled, since it isn't designed to be used without any management. But you can use ARC as your memory manager, which is fully static, has no special runtime cost, and works basically just like Rust's. So you don't really need to disable memory management to begin with; it's a bad idea.
Closer to your implied question: yes, Nim has few safety advantages over C if you use only unmanaged pointers, although its type system still keeps you safer than C because you can use generics instead of void*. You still have the freedom to use untyped pointers if you want, though.
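For reference, picking ARC is just a compiler switch (spelled --gc:arc on older releases):

    nim c --mm:arc program.nim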
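A tiny sketch of that generics point (the proc name is invented): where C would typically reach for void* and lose the type, the Nim instantiation stays fully typed and checked.

    proc swapValues[T](a, b: var T) =
      # T is resolved at compile time; no casts, no void*.
      let tmp = a
      a = b
      b = tmp

    var x = 1
    var y = 2
    swapValues(x, y)          # instantiated as swapValues[int]
    doAssert x == 2 and y == 1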
Nim has no unsafe keyword, so you can introduce unsafe features at any point without signalling to the compiler that you're doing so. Unsafe features here being ptr types (as opposed to ref types), the emit pragma (used for outputting C and asm), and UncheckedArray. The lack of guardrails around these features does invite more use of them, and while I can't empirically say that it directly leads to more crashes, brazen use of them does seem to contribute.
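As a small illustration (the Buffer type is invented), ptr and UncheckedArray compile with no unsafe block or keyword marking the code as dangerous:

    type Buffer = object
      len: int
      data: ptr UncheckedArray[byte]

    proc readByte(b: Buffer, i: int): byte =
      # No bounds check on UncheckedArray: reading past b.len is undefined
      # behaviour, and nothing in the signature warns the caller.
      b.data[i]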
> Nim uses exceptions. Exceptions are less reliable compared to modern error catching methods.
I don't like exceptions either, but they are certainly reliable. You can reliably catch them, and they don't act in a way that would surprise anyone. I've never heard anyone call Java's error handling unreliable, and it's known for using exceptions.
> explicit control over how memory is allocated by passing in allocators (which Nim does not have).
You can certainly write libraries against an allocator type and pass it around everywhere explicitly; that doesn't require a new language...
It's just that the Nim stdlib does not do it and the idea is bullshit.
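For reference, a rough sketch of that "pass an allocator explicitly" style (all names are invented; this is not a stdlib API):

    type Allocator = object
      allocImpl: proc (size: int): pointer
      deallocImpl: proc (p: pointer)

    let mallocator = Allocator(
      allocImpl: proc (size: int): pointer = alloc(size),
      deallocImpl: proc (p: pointer) = dealloc(p))

    # A "library" proc written against the allocator, which has to be
    # threaded through every call site explicitly.
    proc makeScratch(a: Allocator, size: int): pointer =
      a.allocImpl(size)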
> You can certainly write libraries against an allocator type and pass it around everywhere explicitly; that doesn't require a new language...
> It's just that the Nim stdlib does not do it and the idea is bullshit.
Regarding allocators: as with most things, I believe this depends on the context, i.e. what software you're building, how frequently the code that may perform allocations/deallocs will be called, how long the program will run for, how much time you have to optimize the code, etc.
Different allocation strategies can lead to performance benefits and sometimes simplify the resulting code.
In your own words, "optimization is specialization" (I assume you wrote this point in "Zen of Nim"): using a different allocation pattern can be seen as a form of this. Addressing your argument: in this sort of situation, it's probably better to roll your own code than to try to use the stdlib with custom allocation strategies.
I recommend this talk for a concrete example on the benefits of allocation strategies (and to support your point that "optimization is specialization"): https://vimeo.com/644068002
Is it worth adding allocator support to the stdlib, e.g. as a template argument or as a global define? Maybe. It does seem like quite a bit of work that might be better done as a separate third-party library.
The idea is bullshit for Nim's standard library, and in general too, for lots of reasons: premature optimization, code complexity, verbosity, and lots of error-prone code. Mixing allocators is both harder to do than you would think and at the same time often wastes efficiency.
For example, when you do runtime allocator polymorphism, every seq and string in the entire program may need an additional 8 bytes so that they can store a pointer back to the allocator, so that these data structures can actually grow...
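Purely to illustrate that cost (this type is invented, not how Nim's seq is actually laid out):

    type PolySeq[T] = object
      len, cap: int
      data: ptr UncheckedArray[T]
      allocator: pointer   # one extra word per instance, paid program-wide,
                           # just so grow/free can find the right allocator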
> I recommend this talk for a concrete example on the benefits of allocation strategies
It doesn't seem to contain anything new to me.
Custom allocators do not and must not dictate program organization. Here is a list of 3D engines I studied:
More or less they all do the same: they provide custom containers where the allocator is a template parameter. Fair enough. And then they rarely use it anywhere, because these things are viral and would blow up the entire codebase with allocator parameters everywhere if used too frequently. These are sane, well-optimized codebases with requirements such as "must allocate this buffer on the GPU". Yet the allocator stuff is lost in the noise, and that's how it should be.
Those are fair points for the standard library. The only remaining argument I can see for custom allocators is that specific systems need specific allocators, e.g. Fujifilm camera firmware pre-allocates large arrays for the whole system memory (see https://wiki.fujihack.org/rtos/#memory-management); note that this project is in its infancy.
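A hedged sketch of the "viral" part (all names invented): once the allocator is a generic parameter of the container, every proc touching the container carries it in its signature too, even procs that never allocate through it.

    type Vec[T, A] = object
      data: seq[T]          # stand-in storage; A is the allocator type

    proc addVertex[T, A](v: var Vec[T, A], x: T) =
      v.data.add x

    proc mergeMeshes[T, A](a, b: Vec[T, A]): Vec[T, A] =
      # Never allocates through A directly, yet still mentions it.
      result.data = a.data & b.data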
On performance, it makes sense to roll with your own thing for your specific use case, e.g. a linear allocator/arena for writing scratch memory per frame in a game/interactive application for draw commands, logging, etc.
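A minimal sketch of such a linear/arena allocator (the names and layout are made up, not from any library):

    type Arena = object
      buf: seq[byte]
      used: int

    proc initArena(size: int): Arena =
      Arena(buf: newSeq[byte](size), used: 0)

    proc allocBytes(a: var Arena, size: int): pointer =
      # Bump-allocate from the preallocated buffer; no per-allocation free.
      doAssert size > 0 and a.used + size <= a.buf.len
      result = addr a.buf[a.used]
      a.used += size

    proc reset(a: var Arena) =
      # Reclaim everything at once, e.g. at the end of a frame.
      a.used = 0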
> For example, when you do runtime allocator polymorphism, every seq and string in the entire program may need an additional 8 bytes so that they can store a pointer back to the allocator, so that these data structures can actually grow...
Honestly, I don't know why you would want to do this anyway. Is this for a stable ABI? To mix allocator types?
> On performance, it makes sense to roll with your own thing for your specific use case ...
Exactly. So the stdlib does not need to support it. It would only get it wrong anyway and at the same time lure too many people into these poor viral designs.
> pools and arenas are the main use cases of custom allocators.
Sure, and they already work fine without stdlib support.
And I'm saying that as probably the top zero-alloc / custom-alloc dev in Nim (custom memory pool in Weave for multithreading, zero alloc in Constantine for most cryptography).
What matters is having escape hatches. You have them in:
- your own types: you can use ptr object
- useMalloc: you can swap the malloc/free impl for the one of your choice, even for stdlib types
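A small sketch of that escape hatch (Node and the proc names are invented): a plain ptr object managed by hand with create/dealloc, while compiling with -d:useMalloc additionally routes Nim's allocator through malloc/free.

    type Node = object
      value: int

    proc newNode(v: int): ptr Node =
      result = create(Node)   # manual allocation, not touched by the GC
      result.value = v

    proc freeNode(n: ptr Node) =
      dealloc(n)              # the caller is responsible for calling this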
Nim's allocation flexibility is great; my only issue is closures & closure iterators. I'd like them not to be ref by default, with ref being a choice (or a compiler decision based on escape analysis).
> Those are fair points for the standard library. The only remaining argument I can see for custom allocators is that specific systems need specific allocators, e.g. Fujifilm camera firmware pre-allocates large arrays for the whole system memory (see https://wiki.fujihack.org/rtos/#memory-management); note that this project is in its infancy. But at this point, it's probably better to support this in the compiler's memory library?
For these cases that require special handling, just use smart pointers. They're really pretty simple to use.
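A minimal sketch of such a smart pointer using Nim's destructor hooks (UniquePtr and its layout are invented here, not a stdlib type):

    type UniquePtr[T] = object
      p: ptr T

    # Forbid copies so ownership stays unique.
    proc `=copy`[T](dst: var UniquePtr[T], src: UniquePtr[T]) {.error.}

    proc `=destroy`[T](u: var UniquePtr[T]) =
      # Runs automatically when the owner goes out of scope; this simple
      # version assumes T itself needs no nested destructor call.
      if u.p != nil:
        dealloc(u.p)

    proc newUniquePtr[T](x: sink T): UniquePtr[T] =
      result.p = create(T)
      result.p[] = x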
Passing data to a library can be a bit painful if it's not designed for it, though. However, many libraries support generic types like openArray[byte] or similar, so you can use various backing array types. Ideally the library would use generics or concepts to let you pass more complex types, but again it depends on the library.
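For example, one proc written against openArray[byte] accepts several backing types (checksum is just an invented toy):

    proc checksum(data: openArray[byte]): uint32 =
      for b in data:
        result = result * 31 + uint32(b)

    let s = @[1'u8, 2, 3]                  # seq[byte]
    var a = [4'u8, 5, 6, 7]                # array[4, byte]
    discard checksum(s)
    discard checksum(a)
    discard checksum(a.toOpenArray(1, 2))  # a slice of the array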