First of all Merry Christmas! :)
After months of exploring Nim, I realized this is the one for me - meaning I am here to stay - so I thought I might as well say hi!
I'm still studying CS at university, so my skills are lacking, but I am eagerly covering ground every day. Programming languages fascinate me as a topic; I hope I can contribute to Nim soon!
- destructors
- incremental compilation
- case object
- comment field
(2) What is Nim's main target audience?
Nim is one of the (rather rare) non-opinionated languages. It's cross-platform, focuses on ergonomics, and it can do low-level work. These features probably make it capable of handling any programming task. While this is great, I wonder whether there is some notable direction it's heading in because of the community's preferences and interests.
(3) Concurrency - please guide my thought process; I am totally lost! >_<
- What is the programming language's task when it wants to support concurrency (since a lot of this is the OS's responsibility)?
- Does using a VM unlock possibilities for different concurrency strategies, while compiled languages have fewer options?
- What is Nim's state compared to mainstream solutions (Elixir, Go, Rust)?
(3) 1. Tough question. Ultimately the existing PL/OS split is an unfortunate legacy, and concurrency belongs in the language.
Hi! Please don't be afraid to ask "stupid" questions! It's totally ok. Luckily, the Nim community is not a bunch of programming snobs who will roll their eyes when asked questions.
Also, I have no idea about what the connection between destructors and concurrency is, so I'm eager to learn that too :-)
I have no idea about what the connection between destructors and concurrency is,
Currently, passing data between threads is restricted (channels, unsafe pointers, ...), as we have a thread-local garbage collector. I think once destructors fully work, they can replace GC'd ref objects in many places, so multithreading and parallel data processing with fast exchange of large data packages will improve.
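To make the current model concrete, here is a minimal sketch (my own example, assuming Nim 1.x compiled with `--threads:on`) of handing a value to another thread over a `Channel`; because each thread owns a separate GC'd heap, the message is deep-copied on the way through:

```nim
# Today's model: separate heaps per thread, so the Channel copies the message.
var chan: Channel[string]

proc worker() {.thread.} =
  let msg = chan.recv()          # blocks until a message arrives
  echo "worker got: ", msg

var t: Thread[void]
chan.open()
createThread(t, worker)
chan.send("hello from main")     # the string is deep-copied into the channel
joinThread(t)
chan.close()
```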
So... is this accurate?
Today: Each thread has its own memory heap and GC. This eliminates pauses and race conditions but impairs efficiency, because communication between threads is less nimble (threads can only exchange messages, not share other data).
Tomorrow: ARC will allow a shared-memory model implementation. This will improve data locality significantly and eliminate the need for message passing, all while retaining safety.
Yeah, that's one way of putting it, but whether we change the default from today's model entirely to a shared heap architecture remains to be seen.
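As a rough illustration of the direction (my own sketch, not an official threading API): with destructors/ARC (`--gc:arc`), ownership of heap data can be handed over with `sink` parameters and `move` instead of being deep-copied, which is the building block for cheap data exchange between threads:

```nim
# Ownership transfer under ARC: a pointer handoff instead of a deep copy.
proc consume(data: sink seq[int]) =
  # `data` is owned here and freed deterministically when the proc returns.
  echo "got ", data.len, " items"

proc producer() =
  var big = newSeq[int](1_000_000)
  consume(move big)              # no copy of the million elements

producer()
```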
Does this mean that Nim can achieve shared memory with perfect safety and convenience?
Unlikely, but we are in good shape: we can statically prevent deadlocks and data races.
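For anyone wondering what "statically prevent" looks like for data races, here is a rough sketch (my example, based on the `guard` pragma described in the manual): a variable tied to a lock may only be touched inside a section that the compiler can see holds that lock:

```nim
# Static race prevention: `counter` is only accessible while `glock` is held.
import std/locks

var
  glock: Lock
  counter {.guard: glock.}: int

initLock(glock)

proc increment() =
  acquire(glock)
  {.locks: [glock].}:
    inc counter                  # OK: the compiler knows `glock` is held here
  release(glock)
  # inc counter                  # out here this would be a compile-time error

increment()
```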