Hey. I think the title says it all, but I will give a bit of background.
I'm a student who loves physics and coding. I mainly code in Python and some JavaScript, and I'm looking for a new statically typed language. My topics of interest are data science, music, art, old games (anything up to the GBA, especially roguelikes), physics, and reading sci-fi.
Are there any benefits to learning Nim over languages like Rust, C, C++, TypeScript...?
One big advantage w.r.t. Python (and other languages) is the ease of using multiple cores for parallel programming. Furthermore, Nim replaces C(++) macros and C++ templates with much easier-to-grasp concepts. And it's easy to wrap existing libraries in C (and probably Fortran for scientific computing). I believe it is much more suitable for programming embedded devices -- search this forum.
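To give a feel for the template point, here is a minimal sketch (plain Nim, nothing beyond the standard language assumed):

template square(x: untyped): untyped = x * x

echo square(3 + 1)  # prints 16: the argument is substituted as an AST node, so no C-macro-style surprises

Because the substitution happens on the syntax tree rather than on text, you don't need the defensive parentheses that C macros require.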
Furthermore, there is a very helpful community.
I hope Nim will survive. In the past 50 years there have been quite a number of excellent programming languages, but only those promoted by big companies survived, and those weren't the best ones in all cases.
I'll talk a little bit about how Nim compares with the languages I have used to various extents:
1. Python
Nim shares a very similar syntax with Python, which IMO is a plus. Both are very easy to prototype in. Both give you a fair amount of OOP support (which I think is a good thing, but others may disagree). It is possible to write performance-critical and production-ready software in PURE Nim; Python can only do so via extension modules written in other languages for performance-critical paths. However, the popularity and adoption of the two languages are night and day. If you have a Python question, chances are there are three different answers on Stack Overflow and a pip library that already solves your problem. With Nim, you had better be prepared to come up with your own solution. Also, Python being interpreted makes debugging and prototyping easier there.
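To illustrate the syntax overlap, a throwaway sketch (standard Nim only):

proc fib(n: int): int =
  ## reads almost like the Python version
  if n < 2:
    result = n
  else:
    result = fib(n - 1) + fib(n - 2)

echo fib(10)  # 55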
2. Rust
Nim is significantly easier to use than Rust IMO. Syntax aside (and I do believe Nim's is much better), Rust's main selling point (moves & the borrow checker) is a constant pain in the ass for new learners. I get the point of it, but when the borrow checker makes it impossible to code something as simple as a doubly linked list in safe Rust, maybe it's time to use a garbage collector. As a result I would not put Rust in the group of languages that can be used for prototyping, but Nim certainly is there. Rust does enjoy a somewhat bigger community, more public exposure, and what I believe is better IDE support (rust-analyzer is very good), more libraries, etc. I also like the very extensive trait support Rust employs for OOP (I believe Nim supports traits to a lesser extent -- correct me if I'm wrong), but to each their own.
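For comparison, a minimal sketch of a doubly linked node with Nim's traced refs (names are mine):

type
  Node = ref object
    value: int
    prev, next: Node

var a = Node(value: 1)
var b = Node(value: 2)
a.next = b
b.prev = a  # the a <-> b cycle is fine; Nim's cycle collector (ORC or the default GC) reclaims it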
3. Julia
Nim advantages: better code organization, statically typed, leaner, compiled, better FFI.
Nim disadvantages: slightly harder to use, TSCD.
Let's pivot and talk a bit about the gorilla in the room, Python, for a second first. For all the things it does right, Python has always been #@$$!@ slow in general. This means people either use a compiled extension that slots into a Python program (traditionally a C/C++ extension) for performance-critical tasks, or rewrite the whole chunk of Python in another language altogether. Julia is in a position where it's being touted by some people as a Python replacement/extension. As someone who is primarily a Python programmer and has been exploring other languages for quite a while for performance reasons, I can confidently say that Nim is a superior alternative to Julia for this purpose. Julia has a lot of design quirks. You can't compile the code and distribute just the compiled dll/so/whatever. Its OOP support is rudimentary. Its code-loading mechanism is limiting (putting everything at the first level, really?). It allows full dynamic typing, which makes it easier to use but also easier to make a mess and become just as slow as Python. Interop with Python using Juliapy in an enterprise setting (where you want to distribute everything as libraries and use CI/CD pipelines to control the publishing process) is also nowhere near the maturity that Nimpy/Nimporter provides for Nim. All in all, I think Nim is a better fit in a professional setting. I see the potential in Julia, but it feels like a lot of its designs are "by scientists, for scientists". As a former scientist in academia, I can tell you that attitude often translates to "by scientists, for scientists only", and it shows.
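For reference, the Nim side of that nimpy interop looks roughly like this (module and proc names are made up; the build line is a sketch and may vary by platform):

# mymath.nim -- hypothetical module exposed to Python via nimpy
import nimpy

proc sumlist(xs: seq[float]): float {.exportpy.} =
  for x in xs:
    result += x

# compile with something like:  nim c --app:lib --out:mymath.so mymath.nim
# then from Python:             import mymath; mymath.sumlist([1.0, 2.0, 3.0])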
I don't use C and C++ enough to comment much on them. But even a C/C++ noob like me can tell you that you have far less chance to hang yourself with Nim than by doing funky pointer arithmetic etc. in C.
I wanted to write more, but most of my points got mentioned already, so:
Nim's "hacking" to "doing real work" time is very short.
As an admin I often find myself just writing Nim instead of e.g. ps, bash or Python.
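A typical throwaway looks something like this (standard library only; the cleanup task itself is made up):

import std/[os, strutils]

# delete stale .tmp files under the current directory --
# the kind of one-off I'd otherwise write in bash or Python
for path in walkDirRec("."):
  if path.endsWith(".tmp"):
    echo "removing ", path
    removeFile(path)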
Don't forget --gc:*rc. Embedded is my day job, and I'm consistently reassured that the Nim devs keep embedded in mind while moving the language forward.
While embedded Linux keeps becoming more and more economically feasible, the same economic forces apply to smaller/cheaper ICs, and I don't see why that should change: there will be bare-metal (or at least RTOS) targets for a long time to come.
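For the curious, the kind of invocation I have in mind is roughly nim c --gc:arc -d:useMalloc --os:any -d:release main.nim; treat that as a sketch, since the exact flags depend on the target and toolchain (newer compilers spell the switch --mm:arc).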
> that the makers of, say, Python, decided that arrays start with 0
Well, not really Python. Probably the makers of C, or even of BCPL.
If you read what I wrote carefully, I was referring to what I teach. I don't teach C or BCPL. I teach a class that uses Sage, which for all intents and purposes is a DSL layered on top of Python. It was van Rossum who decided that Python should start arrays at 0, not Ritchie, let alone Richards.
> The reason of the 0 indexes is in how the processor lays out things in memory...
I'm well aware of this; I studied assembly language long before Python was invented.
> So if the index would start at 1, you'd have a number of unused bytes at the start of the array.
Not sure what you mean, but languages like Ada, Eiffel, Nim, Pascal, and many others do not waste an enormous number of bytes when you declare, say, array[1000..1001, int]. They may require a small number of additional bytes to track details, say the minimum and maximum indices and probably the length, but many people judge that worth the cost. (I don't know how Nim does it; my apologies if Nim manages it with zero overhead.)
Python could have done that, too. Considering all the other ways it "wastes" space for the sake of some programmer convenience, it would have been trivial. GvR has probably elaborated on it somewhere.
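In Nim's case a quick sketch suggests there is indeed no per-element waste (the printed sizes assume a typical 64-bit target):

var a: array[1000..1001, int]  # exactly two elements; nothing is allocated for indices 0..999
a[1000] = 7
echo a.low, " ", a.high        # 1000 1001
echo sizeof(a)                 # 16 on a 64-bit target: just the two ints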
If your array starts at index 1 and is stored at some address a, when generating code to access a[i], you don't subtract one from i; instead you use as the starting address that of a fictive element a[0]. For instance, if the array contains 64-bit integers, you use a - 8 as the starting address when indexing.
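Spelled out for an array indexed from 1 with 8-byte elements stored at address A (my notation):

address of a[i] = A + (i - 1) * 8 = (A - 8) + i * 8

so the compiler can treat A - 8 as the address of the fictive a[0] and index from there with no subtraction at run time.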
As far as I know, it is C which introduced 0-based indexes. Most languages used 1. In Pascal, the default is one, but it may be changed. In APL, it was possible to choose the base, one being the default. In PL/I, arrays also start at index one.
The reason why C uses 0 is that indexes are actually offsets, while in languages such as Pascal they are positions. When you write a[0] in C, this is understood as adding an offset to an address; arrays are in fact addresses. In a language such as Pascal, we don't think in terms of offsets; that is too low-level. But thinking this way in C was natural, as C was intended to replace assembler in most situations.
> They may require a small number of additional bytes to track details, say the minimum and maximum indices and probably the length, but many people judge that worth the cost.
They will require additional data only if they are dynamic arrays. If you declare var a: array[5..100] of integer; in Pascal, there is no loss of memory. The compiler knows that the first element starts at index 5 and will do the needed translation when computing the offset (and good compilers avoid the subtraction).
If the array is dynamic, you need a descriptor which contains the lower bound and the upper bound (or the length). Compared to Nim open arrays, I suppose the only difference is the lower bound. As far as I know, open arrays in Nim do not keep track of the lower bound of the actual parameter. If you transmit an array starting at 1, its lower bound in the procedure will become 0, which I find disturbing. It is better to pass only arrays starting at index 0 to open arrays, to avoid surprises.
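A small sketch of the behaviour in question (proc and variable names are mine):

proc firstElem(a: openArray[int]): int =
  echo a.low          # always 0 inside the proc, whatever the caller's index range
  result = a[0]

var b: array[1..3, int]
b[1] = 10; b[2] = 20; b[3] = 30
echo b.low            # 1
echo firstElem(b)     # prints 0, then 10: the caller's lower bound is not visible here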
> They will require additional data only if they are dynamic arrays.
Pascal, yes; its straitjacket approach to arrays is the only one of Brian Kernighan's reasons for calling Pascal not his favorite language that I actually remember, because his complaint about arrays is something I struggled with myself.
But for instance the following Ada procedure:
type myray is array(positive range <>) of integer;
procedure jest_testin(a: out myray) is
begin
  a(10) := 10;
end jest_testin;
...modifies element 10 of a regardless of where a starts. What's more, in the case where a'first > 10, it raises an error at run time. So the bounds are part of the myray type even when the program passes a static array in for a.
I have the impression that Nim won't allow that; you'd have to use a seq instead of an array. I regret that I don't know/remember enough Nim to say if it would do that, so I'd be grateful if someone would chime in.
Your "myray" type has no static bounds, so as soon as you use it, a descriptor is created. When you transmit a static array to a parameter of this type, the compiler generates code to build a descriptor which contains the lower bound, the upper bound and the address of the actual data. Pascal, even in its limited ISO version, has open arrays which allow doing the same thing, except that the lower bound is not transmitted. In fact, open arrays in Pascal seem to be the same thing as open arrays in Nim.
And you are right: in Pascal and in Nim, when accessing the tenth element of an open array, it is the tenth independently of the lower bound of the array. Checking that we don't access outside the array is enough to guarantee memory integrity. Ada does better, as it checks that, in the procedure, the index is always within the bounds of the actual array.
Note that I have encountered other languages which checked the lower bound and the upper bound at runtime, using descriptors as in Ada. The French department of defense designed a language named LTR3, several years before Ada 83, which contained arrays with dynamically defined lower and upper bounds. These were not sequences as, as far as I remember, their size was constant. But using these arrays in a procedure implied transmitting descriptors and checking the bounds during execution.
As soon as we have descriptors for arrays, we are able to create sequences. The problem with sequences is the reallocation when the sequence grows. This is certainly the reason why they were seldom encountered in non-interpreted languages. But Algol 68 had flex strings and flex arrays, a feature which was quite modern. And building a compiler for this language was really a challenge in those years.
> Your "myray" type has no static bounds, so as soon as you use it, a descriptor is created.
Yes, I agree completely; that was part of my point. The other part, which I thought more important, is that the arrays passed in for a are not dynamic; that is, I can't resize them, which is what I understand to be meant by a dynamic array. Array bounds can be determined at compile time in Ada, but they weren't in this instance. If you want to resize the array, you have to use a vector.
If you mean that the parameter a is an open array, then I certainly agree, in which case perhaps we have no disagreement at all. I half wonder if we simply have different notions of "dynamic" in mind.
Does ARC/ORC work on the microsecond time scale or millisecond?
Well, you can expect that ARC works on the nanosecond time scale because it acts strictly pointwise. However, the surrounding OS might have its own runtime, e.g. for handling of malloc and free. This is independent of the language/compiler though.