I just saw @gradha posting this on his blog: https://gradha.github.io/articles/2015/02/goodbye-nim-and-good-luck.html
As I have seen a lot of interesting code by him (at least for me as an OS X user), I wonder about the details and what it could mean for me.
I am missing a good GUI option already. I need something I can use across systems, and I would prefer wxWidgets for its native look and because I have some experience with it. I thought that the recent advancements in Nim's C++ interop would help with that. IUP may be an option for Windows, but I have not yet gone through the setup to compile it for OS X, and if that ends up in XQuartz/X11 it is no option for me.
Multithreading is also an important factor for me, but for me that will mostly be limited to parallel processing of data (probably with the need to access a database). While experimenting with parallel / spawn I have merely touched the surface of what could develop into bigger problems later. I just don't know.
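For illustration, the kind of data-parallel experiment I mean looks roughly like this (a toy sketch using the threadpool module, compiled with --threads:on; crunch is just a placeholder for the real per-item work):

import threadpool

proc crunch(x: int): int =
  # stand-in for per-item work, e.g. transforming one row of data
  result = x * x

proc main() =
  var pending: seq[FlowVar[int]] = @[]
  for i in 0 ..< 100:
    let fv = spawn crunch(i)   # schedule work on the thread pool
    pending.add(fv)
  for fv in pending:
    echo(^fv)                  # ^ blocks until the result is ready

main()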
Currently I am writing a MySQL DB adaptor "to my liking", because db_mysql is not exactly what I need and the raw mysql wrapper is obviously too low-level for everyday code. But I have not even tried to use MySQL from different threads yet, with "parallel" or without.
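For reference, this is roughly what the stock db_mysql module gives you (a sketch; the connection parameters and the table are made up, and note that every column comes back as a plain string):

import db_mysql   # needs the MySQL client library at runtime

let db = open("localhost", "user", "password", "test")   # host, user, password, database
db.exec(sql"CREATE TABLE IF NOT EXISTS items (id INT, name TEXT)")
db.exec(sql"INSERT INTO items VALUES (?, ?)", 1, "hello")
for row in db.fastRows(sql"SELECT id, name FROM items"):
  echo row[0], ": ", row[1]   # every column is a string here
db.close()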
I am also experimenting with a setup which would let Nim code easily compile and run as a PHP extension, because that way I could reactivate some stuff I did with proprietary C++ extensions for our framework in the past. I stopped doing that years ago and moved to "plain" PHP, because I just did not like coding that stuff in C++ and often it was not even really faster to go that route. This could change with Nim, because of "fun" and "speed".
So, to say the least: it is a bit alarming that there are show-stoppers for a guy who did a lot of cool (and strange...) projects with Nim over the last months. Is it really about a dark future in those areas of Nim?
I had the impression that nothing can stop @Araq (and @Def and the others) from eventually creating a practical language which can also do GUI and threading in a universal way.
Opinions?
I am not sure if you guys read the blog post by @gradha or just my comments on it.
He does not write about "theoretical" stuff; he is leaving Nim because of practical concerns, from a "use it in production" perspective.
He is number 4 on the "all-time contributors" list for the compiler, and he has a fair amount of his own repositories written in Nim.
My impression is that he did a lot of stuff which is "not yet" good enough to satisfy him, and he gave up on that. He strikes me as "Mr. Cross-Platform Development", and especially "spoiled" by what the high-profile companies give us developers today. In the end he just said: "Goodbye, much luck!"
The problem for me is that I can't bet on a language which needs "luck", and my question is: does he have a point, or is he just ... disillusioned, bit off too much at once, had a bad day, was too early in using Nim for production purposes?
I think nobody expects Nim to have the same polish as a language which has been in use for 30+ years. But one may expect that it eventually grows into something like that, especially as there are many other languages to draw inspiration from.
@gradha (as I understand it - I may be wrong) says more or less what @axben wrote: it is cool, and I thought it would become better with time (... fast forward some months of using Nim ...), but using it for real projects does not work out in the way that the fun of writing smaller tools promised.
@LeuGim Yes, and if I want GC in Rust I can just use it. Well, it just did not work out for me.
The question is: would you actually choose Nim for a production-scale project without the GC?
From my current knowledge you are mostly on your own then: writing FFI code which just gives things other names, because you can't use anything "higher level" from the language (this is exaggerated). Also, I guess that the alternatives for @gradha are not just C/C++ but also Objective-C.
I don't want to say that Nim is bad because of the GC, or that it is useless without it. I just want to understand why a person who wrote a lot of cross-platform stuff is leaving the ship. I am on OS X, so I certainly see his stuff with different eyes than a Windows user would.
"GUI is a matter of wrappers" until you try to wrap and use them in a language domain. It is pretty obvious that Nim was not ready to wrap a C++ 2d/3d Framework. But know it is, people bet money onto that. This is good and I think @gradha may have "missed" that or it was already to late for his feelings.
About db_mysql and threads: well, it is fine that mysql itself is not storing the data, but my application will! And my threads will probably share a large amount of information, which seems to be a problem in Nim. I thought this could be worked around in different ways. But again: there is a person saying goodbye who was especially trying to do exactly that.
I just want to understand why he is leaving and what could be done better. I want to use Nim in production soon; I do not have time to play around endlessly. I am even paying other developers in the company to familiarize themselves with Nim at the moment.
First of all, I recommend watching this talk by Yaron Minsky of Jane Street, especially the segment at about the 38:45 mark, where he talks about challenges that a minority language may or may not face. One issue that he doesn't talk about specifically there, but mentioned in an earlier talk (at the 44:15 mark), is UIs. Much of what he says applies to any minority language (not just OCaml).
Second, I'd like to address one point specifically, and that is concurrency + automated memory management.
Generally, garbage collection is pretty straightforward if you do one or the other of the following: either you keep the GC'd heaps thread-local (each thread only collects its own heap), or you use a single shared heap with a stop-the-world collector.
Both approaches are pretty straightforward on their own, and Nim already supports both (--gc:refc, which is the default, or --gc:boehm); things get hard (and I mean possibly-eating-up-developer-decades-of-resources hard) when you mix the two, i.e. if you want concurrent soft real-time GC. Both the JVM and the .NET people (and by that I mean both Microsoft and Xamarin) have invested a ton of effort into making that work, and it still remains a really hard problem with noticeable tradeoffs. For example, safe C/C++ interoperability on either the JVM or .NET requires a fair amount of extra care (because any time you have a compacting GC, object addresses may spontaneously change, and if you don't guard properly against that, you will experience software defects).

More generally, any GC that isn't of the stop-the-world kind needs to closely interact with the compiler in often non-trivial ways. That's relatively straightforward if you're just doing sequential code, but it becomes an order of magnitude more complicated when you're dealing with concurrency, especially when callouts to external libraries are involved or you have to write external code in C/C++ (and basically have to do everything the compiler does by hand).
Things get even worse when you have a NUMA architecture and need to scale up beyond a single node (i.e. beyond 8-16 cores). In contrast, OCaml has had a robust sequential high-performance GC (which is generational/incremental and basically soft real-time in practice) for decades, and the implementation is actually pretty simple.
The thing is, few language implementations have a perfect story when it comes to concurrency plus automated memory management, and the tradeoff that Nim makes is one of the better ones if you don't have a near-infinite amount of developer resources (I could talk about what other language implementations do, but that usually ends up non-constructive if people don't understand the exact tradeoffs involved). You can use thread-local heaps with a soft RT GC, you can have a global heap with a stop-the-world GC, or you can use thread-local heaps plus some manual memory management with ptr types when you absolutely need shared memory.
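To make the last option concrete, here is a minimal sketch of manual shared memory with ptr and the shared allocators, bypassing the thread-local GC entirely (compile with --threads:on; the counter is obviously a toy example):

type Shared = object
  counter: int

proc worker(p: ptr Shared) {.thread.} =
  for i in 1 .. 1000:
    atomicInc p.counter          # not GC-managed, so no GC interaction needed

proc main() =
  var data = cast[ptr Shared](allocShared0(sizeof(Shared)))
  var threads: array[4, Thread[ptr Shared]]
  for t in mitems(threads):
    createThread(t, worker, data)
  joinThreads(threads)
  echo data.counter              # 4000
  deallocShared(data)

main()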
That said, something that I'd dearly like to see as an additional feature (and one that should be relatively straightforward) would be separate, shared memory heaps, which could solve a bunch of shared memory problems while still maintaining the basic robustness and simplicity of the thread-local approach (a shared heap would basically be like a thread-local heap, except without a thread running in it, which could temporarily be acquired by other threads).
@OderWat The problem for me is that I can't bet on a language which needs "luck"
If there's no luck, then you're not betting. :-)
I read the blog post, and I'm willing to bet that Nim will get better in the areas that Gradha found lacking, amongst others. His post seemed reasonable, and I couldn't fault him for leaving. If you are doing commercial code for mobile devices, I think Nim may not yet be the best choice there. It's too bad that Gradha checked out, because these things get better faster when there's someone pushing for them to get fixed. Vicious circle, I know, and with Gradha out, the time constant just got bigger.
People use all kinds of GC'ed languages in production, but not for everything. What is it that you want to use Nim for? IMO, Nim has most of the right fundamentals to be the wide spectrum language of choice. It's still a bit early, but I think if you can tolerate the bleeding edge, and the external factors (e.g., developing on iOS for iPhone) aren't overwhelming, you should give it a try.
@Jehan That said, something that I'd dearly like to see as an additional feature (and one that should be relatively straightforward) would be separate, shared memory heaps
Would that fit in with the GSoC 2015 "Make Nim a viable platform for GC research" item? Maybe if you squint your eyes a bit and turn your head? Anyways, yes, nice idea for an experiment.
@Jehan thank you for the links and explanation!
If I understand your closing statement right, is the missing link something like "Software Transactional Memory" (Haskell's STM), or is that unrelated?
brianrogoff: Would that fit in with the GSoC 2015 "Make Nim a viable platform for GC research" item? Maybe if you squint your eyes a bit and turn your head? Anyways, yes, nice idea for an experiment.
No, what I'm suggesting is far simpler. It's low-hanging fruit that fits in with the existing thread-local heaps and probably not enough work to justify a GSOC project, nor would it be an interesting research project (in fact, the opposite: I'm suggesting it because it's simple and there aren't really any open questions involved). It would basically be very similar to what Eiffel/SCOOP does under the hood.
OderWat: If I understand your closing statement right, is the missing link something like "Software Transactional Memory" (Haskell's STM), or is that unrelated?
No, far simpler. At the moment, I'd be extremely hesitant to bet on STM outside of research projects. STM (at least without hardware support) is still very much at a level where its practical applicability is limited (which is also part of what simultaneously makes it an exciting topic for research). I can go into more detail when I have the time and if there's interest.
@brianrogoff I have not been doing hardcore coding for a long time now. And even in the past, I was either just using a semaphore to lock something until it was done, or I was in the privileged position of writing interrupt code which owned everything it worked on. Since then I have had no concurrency problems, because either I didn't need concurrency or some abstraction "just worked".
I say this because I want to make clear that I need some babysitting from the language: "do this and you are good". Stuff should scale "mildly", not crash out of nowhere or eat all the memory. But it does not need to be more (soft) real-time than what your average GUI needs in order not to scare users away.
The other thing is code which takes data from a database, orders it into multiple streams and processes them in parallel with as many CPUs as are available. Something like this even works today by starting the "threads" as shell processes. Hardly anything which will cause problems with Nim.
That probably leaves just the "GUI" for those Windows clients you can't avoid as the unknown (my hobby game development will surely work with the stuff @Araq just set on fire).
So for me Nim is mostly OK. I "just" want to pick the language which gives the best opportunities, and it should compile to "native" - and please, no JVM.
When it comes down to what I wanted to choose from, it is: Haskell (so cool, but my brain hurts), Rust (guaranteed safety, like wearing heavy armor so as not to drown while swimming), Go (the "back to start" language; I liked it until it got old) and OCaml (not even tried, but maybe pretty cool).
Nim is the only one which I really like. I like whitespace and hate glyphs; I programmed a lot of C and Python in the past, but have mostly used PHP to run a company for more than 10 years now. I love doing macros, templates and such, and I really like to play with functional programming (but please not too much).
So basically I just want to get stuff done, lazy-ass style, with solid efficiency and safety guarantees for my clients and fun for me and the other developers.
P.S.: I could use Swift, but I can't.
@OderWat, I read @gradha's blog post. Some of what he says resonates a bit with me. In particular, the part where he says that people "are starting to wish for at least a minimal standard library which uses manual memory handling so that Nim can be used without that wonderful GC". I find myself in that camp. My reasons are perhaps not common: I'd like to use Nim to create hard real-time programs.
No programming language is perfect. No language covers all possible use cases. Nim does not (currently) cover my use case, and that is fine. Why, then, do I still want to use Nim for that use case? Because I find Nim cool. Because I find its design amazing. Because I haven't been as excited by a programming language since I discovered Python so long ago... Because it addresses pretty much every problem I have with C and C++, and then it brings a lot of things to the table that would be super hard to do with C or C++.
Personally, my main problem is that I don't really know which parts of the language I could not use if I disabled the GC. I've read the advice that with the GC disabled you can use Nim as a better C. But which types cannot be used with a disabled GC? Which libraries? I would like to be able to disable the GC and do more C++-like memory management (i.e. RAII), and perhaps even be able to use some of the higher-level types such as seqs and even strings, which I believe currently need the GC.
I am missing a good GUI option already. I need something I can use across systems, and I would prefer wxWidgets for its native look and because I have some experience with it.
More GUI wrappers will come, I am sure. More importantly though, I think that Nim's feature set actually allows for entirely new ways to implement GUI toolkits that are not possible in such concise form in any other language. This is actually my main research interest right now. Adding wrappers to the status quo of GUI development will be an important step towards increasing usage of the language, but, ultimately, the real power will come from creating new frameworks for GUI development altogether, and from using Nim's syntactic features to make this more convenient and more powerful than ever before. For example, the built-in macro capabilities open up great opportunities for creating extremely expressive GUI DSLs. The thread-local GC actually makes things easier as well, since memory sharing is a major problem in most GUI frameworks I've used in the past. The messaging paradigm - not just for event routing, but for data sharing in GUIs - is still way underused, and I see some great potential here for parallelizing GUI processing and rendering.
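Just to give a flavour of the DSL angle, here is a toy sketch that only builds and prints a widget tree - nothing in it is a real toolkit, and all the names are made up:

import strutils

type Widget = ref object
  kind: string
  text: string
  children: seq[Widget]

var stack = @[Widget(kind: "root", text: "", children: newSeq[Widget]())]

template node(k, t: string; body: untyped) =
  # create a widget, attach it to the current parent, then run the nested body
  let w = Widget(kind: k, text: t, children: newSeq[Widget]())
  stack[stack.high].children.add w
  stack.add w
  body
  discard stack.pop()

template window(title: string; body: untyped) =
  node("window", title, body)

template button(label: string) =
  node("button", label):
    discard

proc dump(w: Widget; indent = 0) =
  echo repeat(' ', indent), w.kind, ": ", w.text
  for c in w.children: dump(c, indent + 2)

window "Settings":
  button "OK"
  button "Cancel"

dump(stack[0])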
So, from my point of view, and based on my past experience with GUI development, what some consider weaknesses in the language I actually look at as major advantages. The constraints that Nim imposes on developers will actually allow people to write robust code by default, without having to think too much about it. All that is needed is to rid oneself of the established programming patterns that are so familiar to us from C++, Java and similar languages. Too much flexibility in a language is actually harmful in my opinion. In C++, for example, there may be 100 ways to solve a problem, and about 95 of them usually lead to bad software design. That means that, statistically, the majority of developers (especially inexperienced ones) will end up with poorly designed programs.
I have not had time to play with the thread-local GC + message passing yet, but I believe that Nim is going in the right direction here.
GUI toolkits that are not possible in such concise form in any other language. This is actually my main research interest right now.
That is very interesting. Can you already tell us more about your ideas and work? (As you may know, I spent some time making a recent GTK3 wrapper -- I think GTK3 is by far the easiest way to get a GUI for Nim, but not all people will really love it. Others like Qt or wxWidgets would be much more work. A pure Nim GUI -- I have no idea how much work that would be. Qt has 7 million lines of code, but of course only a small part of that is really GUI-related. I guess making a GUI cross-platform is the hardest part.)
That is very interesting
+1 to new and exciting Nim GUI frameworks. Maybe something like ReactJS except without the JS part.
Unless it is possible to use newSeq to allocate a seq of a certain size and then I can resize it while keeping the allocated space somehow?
newSeqOfCap is still missing but you can easily implement it this way:
proc newSeqOfCap[T](x: var seq[T]; cap: Natural) =
  newSeq(x, cap)
  setLen(x, 0)
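Used roughly like this (untested sketch), the seq should keep reusing the same block, since setLen to a smaller length doesn't shrink the allocation:

var buf: seq[float]
newSeqOfCap(buf, 4096)   # one allocation up front
for i in 0 ..< 1000:
  buf.add(float(i))      # stays within the reserved block
setLen(buf, 0)           # length back to 0, capacity kept for the next round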
That is why I would love to have a way to declare variables on the stack
Except for closures, ref, seq and string, everything is allocated on the stack already anyway.
I'm a bit confused by what you said here. I don't think I can use nim for the hard realtime part of our code unless we can limit the GC pauses to something reasonable (in our environment), which must be well below 1 ms (my guess is around 100 us but perhaps we could accept up to 200 us).
I'm quite sure I can meet that deadline on my machine (note that 100us is not 200us is not 60us, you keep changing the numbers, so I keep changing my answer ;-) ), but that's meaningless since you don't run the code on an Intel i7... That's why you need to do your own measurements. However, simply "get rid of the GC so that I can use malloc/free" misses the point somewhat as the bulk of the work the GC has to do is what malloc/free need to do as well.
If you allocate garbage too quickly and only give the GC 100us to run, you will run out of memory. In the malloc/free scenario you miss the deadline instead. That doesn't help much, does it?
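For reference, the default (refc) GC has a soft real-time API (GC_setMaxPause / GC_step) that can be driven from a frame loop; a rough sketch, where processFrame and the 100 us budget are just placeholders:

proc processFrame() =
  var scratch: seq[string] = @[]
  for i in 0 ..< 100:
    scratch.add($i)      # produce some per-frame garbage

GC_setMaxPause(100)      # aim for at most ~100 microseconds per pause

for frame in 0 ..< 1000:
  processFrame()
  GC_step(100)           # give the GC an explicit time slice at frame end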
Unless it is possible to use newSeq to allocate a seq of a certain size and then I can resize it while keeping the allocated space somehow?

newSeqOfCap is still missing but you can easily implement it this way:

proc newSeqOfCap[T](x: var seq[T]; cap: Natural) =
  newSeq(x, cap)
  setLen(x, 0)
Nice, that is exactly what I need. So that means that resizing down a seq is guaranteed to keep the location and size of the underlying memory, right?
That is why I would love to have a way to declare variables on the stack

Except for closures, ref, seq and string, everything is allocated on the stack already anyway.
That is very good info, thanks. What about user defined types?
I'm a bit confused by what you said here. I don't think I can use nim for the hard realtime part of our code unless we can limit the GC pauses to something reasonable (in our environment), which must be well below 1 ms (my guess is around 100 us but perhaps we could accept up to 200 us).

I'm quite sure I can meet that deadline on my machine (note that 100us is not 200us is not 60us, you keep changing the numbers, so I keep changing my answer), but that's meaningless since you don't run the code on an Intel i7... That's why you need to do your own measurements.
It is not that I keep changing my numbers, but that I don't really know how much time our code uses to allocate stuff on the stack, since that happens implicitly. It is thus hard for me to tell you precisely how much of our time budget we are currently using for memory allocations and deallocations (let's call that "memory provisioning"). That is the time I'd like to let the GC run every frame in the worst case (and that would in fact be a bit worse than what we have today, since Nim also uses the stack, as you said earlier).
However, simply "get rid of the GC so that I can use malloc/free" misses the point somewhat as the bulk of the work the GC has to do is what malloc/free need to do as well.
That is true, but it also misses the point IMHO. The problem is not how much time is used for memory provisioning in total or even on average, but how much time is used in the worst case. That is, it is not about the "average allocation time", but about the "peak allocation time". I'd be happy to increase the average allocation time if that reduced the peak allocation time (as long as it were still low enough that we could still do the actual work on that worst-case frame).
For example, let's say that our code, written in Nim and using the GC, needed only 60 us per (1 ms) frame on average for memory provisioning. Let's also say that the equivalent C++ code required 120 us per frame for memory provisioning. If Nim's GC could only achieve that by running for 240 us once every 4 frames, for example, that would be much worse than the C++ code despite being twice as fast on average.
So the intent of my proposal for a way to tell Nim "make sure to allocate these variables now, and deallocate them when exiting this block or procedure" is not about reducing the total time spent on memory provisioning, but about controlling when that happens (i.e. every frame, or every time a given function is called), in order to reduce the peak allocation time.
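Approximated with today's manual primitives, the pattern I have in mind would look something like this (just a sketch; the names and sizes are made up):

type FrameScratch = object
  positions: ptr UncheckedArray[float]
  count: int

proc initFrameScratch(n: int): FrameScratch =
  # allocation happens here, at a point of our choosing
  result.positions = cast[ptr UncheckedArray[float]](alloc0(n * sizeof(float)))
  result.count = n

proc destroy(s: var FrameScratch) =
  dealloc(s.positions)         # deallocation happens exactly here

proc frame() =
  var scratch = initFrameScratch(1024)
  for i in 0 ..< scratch.count:
    scratch.positions[i] = float(i) * 0.5
  destroy(scratch)

frame()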
If you allocate garbage too quickly and only give the GC 100us to run, you will run out of memory. In the malloc/free scenario you miss the deadline instead. That doesn't help much, does it?
In addition to what I said above, I think it does help, in that it is (I think) easier to find a timing problem than a memory problem. It is quite trivial to identify a sudden increase in the time it takes to execute a function. Finding a memory leak is harder in my experience (although perhaps Nim's memory profiler can help there). Also, in one case the fix is "local" (change the function that takes too long to execute), while in the other it is not (change some function that increases the overall memory use).
That is very good info, thanks. What about user defined types?
ref means that the given type is managed by the GC; a plain user-defined type doesn't mean anything special on its own.
type Foo = object
  x: int
  y: float
is exactly the same (barring any syntax errors I may have made) as C's typedef struct { intptr_t x; double y; } Foo;
ref Foo is like Foo* in C, but the memory is automatically freed.
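For example (a small sketch):

type Foo = object
  x: int
  y: float

var a: Foo         # plain object: lives on the stack, no GC involved
a.x = 1

var b: ref Foo
new(b)             # allocated on the thread-local GC heap,
b.y = 2.0          # freed automatically once it becomes unreachable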
I think gradha had issues because he was trying to use Nim for iOS and mobile development. Objective-C is a pain in the ass. The whole Apple system is basically more closed than open. Trying to get Nim to work with Objective-C is just a hard thing to do.
In general, mobile development isn't friendly to alternative languages. Actually, I have been thinking of making an Android app, and decided that I would like to avoid doing it natively at all, because I want to use In-App Billing, and just setting that up in a Java project is a huge pain. It seems much simpler to use PhoneGap/Cordova, where it is just a plugin.
I think we already have some examples of doing normal desktop UIs with Nim, right? And they work fine, I assume?
The spawn/parallel thing is very convenient, and you can also use channels. So I really like the concurrency support in Nim, and don't really get why people are questioning it, except that they are trying to fight against the Apple Way or the Google Way.
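For completeness, a minimal channel example looks roughly like this (a sketch; needs --threads:on, and the messages are just placeholders):

var chan: Channel[string]

proc producer() {.thread.} =
  for i in 1 .. 3:
    chan.send("message " & $i)   # send deep-copies the data into the channel
  chan.send("done")

proc main() =
  chan.open()
  var t: Thread[void]
  createThread(t, producer)
  while true:
    let msg = chan.recv()        # blocks until a message arrives
    echo msg
    if msg == "done": break
  joinThread(t)
  chan.close()

main()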