https://www.youtube.com/watch?v=00_i0zd6D-0
A short review of the problems one has to deal with when working with the Nim programming language.
Rejected NimConf 2021 submission, for some reason :-)
Well, considering that the video basically shows Nim as an underdeveloped, overbugged ad-hoc C spitter that you have to manually hack around to get at least a somewhat decent experience, it is no wonder the talk was rejected. I don't know, maybe I have different use cases, but I have hardly ever had to deal with any of the problems you have described. The talk is basically filled with dubious claims like "the standard library is good when it works", backed up with single examples. Some stdlib modules are certainly lacking, but I don't think there is a reason for such a broad generalization.
Also, "an honest look at how easy it is to criticism other people's work" - I would say it is not particularly easy, at least if you are trying to provide actually criticism and not just semi-random claims based on either edge cases or personal misunderstanding (E.g. typeclasses - I still don't understand why people are so hung up on the Type | Type syntax which absolutely must mean sum types or whatever).
.
├── b
│   └── c
│       ├── q
│       └── q.nim
└── z.nim
cat b/c/q.nim
import ../../z.nim
nim r b/c/q.nim
123
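(z.nim isn't shown, but given the output it is presumably just a top-level statement along the lines of:

echo 123

so the import's side effect is what gets printed.)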
- modules work fine
What part of "brackets" did you misunderstand?
Why bother? Upstream is focused on ARC/ORC, so if I want it fixed, I need to fix it myself.
I've got the patience to dig into its guts; I just need the time to do it.
There's also the ugly hack that starts looking better by the minute: allowing that custom GC mem-pool allocator to be replaced with malloc() and forgoing all those optimisations.
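As far as I know that switch mostly exists already: -d:useMalloc swaps Nim's own allocator for plain malloc()/free(), though I believe it is only really supported together with --gc:arc/--gc:orc (or --gc:none), which loops right back to the upstream-focus problem. Something like (myprog.nim being a placeholder):

nim c --gc:orc -d:useMalloc myprog.nim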
I have a love-hate relationship with them
Same. I love writing macros and I hate reading them.
I think we're going in the wrong direction with RC.
I wonder why you think that. Of course ARC and ORC were hard for the devs, but I think the old refc GC was also hard, at least if you wanted good realtime behaviour.
ARC/ORC has generally good performance (still with some exceptions), and the deterministic realtime behaviour is a big advantage. And from what I understand, parallel processing/threading should work better with ARC/ORC. My feeling is even that VLang is trying to implement something similar to ARC/ORC.
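A rough sketch of what that deterministic behaviour looks like in practice, assuming a 1.x compiler (Widget/WidgetObj are made-up names):

type
  WidgetObj = object
    id: int
  Widget = ref WidgetObj

proc `=destroy`(w: var WidgetObj) =
  echo "destroying widget ", w.id

proc use() =
  let w = Widget(id: 1)
  echo "using widget ", w.id
  # with --gc:arc the last reference disappears right here, so the
  # hook fires before use() returns rather than at some later
  # collector pass

use()
echo "after use"

Compiled with nim r --gc:arc demo.nim (demo.nim being whatever you saved it as), "destroying widget 1" prints before "after use".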
I wonder why you think that.
https://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Reference_counting
https://en.wikipedia.org/wiki/Reference_counting#Advantages_and_disadvantages
https://wiki.c2.com/?ReferenceCounting
https://www.perl.com/article/the-trouble-with-reference-counting/
"The Space Cost of Lazy Reference Counting" - https://www.hpl.hp.com/techreports/2003/HPL-2003-215.html
Not necessarily. It makes sense from the point of view of functionality, i.e. you want to limit the number of CPU-bound threads in your application so you can use your CPU cores efficiently.
I cannot tell the program how many threads to use until inside main(). Using all CPUs is often terribly wrong. We have only a hack (and a buggy one) to restrict the thread-count later.
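Assuming the hack in question is std/threadpool's setMaxPoolSize, the dance looks roughly like this (sketch only, compile with --threads:on):

import std/threadpool

proc work(n: int): int = n * n

proc main() =
  # the pool is already up by the time we get here; the best we can
  # do is resize it after the fact
  setMaxPoolSize(4)
  let fv = spawn work(21)
  echo ^fv   # blocks until the result is ready, prints 441

main()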
But the biggest mistake is that it's begging for thread-safety problems.
You've overlooked the advantages of letting the compiler avoid the overhead when it's provably unnecessary, which is the typical case.
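One way to see that: with sink parameters plus move analysis, the provably unnecessary copies simply aren't emitted (names below are made up, and --expandArc lets you inspect what the compiler injected):

proc consume(s: sink seq[int]) =
  # the caller's last use of the seq is moved in, so no copy of the
  # underlying buffer is made
  echo s.len

proc main() =
  var data = newSeq[int](1000)
  consume(data)   # data is not used afterwards, so this is a move

main()

nim c --gc:arc --expandArc:main sink_demo.nim shows the hooks that were (and weren't) inserted.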
When RC is truly a poor solution, you can use the Boehm-Demers-Weiser collector instead.
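(That escape hatch is a compile-time switch away, assuming libgc is installed on the system:

nim c --gc:boehm myprog.nim

with myprog.nim as a placeholder.)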
re: https://github.com/yglukhov/threadpools
Is it used in production, or am I going to be the first to test it properly?
My company uses it for some production code. I have never had a single problem with it.
In my opinion, the global threadpool should be grounded. It writes checks that the body of its code can't cash.