I'm curious about the direction Nim 3 should go with async runtimes. Many languages have async constructs in them, but the runtime isn't always built into the language's standard library.
I usually see it go one of two ways: either async built into the stdlib/language (like Go), or left up to the community (like Rust).
Including an async runtime in the stdlib will make that runtime the winner by default. I think in certain circumstances this is fine, like with Go where that was a design goal, but it can also go badly, like with Nim's asyncdispatch.
Leaving the runtime up to the community can create schisms and quasi standards. In Rust, nearly everyone uses Tokio, and design choices in that runtime become synonymous with the language, even though they're properties of the library. For example, Tokio moves tasks between threads, so all shared memory must be synchronized. This is not a property of Rust async, but many people believe it is because the de facto standard imposes it.
There are also odd cases like JavaScript, where there is technically no standard runtime, but it's a managed language with no multithreaded async capabilities at all, and it has a comprehensive spec about everything.
I'm really not sure what the best way forward for Nim 3 is. My preference is a nice standard runtime, since I'm a big fan of Go and the cohesion it has with its ecosystem, but I'm afraid we might get an inadequate standard like asyncdispatch and then be stuck with it. So I'm really not sure. I think the main case for leaving it up to the community is the flexibility it gives, and the opportunity it creates for application-specific runtimes. Even in Rust, where Tokio dominates, there are alternative runtimes like Monoio for specific needs.
Nim does offer "closure iterators", and apart from Nim-CPS the existing async frameworks are built on top of them.
So my current approach is to "provide even better closure iterators based on CPS" in the form of .passive procs, continuations and system.delay. (Continuations and system.delay also provide the features of std/tasks so we don't need that one anymore.) There is already a scheduler hook that is supposed to be overwritten by any real framework on top of it. However, even that should probably be removed again as the same thing can be provided via plugins.
In general we don't intend to ship much of a standard library, we only give you a compiler that offers a plugin architecture. Then on top of that I hope to offer a "distribution" called Nimlody that provides a coherent unified library including an async event loop.
But to be honest, this will simply not work out well. The standard library is one of Nim's secret strengths, as it means you can get things done in a heavily restricted environment where you need explicit approval for any piece of software you want to depend on. (Common for banks, and yes, Nim is used in these for high-frequency trading etc.) And that's a social issue, not something that can be solved with "just offer a better package manager". So probably this Nimlody distribution needs to be a full package that includes the compiler, to form a coherent whole that people can download & install easily.
I'm a little confused by the "distribution" idea, though.
Well, as I tried to say: ideally it's just a set of libraries that have been designed together and provide much of what today's standard library offers. In practice we'll probably end up shipping it with the compiler so that it's a single package.
Leaving the runtime up to the community can create schisms and quasi standards. In Rust, nearly everyone uses Tokio,
Lol, these two sentences are kind of funny together - which way will it be?
Lack of quality is what creates schisms, not whether it's shipped with batteries or not, and conveniently you point to a good example: Tokio is of high quality and developed in symbiosis with the upstream language. That is, when the library developers hit a bottleneck in the language, they ask the language maintainers to fix it, so that the language gains general-purpose features that express concepts (like Pin) at the language level, which any library can later use to its benefit (they're useful not just for Tokio).
By developing Tokio outside and then slowly trickling features into the stdlib/language, features are battle-tested before they are added; the result is fewer useless, unmaintained features added on a whim.
The fact that Tokio is of high quality, both code-wise and documentation-wise, leads to users using it and creates a de facto standard. The fact that asyncdispatch (and in fact most of the Nim std lib) is poor is what creates forks and alternatives. If the std lib were good, nobody would bother. When good libraries exist outside of the stdlib, people will find ways to use them no matter what the legal department says.
Crucially, it has little to do with batteries included or not. At best, being in the std library is a subsidy that allows sub-par libraries to survive longer than they should, because of the added convenience making up for the lack of quality.
Instead, one can look at a few things that end up shaping the community and its development.
For example, powerful language primitives and powerful tooling create the fertile ground for a community to step in and create high-quality libraries independently of the language curators. Your community becomes your strongest asset.
Tight top-down integration and control instead cater to business usage where the developers are consumers rather than peers. Smooth and easy, like eating a doughnut or watching youtube (ads).
Since we're on the topic of Go and Rust, we can make a few more observations:
The last question in particular is probably more interesting to focus on.
Lol, these two sentences are kind of funny together - which way will it be?
Well, calling out a quasi standard does fit. As far as schisms go, there was no consensus on whether to use Tokio or async-std until a couple of years ago, and even now I find myself working in ByteDance's Monoio runtime because of some of its properties. Not that it's a bad thing necessarily, but it's important to point out.
Lack of quality is what creates schisms, not whether it's shipped with batteries or not, and conveniently you point to a good example: Tokio is of high quality and developed in symbiosis with the upstream language. That is, when the library developers hit a bottleneck in the language, they ask the language maintainers to fix it, so that the language gains general-purpose features that express concepts (like Pin) at the language level, which any library can later use to its benefit (they're useful not just for Tokio).
By developing Tokio outside and then slowly trickling features into the stdlib/language, features are battle-tested before they are added; the result is fewer useless, unmaintained features added on a whim.
The way you lay out the evolution of a language based on community libraries is pretty compelling, though. I really like that stance, and it would probably work well for Nim, given that it's a very general-purpose language.
Crucially, it has little to do with batteries included or not. At best, being in the std library is a subsidy that allows sub-par libraries to survive longer than they should, because of the added convenience making up for the lack of quality.
I don't agree with this at all. A capable standard library reduces the number of external dependencies needed to do all sorts of things, leading to reduced complexity for the developer, and fewer points of trust. As someone who tries to avoid 3rd party dependencies as much as possible, this makes my life much easier. In the case of Go, I already trust the Go team, so many projects I work on only need one point of trust. Each point of trust you introduce is a liability.
Instead, one can look at a few things that end up shaping the community and its development.
For example, powerful language primitives and powerful tooling create the fertile ground for a community to step in and create high-quality libraries independently of the language curators. Your community becomes your strongest asset.
Tight top-down integration and control instead cater to business usage where the developers are consumers rather than peers. Smooth and easy, like eating a doughnut or watching youtube (ads).
I think the community aspect is your strongest point for leaving an async runtime out of the stdlib. Nim doesn't have a ton of money and access to the world's greatest language designers with infinite resources to throw around. I would push back on this if it were indeed run by Google or some other megacorp, but you are right. I don't agree with your framing that it's just lazy, though. Even in my hobby programming, I'm trying to reduce complexity and dependencies whenever I can, and tight integration helps a lot with that.
Thank you for going into this kind of detail, especially the philosophical considerations.
which applies more closely to Nim, ie what does Nim want to be, philosophically?
It wants to be a language with "batteries included". Because as much as you want to shit on the existing lib, it has undergone security reviews, many bugs have been fixed and its style is largely consistent with itself.
That said libraries that don't age well as they target an ever changing spec (HTML, SQL, databases, XML, interestingly even Unicode is among this list!) are better left out.
That said libraries that don't age well as they target an ever changing spec (HTML, SQL, databases, XML, interestingly even Unicode is among this list, but we make an exception for that one!) are better left out.
Eh, it's better to include basic support for them, at least the well-understood parts. Like Go's SQL interface and Java's JDBC not actually including any DB drivers. Or some basic sanitization and parsing for HTML, but not a list of tags or a validator for dangerous tags. Just include the parts that don't change, or are very unlikely to.
If nim3 is going to ship with a new standard library, I’d expect that many of the natural mistakes that were made when designing the original stdlib could be fixed (including the choice of what to include in it!).
In a way, this should make it possible to get the best of both worlds: a batteries included language with a battle tested design (that learned from the experience of the first std lib and the best ideas that have been made by the community for nim v1/2).
The only (large!) drawback would of course be the lack of backwards compatibility ¯\_(ツ)_/¯
Each point of trust you introduce is a liability.
This is a point about provenance, not about whether you ship a release with batteries included or not.
Provenance is a piece of social information where you extend trust based on past performance. You can solve for provenance in other ways than shipping "batteries included".
There's nothing (ahem, well .. tooling) preventing the nim-lang team from maintaining a set of libraries that are independently versioned, and curating that set on behalf of users: by recommending them in the manual or some package list, by having special tooling support for them (nimble install curated), and/or by maintaining them as separate git repos under nim-lang/xxx. What you ship is then the package manager, and it deals with everything else, compiler and library versions included. nimble vendor recommended, and you have a copy of these libraries, all adhering to "these libraries are trusted by the Nim team" and working as if they were shipped in a release (but now with a more flexible process behind them).
gcc and glibc have a common origin in GNU, which makes people use the two together for convenience and trust, but they don't hold up each other's release schedules, and if gcc fixes something, it doesn't have to wait for glibc to get its things together for a release. In the past, distros would serve as the "package manager"; nowadays this is often an ecosystem-specific tool, since distros themselves don't have the resources to manage the wealth of packages.
As a recent example, re vs nim-regex would be trivial to manage if it were a separate package, because instead of the language maintainers having to decide when to drop support for the former (i.e. spending mental cycles on it, debating whether it's still used, opining on whether a distro is right to drop Nim because of it, and so on), users could do so on their own.
At the end of the day, the compiler and/or language version is just another dependency of your software just like the json library and the db access tool - being deliberate about flexibility here is a powerful feature for users that gives them agency and allows them to contribute to the ecosystem as a whole, growing a community of invested developers.
backwards compatibility
This is the secret sauce of separate libraries: there is a clear migration path where, when you hit an issue that can't be resolved in a backwards-compatible way, you just release a new (major) version - users can then upgrade that one part independently, or not, depending on whether they judge the breaking change to be worth the hassle. The old versions remain indefinitely for anyone that wants them, and upgrades are done when needed, not when forced to by an uncorrelated change.