Hi all,
I am investigating the possibility of async on embedded systems running a real-time OS.
Since a real-time OS (such as FreeRTOS) supports tasks, semaphores, etc., it should be possible to have async just like on Windows/Linux.
But how?
All suggestions are welcome.
It is possible to create async/await as a library using just macros; both asyncdispatch and chronos are implemented entirely as libraries. You could also wrap an external C library and provide the same API as Nim's on top. For example, if libuv is suitable, two projects have wrapped it: reactor and the old stdlib libuv module.
Now what do you mean by async?
When you mention tasks/semaphores it sounds like you are talking about task parallelism, i.e. parallelizing compute, like C++'s async, while Nim's async/await is about concurrency, especially networking and disk IO, like async in C#, JavaScript, Python, and Rust.
For concurrency, there is no way to avoid depending on system primitives like epoll (Linux), kqueue (BSD), or IOCP (Windows). You could do thread-based async, but it is less efficient than event-based IO using kernel facilities.
For parallelism, yes, this is possible; I have also done plenty of research on real-time parallel schedulers.
For something like an embedded system, the current async/await is likely to be far too heavy: it allocates far too much on the heap.
What I would personally like to see is alloc-free async/await, which is, I believe, what Rust achieves. Since you're working on an embedded system already, this style would fit really well for you, and if it works out well maybe we can adopt it for Nim 2.0. You should be able to get a basic prototype up and running quite quickly using the Nim selectors module, which abstracts a lot of the IO stack for you.
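To make the suggestion concrete, here is a rough sketch (not any existing prototype) of using std/selectors directly, without the async/await layer: a non-blocking listening socket is registered with a selector and polled with a timeout. Since no client connects within the timeout, select returns no ready events. Socket setup details here are illustrative assumptions, not part of any poster's code.

```nim
import std/selectors, std/net, std/nativesockets

# Create a TCP listening socket on any free port and make it
# non-blocking, so the selector does all the waiting for us.
let sock = newSocket()
sock.setSockOpt(OptReuseAddr, true)
sock.bindAddr(Port(0))
sock.listen()
sock.getFd().setBlocking(false)

# Register the socket for read-readiness; the `int` type parameter
# is user data attached to each registration (unused here).
var sel = newSelector[int]()
sel.registerHandle(sock.getFd(), {Event.Read}, 0)

# Poll for up to 10 ms. No client connects, so no events fire and
# the returned seq of ReadyKey is empty.
let ready = sel.select(10)
echo ready.len

sel.close()
sock.close()
```

A real prototype would loop over `select`, dispatch on the ready file descriptors, and run per-socket state machines, but the point is that nothing above allocates futures or closures.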
@dom96
What I would personally like to see is alloc-free async/await which is, I believe, what Rust achieves.
I would also like to see this happen. It's one of my long term goals.
My newruntime async experiments are not exactly this, but they are a step in this direction. My "disposable async" makes the memory usage much more deterministic (i.e. more amenable to memory-constrained / GC-challenged environments).
It's based on the "extrusion pass" described in this paper that @araq pointed me to: https://hal.inria.fr/inria-00537964v2/document
Ultimately, though, to make this work, what is needed is "closure-free" async. But the state that was kept in the closures needs to go somewhere! Rust solves this by placing all the state in the Future object itself.
I can see a future where Nim Futures grow similar superpowers :-)
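As a hedged illustration of "state in the Future object" (a hand-written sketch of what a compiler transform might emit, not Nim's actual Future type), the variables that would live in a closure are hoisted into a plain object, and the async body becomes an explicit state machine advanced by a `poll` proc, with no heap allocation at all:

```nim
type
  PollState = enum psPending, psDone
  CounterFuture = object  # all state lives inline; no closure, no GC
    state: int            # which resume point we are at
    acc: int              # a local variable hoisted out of the closure

proc poll(f: var CounterFuture): PollState =
  # Each branch corresponds to the code between two `await` points.
  case f.state
  of 0:
    f.acc += 1
    f.state = 1
    psPending
  of 1:
    f.acc += 1
    f.state = 2
    psPending
  else:
    psDone

var fut = CounterFuture()  # stack-allocated
while fut.poll() == psPending:
  discard                  # a real executor would park until woken
echo fut.acc               # the result lives inside the future itself
```

This is essentially the shape Rust's compiler generates for async fns: the future is a value type sized to hold every live-across-await variable, and polling it drives the state machine forward.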
Some bookmarked articles I have about how Rust achieves its async optimizations: https://tmandry.gitlab.io/blog/posts/optimizing-await-1/ https://aturon.github.io/blog/2016/08/11/futures/