import std/[strformat]
import std/[os]
import std/[times,monotimes]
import std/[coro]

var fibers: seq[CoroutineRef] = @[]
proc doSleep() = sleep 3000

let st = getMonoTime()
for i in 0..<10:
  fibers.add start(doSleep)
for i in fibers.items:
  i.wait
let ed = getMonoTime()
echo fmt"elapsed: {(ed-st).inMilliseconds}ms"
SIGSEGV: Illegal storage access. (Attempt to read from nil?)
Segmentation fault
I don't think raw coroutines are copyable or movable anyway; they would need to be created in-place.
And they are incompatible with GCs.
You need to call run() before waiting on those fibers; start() doesn't run them immediately.
import std/[strformat]
import std/[os]
import std/[times,monotimes]
import std/[coro]

proc doSleep() = sleep 3000

var fibers: seq[CoroutineRef] = @[]
let st = getMonoTime()
for i in 0..<10:
  fibers.add start(doSleep)
run() # add this
for i in fibers.items:
  i.wait
let ed = getMonoTime()
echo fmt"elapsed: {(ed-st).inMilliseconds}ms"
Though it shouldn't crash when there's nothing to wait for.
FYI, fibers provide cooperative multitasking on a single thread, not parallel execution; a blocking call like os.sleep stops the whole thread, and only suspend() lets the other fibers run in the meantime.
Here's an example:
from std/strformat import fmt
from std/coro import run, CoroutineRef, start, wait, suspend
from std/os import sleep
import std/[monotimes, times]

proc job(n: int): (proc()) =
  proc() =
    for c in "ABCD":
      echo fmt"worker {n:0d}:{c}"
      sleep 100
      suspend()

const workerTotal = 10
var workers = newSeq[CoroutineRef](workerTotal)
let st = getMonoTime()
for i, w in workers.mpairs:
  w = start(job(i))
run()
for w in workers:
  w.wait()
let ed = getMonoTime()
echo fmt"elapsed: {(ed-st).inMilliseconds}ms"
# elapsed should be ~4000ms because there are 10 workers and
# each job has 4 work units of 100 ms, so:
# 4000 ~= 10 * 4 * 100
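For contrast, here is a variant of the same example (my own sketch, using the same std/coro calls as above) where the blocking sleep 100 is replaced by suspend(0.1). Since suspend yields to the scheduler instead of blocking the thread, the ten 0.1-second waits overlap and the whole run should finish in roughly 400ms instead of ~4000ms; it's the same effect the UPDATE further down gets with suspend 3.0.

from std/strformat import fmt
from std/coro import run, CoroutineRef, start, wait, suspend
import std/[monotimes, times]

proc job(n: int): (proc()) =
  proc() =
    for c in "ABCD":
      echo fmt"worker {n}:{c}"
      suspend(0.1) # yield to the scheduler; the other fibers run during the wait

const workerTotal = 10
var workers = newSeq[CoroutineRef](workerTotal)
let st = getMonoTime()
for i, w in workers.mpairs:
  w = start(job(i))
run()
for w in workers:
  w.wait()
let ed = getMonoTime()
echo fmt"elapsed: {(ed-st).inMilliseconds}ms"
# the waits now overlap, so elapsed should be ~400ms rather than ~4000ms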
As @mratsim mentioned, you cannot move the workers, so you can't distribute them to runners in separate threads.
UPDATE:
import std/[strformat,strutils]
import std/[os,cmdline]
import std/[times,monotimes]
import std/[coro]

let fiberCount = try: paramStr(1).parseInt except: 1024
echo fmt"{fiberCount = }"

var fibers: seq[CoroutineRef] = @[]
# suspend yields to the scheduler instead of blocking the thread,
# so all the 3-second waits overlap
proc doSleep() = suspend 3.0

let st = getMonoTime()
for i in 0..<fiberCount: fibers.add start(doSleep)
run()
for i in fibers.items: i.wait
let ed = getMonoTime()
echo fmt"elapsed: {(ed-st).inMilliseconds}ms"
1024 fibers:
fiberCount = 1024
elapsed: 3029ms
40000 fibers:
fiberCount = 40000
out of memory
std/coro actually does stackful coroutines, i.e. fibers.
A fiber needs to create its own stack and switch to it.
Nim threads use 2MB stacks; I'm not sure what the stack size of std/coro is, but it also needs to be sizeable.
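If the per-fiber stack is what runs out of memory at 40000 fibers, note that start() also takes an optional stacksize argument, so each fiber can be given a smaller fixed stack. A rough, untested sketch along the lines of the UPDATE above; the 32 KiB figure is just a guess, and the right size depends on what the fiber actually calls:

import std/[strformat]
import std/[coro]

proc doSleep() = suspend 3.0

var fibers: seq[CoroutineRef] = @[]
for i in 0..<40_000:
  # assumption: 32 KiB per stack is enough for a fiber that only suspends;
  # 40_000 * 32 KiB is ~1.25 GB, much less than with the default stack size
  fibers.add start(doSleep, stacksize = 32 * 1024)
run()
for f in fibers.items:
  f.wait
echo fmt"finished {fibers.len} fibers"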
See also "Fibers under the magnifying glass": https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2018/p1364r0.pdf
2.1 Memory Footprint
Fibers have comparable memory footprint to that of the operating system threads, saving about 1% due to not needing to save kernel context and kernel stack.
Because of the high memory footprint of fibers, several mitigation techniques are used:
2.1.1 Fixed size very small stack
2.1.2 Dynamic stack with guard page
2.1.3 Split stacks/segmented stacks
The Rust link is not about abandoning coroutines, but about switching from memory-footprint mitigation strategy 2.1.3 to 2.1.2. The Go link is similarly about switching from mitigation 2.1.3 to a Go-only mitigation that relies on some useful Go properties. Stackful coroutines are prominent in Lua, stackless(?) ones in Kotlin, and they also exist in D, where they're not shown off very well by the "TODO: efficiency" default executor.
As a poor man's OS thread, coroutines are dead, and that's why Java dropped them, but that's not why they're used today. The other use is their benefit to structuring programs, where reduced coroutines are also clearly useful in the form of generators (Python) and closure iterators (Nim).
I'd think Kotlin should have some pretty good articles on coroutines explaining why it added them, but so far the best article I've seen is about itch.io's use of them with OpenResty (Lua + Nginx): https://leafo.net/posts/itchio-and-coroutines.html
Of course there is still CPS, where the overhead of a continuation is only tens of bytes, no stacks need to be allocated, and no support from the OS is needed to switch contexts.
CPS has been used to build coroutines, actors, fibers and more, and it easily scales to thousands or millions of concurrent flows of control.
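To make that concrete, here is a minimal hand-rolled sketch of the idea (my own toy code, not the API of any existing CPS library for Nim): each flow of control is a chain of closures where every step returns the next one, and a plain queue plus a loop interleaves them. The only per-flow state is the closure environment, so no stacks are allocated.

import std/deques

type
  Cont = ref object
    fn: proc (): Cont {.closure.} # runs one step, returns the next one (nil = done)

proc step(fn: proc (): Cont {.closure.}): Cont = Cont(fn: fn)

proc countdown(name: string; n: int): Cont =
  ## one "flow of control": prints n, then continues with n-1
  if n == 0: return nil
  result = step(proc (): Cont =
    echo name, " ", n
    return countdown(name, n - 1))

var ready = initDeque[Cont]()
for name in ["a", "b", "c"]:
  ready.addLast countdown(name, 3)

# trampoline: run the ready continuations round-robin until all flows finish
while ready.len > 0:
  let next = ready.popFirst().fn()
  if next != nil:
    ready.addLast next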