I'm new to Nim. I want to use Nim both on microcontrollers and in server-side microservice apps. I want/need lightweight tasks (simply mimicking Erlang processes or Gevent in Python). So I have the following example code (mostly written by AI):
# simple_scheduler.nim
import std/times
import std/os

type
  Coroutine = iterator(): int {.closure.}
  Scheduler = object
    coroutines: array[2, Coroutine]
    sleepUntil: array[2, float]

var scheduler: Scheduler

template sleep(ms: int): untyped =
  yield ms

proc addTask*(self: var Scheduler, coroutine: Coroutine) =
  for i in 0..<self.coroutines.len:
    if self.coroutines[i] == nil:
      self.coroutines[i] = coroutine
      self.sleepUntil[i] = 0.0
      break

proc run*(self: var Scheduler) =
  let startTime = epochTime()
  while true:
    let currentTime = epochTime() - startTime
    for i in 0..<self.coroutines.len:
      if self.coroutines[i] != nil and currentTime >= self.sleepUntil[i]:
        let sleepTime = self.coroutines[i]()
        if sleepTime > 0:
          self.sleepUntil[i] = currentTime + (sleepTime.float / 1000.0)
    os.sleep(10)

# Coroutine with while true loop and yield - marked as closure
iterator helloCoroutine(): int {.closure.} =
  while true:
    echo "hello"
    sleep 5000

iterator worldCoroutine(): int {.closure.} =
  while true:
    echo "world"
    sleep 3000

when isMainModule:
  var sched = Scheduler()
  sched.addTask(helloCoroutine)
  sched.addTask(worldCoroutine)

  echo "Simple Scheduler Started!"
  echo "hello every 5s, world every 3s"
  echo "Press Ctrl+C to stop"
  echo ""

  sched.run()
This example works as intended, but it uses the GC. Can I make it avoid the GC while keeping this part in the same format:
  while true:
    echo "world"
    sleep 3000
What's the point? I changed one line to:

  echo "world ", getOccupiedMem()

Running the program, it reports 64 bytes. It doesn't use the GC; mission accomplished. Please question your AI's results a little bit more.
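If you want to probe this yourself, here is a minimal standalone sketch (file name and iterator are illustrative, not from the thread). getOccupiedMem() reports the bytes currently in use on Nim's heap; for closure iterators, that is essentially just their captured environments:

```nim
# mem_check.nim -- illustrative; build and run with: nim c -r --mm:arc mem_check.nim
iterator ticker(): int {.closure.} =
  while true:
    yield 1000

discard ticker()   # resume once; the closure's environment lives on the heap
echo "heap in use: ", getOccupiedMem(), " bytes"
```

The exact number varies by Nim version and memory manager, but it stays constant across iterations, which is the point: nothing in the scheduler loop allocates.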
This particular example is written with microcontrollers in mind. Let's say it will actually blink a red LED in place of printing "hello", and a green LED in place of "world".
Using a garbage collector in such systems may or may not be a problem (real-time execution predictability, binary size, etc.), so I wanted to see if I can completely get rid of the garbage collector for MCU environments.
You don't have to lecture me about how some unspecified GC technology might or might not cause problems on unspecified platforms.
I'm lecturing you that Nim doesn't even use classical GC algorithms, so it's a non-issue to begin with. It uses optimizing reference counting plus an optional cycle collector. Not unlike the almighty Rust, except it's so convenient to use that you were not even aware of what was going on.
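To make that concrete, here is an illustrative sketch (not from the thread): under --mm:arc the release of a ref is injected by the compiler at its last use, so the deallocation point is known at compile time rather than decided by a collector at runtime:

```nim
type Resource = ref object
  id: int

proc demo() =
  let r = Resource(id: 1)   # a single refcounted heap allocation
  echo r.id
  # with --mm:arc the decref/free for `r` is inserted here by the
  # compiler, at r's last use -- no tracing collector, no pauses

demo()
```

Since nothing here forms a cycle, the optional cycle collector (--mm:orc) never has any work to do either.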
Nim runs great on embedded! You also want to run with --mm:arc or --mm:atomicArc to avoid the cycle collector, which makes execution more deterministic. ARC code is also only generated when you use a ref type, and even then the overhead is very light.
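For example, a project-local config.nims (the exact switches here are illustrative defaults, not required) can pin the memory manager and trim the binary for an MCU target:

```nim
# config.nims -- illustrative build settings; adjust for your target
switch("mm", "arc")       # deterministic reference counting, no cycle collector
switch("d", "release")
switch("opt", "size")     # favor small binaries over speed
switch("panics", "on")    # panics become deterministic aborts, shrinking codegen
```

With this in place, a plain `nim c simple_scheduler.nim` picks up the flags automatically.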
Any proc that takes an openArray can be used with static arrays or seqs. Also, seqs and strings are heap-allocated but not ARC'ed. Finally, you can use ptr instead of ref if you want to do it old school.
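A small sketch of those two points (the names are illustrative):

```nim
# openArray accepts both fixed-size arrays (no heap at all) and seqs
proc total(xs: openArray[int]): int =
  for x in xs:
    result += x

let fixed = [1, 2, 3]      # stack-allocated array, needs no allocator
let dynamic = @[4, 5, 6]   # seq: heap-allocated, freed deterministically
echo total(fixed)          # 6
echo total(dynamic)        # 15

# "old school": manual memory with ptr instead of ref
type Node = object
  value: int

let p = create(Node)       # create() returns untraced ptr Node memory
p.value = 42
echo p.value               # 42
dealloc(p)                 # the caller is responsible for freeing it
```

The ptr route gives you full manual control, at the cost of the usual manual-memory footguns: no compiler-injected frees, so every create() needs a matching dealloc().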