I was reading through the source of nim-forum and wondering how the code in the get "/categories.json" route is executed.
The simplified code of that route:
get "/categories.json":
let data = getAllRows(db, "select * from categories")
resp $(%data), "application/json"
As far as I understand, the route handler is async, but it calls the non-async proc db_sqlite.getAllRows.
proc getAllRows*(db: DbConn, query: SqlQuery, args: varargs[string, `$`]): seq[Row]
How would that request handler be executed? Would the call to getAllRows inside it block the async event loop and the other requests being handled in parallel?
Yep, it will.
Thankfully SQLite is fast enough that this doesn't matter in practice, or at least it hasn't mattered for the past couple of years.
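To see the effect in isolation, here's a minimal sketch of my own (not code from nim-forum): one async proc makes a blocking os.sleep call, and a second proc, which only awaits a 10 ms timer, cannot resume until that blocking call returns.

import asyncdispatch, os, times

proc slowHandler() {.async.} =
  await sleepAsync(1)      # yield once so both handlers get started
  echo "slow: blocking at ", now().second
  sleep(2000)              # blocking call: the whole event loop stalls here
  echo "slow: done at     ", now().second

proc fastHandler() {.async.} =
  echo "fast: waiting at  ", now().second
  await sleepAsync(10)     # asks to resume ~10 ms later...
  echo "fast: done at     ", now().second  # ...but only runs once the blocking sleep ends

proc main() {.async.} =
  let a = slowHandler()
  let b = fastHandler()
  await a
  await b

waitFor main()

The "fast: done" line prints roughly two seconds after "fast: waiting", even though it only asked to wait 10 ms, because the dispatcher is stuck inside the blocking sleep.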
I was thinking about using a thread pool. Would something like the code below, with a spawn_async template, work?
import asyncdispatch, os, sugar, threadpool, times

template spawn_async(the_result, code) =
  # Run `code` on the thread pool and poll the FlowVar until it's ready,
  # yielding to the event loop between checks.
  var cresult = spawn (() => code)()
  while true:
    if cresult.is_ready: break
    await sleep_async 1
  the_result = ^cresult

proc fna(): Future[string] {.async.} =
  echo "fna 1" & $(now().second)
  spawn_async result:
    sleep(2000)
    "a"
  echo "fna 2" & $(now().second)

proc fnb(): Future[string] {.async.} =
  echo "fnb 1" & $(now().second)
  spawn_async result:
    sleep(2000)
    "b"
  echo "fnb 2" & $(now().second)

proc main() {.async.} =
  let ra = fna()
  let rb = fnb()
  echo await ra
  echo await rb

wait_for main()
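If the spawned tasks really run on the thread pool, the two 2-second sleeps should overlap, so main should print "a" and "b" after roughly two seconds total rather than four.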
P.S.
I also tried a proc instead of a template, but it won't compile, complaining about GC safety and the inability to use closures with threads.
proc async_spawn[T](fn: () -> T): Future[T] {.async.} =
  var cresult = spawn fn()
  while true:
    if cresult.is_ready:
      return ^cresult
    await sleep_async 1
Sure, that approach works and is what I use in Nim in Action.
That said, it's wasteful since you're busy-looping, so I wouldn't use it for anything that needs to be performant. My hope is that we can implement something better (see https://github.com/nim-lang/RFCs/issues/304).
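If you do keep the polling approach for now, a small tweak (just a sketch building on your template, with a name I made up; it needs the same imports as your example) is to check the FlowVar less often, say every 20 ms, which trades a little completion latency for far fewer wake-ups:

template spawn_async_coarse(the_result, code) =
  # Same idea as spawn_async above, but poll every 20 ms instead of every 1 ms.
  var cresult = spawn (() => code)()
  while not cresult.is_ready:
    await sleep_async 20
  the_result = ^cresult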
A question about Jester/httpbeast: it seems to be both async and multi-threaded, so I assume there are N event loops, one per thread. Even if the event loop of one thread is blocked, the other threads should still keep working and not be blocked, right?
I tried to create an example with multiple threads, but it still behaves as if it were single-threaded.
import os, sugar, times
import jester

routes:
  get "/":
    echo "started" & $(now().second)
    sleep(100)
    echo "finished" & $(now().second)
    resp "Hello world"
In single-threaded mode it should serve 10 req/sec; with N threads it should serve N*10 req/sec. I compiled it with 4 threads (see the Nim compiler output at the end of the post).
I tested it with wrk, using 2 parallel connections for 10 seconds, so it should serve about 200 requests.
But it served only 97, as if only one thread was working. Why?
wrk -t2 -c2 -d10s http://localhost:5000
Running 10s test @ http://localhost:5000
  2 threads and 2 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   206.31ms   15.01ms  306.89ms   97.94%
    Req/Sec      4.00      0.25      5.00    96.91%
  97 requests in 10.10s, 13.74KB read
Requests/sec: 9.60
Transfer/sec: 1.36KB
The Jester app was compiled with 4 threads; compiler output:
nim c -d:release --threads:on -r play.nim
Hint: used config file '/balex/applications/nim-1.4.2/config/nim.cfg' [Conf]
Hint: used config file '/balex/applications/nim-1.4.2/config/config.nims' [Conf]
....................................................................
/Users/alex/.nimble/pkgs/jester-0.5.0/jester.nim(1298, 9) Hint: Asynchronous route: match. [User]
/alex/projects/bon_nim/play.nim(3, 12) Warning: imported and not used: 'sugar' [UnusedImport]
CC: stdlib_assertions.nim
CC: stdlib_dollars.nim
CC: stdlib_locks.nim
CC: stdlib_sharedlist.nim
CC: stdlib_parseutils.nim
CC: stdlib_math.nim
Hint: [Link]
Hint: 96430 lines; 3.051s; 145.43MiB peakmem; Release build; proj: /alex/projects/bon_nim/play.nim; out: /alex/projects/bon_nim/play [SuccessX]
Hint: /alex/projects/bon_nim/play [Exec]
INFO Jester is making jokes at http://0.0.0.0:5000
Starting 4 threads
Listening on port 5000