As far as I know, ORC made it possible to share global variables. I'm creating a networking service which maintains stateful Session objects in memory. Is that possible to do with ORC? I'm using the --mm:orc switch, but the compiler still complains that the procs are not gcsafe.
import std/[httpcore, asynchttpserver, asyncnet, tables, asyncdispatch]

type Session* = ref object
  id*: string
  count*: int

var sessions: Table[string, Session]

proc http_handler(req: Request): Future[void] {.async.} =
  let id = "1"
  if id notin sessions:
    sessions[id] = Session(id: id) # <= problem here, global var
  let session = sessions[id]
  await respond(req, Http200, "Ok")

proc background_processing(_: AsyncFD): bool =
  for _, session in sessions:
    session.count += 1

var server = new_async_http_server()
async_check serve(server, Port(5000), http_handler, "localhost")
add_timer(1000, false, background_processing)
run_forever()
When I had a forgotten global var in my code with ORC, I had weird problems that didn't show any error related to the global var. When I removed it the problems went away. So I'd say: don't do it.
It's safer to send data over channels, and it's quite quick. So you could put your table in a thread that owns it, and other threads send read requests to that thread, and get data back via channels.
It seems like overkill, but apparently this is the best known method of safely sharing data between threads in Nim.
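A minimal sketch of that pattern (my own illustration, not code from this thread; ReadReq and ownerLoop are made-up names), compiled with --threads:on:

import std/[tables, options]

type
  ReadReq = object
    id: string
    reply: ptr Channel[Option[int]]  # where the owner thread sends the answer

var
  requests: Channel[ReadReq]
  owner: Thread[void]

proc ownerLoop() {.thread.} =
  # This thread is the only one that ever touches `counts`,
  # so no locking is needed.
  var counts = {"1": 41}.toTable
  while true:
    let req = requests.recv()
    if req.id in counts:
      req.reply[].send(some(counts[req.id]))
    else:
      req.reply[].send(none(int))

requests.open()
createThread(owner, ownerLoop)

# Caller side: open a reply channel, send a request, block until answered.
var reply: Channel[Option[int]]
reply.open()
requests.send(ReadReq(id: "1", reply: addr reply))
echo reply.recv()   # prints: some(41)

Writes work the same way: instead of mutating the table directly, send a write message to the owner thread.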
I just noticed that while the example in the Introducing ORC blog post has sessions, it doesn't use them. How are these sessions meant to be used in the response? The code from the example is copied below:
import asynchttpserver, asyncdispatch, strutils, json, tables, streams

# about 135 MB of live data:
var sessions: Table[string, JsonNode]
for i in 0 ..< 10:
  sessions[$i] = parseJson(newFileStream("1.json", fmRead), "1.json")

var served = 0

var server = newAsyncHttpServer()
proc cb(req: Request) {.async.} =
  inc served
  await req.respond(Http200, "Hello World")
  if served mod 10 == 0:
    when not defined(memForSpeed):
      GC_fullCollect()

waitFor server.serve(Port(8080), cb)
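I imagined something like this (my own sketch, not from the blog post; the blog post example only keeps the data alive, it never reads it in the handler):

import asynchttpserver, asyncdispatch, json, tables

var sessions: Table[string, JsonNode]
sessions["1"] = %*{"count": 0}

var server = newAsyncHttpServer()
proc cb(req: Request) {.async.} =
  let id = "1"   # in a real service this would come from the request
  if id in sessions:
    await req.respond(Http200, $sessions[id])
  else:
    await req.respond(Http404, "no such session")

waitFor server.serve(Port(8080), cb)

With --threads:on this hits the same "not gcsafe" complaint discussed above, since cb reads a global GC'd table.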
Finally solved it: I had to disable the --threads:on switch and mark the procs with gcsafe, and it works both with --mm:orc and without. It's single-threaded, but for my needs that's enough.
I just wonder why the compiler can't tell that the proc is GC-safe and instead requires an explicit .gcsafe. annotation on the proc.
The working code; it works with both nim r play.nim and nim r --mm:orc play.nim:
import std/[httpcore, asynchttpserver, asyncnet, tables, asyncdispatch, locks]

type Session* = ref object
  id*: string
  count*: int

var sessions: Table[string, Session]

proc http_handler(req: Request): Future[void] {.async, gcsafe.} =
  let id = "1"
  if id notin sessions:
    sessions[id] = Session(id: id)
  let session = sessions[id]
  await respond(req, Http200, $session.count)

proc background_processing(_: AsyncFD): bool {.gcsafe.} =
  for _, session in sessions:
    session.count += 1

var server = new_async_http_server()
async_check serve(server, Port(5000), http_handler, "localhost")
add_timer(1000, false, background_processing)
run_forever()
Why does the compiler need that proc to be explicitly marked as gcsafe? Can't the compiler infer that automatically?
Because serve is explicitly marked to require gcsafe. It's like how the compiler won't automatically cast int32 to int64 for you.
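A small self-contained sketch of that constraint, with made-up names (runner stands in for serve, bump for the handler, counter for the global table); with --threads:on the access to the global has to be asserted safe:

var counter = new(int)            # global GC'd memory

proc runner(cb: proc () {.gcsafe.}) =
  cb()

proc bump() {.gcsafe.} =
  # Without this block, compiling with --threads:on rejects the global access.
  # We claim responsibility: in a single-threaded program this access is safe.
  {.cast(gcsafe).}:
    counter[].inc

runner(bump)
echo counter[]                    # prints: 1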
No, the reason we need to mark gcsafe in this case seems to be a different one. The compiler can infer gcsafe.
I think the problem here is that it wrongly analyses this proc as not gcsafe and then stops and won't apply that inference.
Wrongly, because it's single-threaded mode and so it is safe, yet the compiler prints a warning that it's not thread-safe. I created an issue: https://github.com/nim-lang/Nim/issues/21503
Don't shared tables support only non-ref keys and values? It says:
The compiler knows the procedure accesses global GC'd memory which is not technically gcsafe as it can create a race condition
There could be no race condition in single thread mode.
"since there are other methods of threading that one could use outside of the Nim stdlib"
I don't understand. I run the nim process as single-threaded; the whole process is single-threaded, and no code in the process could do multithreading.