I would like to share the seq[string] variable across multiple threads.
I found the following forum thread, which contains the comment "you should not share any heap-allocated managed objects (refs, strings, seqs) across threads" as well as a note saying "I would suggest using the standard library's SharedList".
https://forum.nim-lang.org/t/3896
So I decided to give SharedList a try, and created the following code.
import sharedlist
import os
import locks

var thr: array[0..1, Thread[void]]
var list: SharedList[string]
list.init()

proc thredA() {.thread.} =
  for i in 0..10:
    list.add("A")
    sleep(100)

proc thredB() {.thread.} =
  for i in 0..10:
    list.add("B")
    sleep(101)

createThread(thr[0], thredA)
createThread(thr[1], thredB)
joinThreads(thr)
echo repr(list)
However, the following error occurs.
SIGSEGV: Illegal storage access. (Attempt to read from nil?)
Why is this? Also, what is the correct way to use SharedList?
I would appreciate it if you could let me know. Thank you very much.
There's an example of a shared Table at the end of this article: https://nim-lang.org/araq/concurrency2.html
I read an article about channels performing well in Go when there are a large number of threads. Benchmarking will guide you.
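For illustration, here is a minimal sketch of what the channel approach could look like in Nim, assuming --threads:on (the built-in Channel deep-copies each message, and the workerA/workerB names are placeholders of mine). Only the main thread touches the resulting seq, so no GC-managed data is shared between threads:

import os

var chan: Channel[string]   # Channel is part of the system module; messages are deep-copied

proc workerA() {.thread.} =
  for i in 0..10:
    chan.send("A")
    sleep(100)

proc workerB() {.thread.} =
  for i in 0..10:
    chan.send("B")
    sleep(101)

var thr: array[0..1, Thread[void]]
chan.open()
createThread(thr[0], workerA)
createThread(thr[1], workerB)

# Receive all 22 messages (2 threads x 11 sends) in the main thread only.
var results: seq[string]
for i in 0..21:
  results.add(chan.recv())

joinThreads(thr)
chan.close()
echo results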
There's an example of a shared Table at the end of this article: https://nim-lang.org/araq/concurrency2.html
Stop it please, that article is outdated and makes my fingernails curl...
Really? Ok, so channels are the only (or main) way to support concurrency going forward? There should be an article or something official that states this. Unless I missed it.
No, channels are far from the only thing we can offer. --gc:orc was a game changer, allowing for techniques that otherwise were reserved for C++ and Rust: We can do atomic reference counting. We can send subgraphs without the deep copies. We can offer more convenient locking mechanisms and lockfree solutions.
However, today's standard library is still trying to catch up with the language; you're much better off with external Nimble packages.
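To illustrate the "lockfree solutions" point with something from the standard library (this is only a sketch of a lock-free shared counter via atomics, not the atomic reference counting mentioned above; the worker name and the counts are made up), compiled with --threads:on --gc:orc:

import atomics

var counter: Atomic[int]   # plain shared memory, no GC-managed data involved

proc worker() {.thread.} =
  for i in 0 ..< 1000:
    discard counter.fetchAdd(1)   # atomic increment, no lock required

var thr: array[0..3, Thread[void]]
for i in 0..3:
  createThread(thr[i], worker)
joinThreads(thr)

echo counter.load()   # prints 4000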
On a related note, how come the following code works:
import threadpool, locks

type SyncList = tuple[
  lock: Lock,
  list: seq[string],
]

var list: SyncList
list.list = newSeq[string]()
initLock(list.lock)

proc setA(l: ptr SyncList) =
  withLock l[].lock:
    l[].list.add("A")

proc setB(l: ptr SyncList) =
  withLock l[].lock:
    l[].list.add("B")

spawn setA(addr list)
spawn setB(addr list)
sync()

withLock list.lock:
  doAssert list.list == @["A", "B"] or list.list == @["B", "A"]
deinitLock(list.lock)
But this next one does not?
import threadpool, locks

type SyncList = tuple[
  lock: Lock,
  list: seq[string],
]

var list: SyncList
list.list = newSeq[string]()
initLock(list.lock)

proc setA(l: ptr SyncList) =
  withLock l[].lock:
    l[].list.add("A")

proc setB(l: ptr SyncList) =
  withLock l[].lock:
    l[].list.add("B")

var threads: array[2, Thread[ptr SyncList]]
createThread(threads[0], setA, addr list)
createThread(threads[1], setB, addr list)
joinThreads(threads)

withLock list.lock:
  doAssert list.list == @["A", "B"] or list.list == @["B", "A"]
deinitLock(list.lock)
Is there any difference between threadpool and threads that I should be aware of?
--gc:orc was a game changer
Indeed, the initial code (with a minor edit) works with --gc:arc/orc.
import sharedlist
import os
import sugar

var thr: array[0..1, Thread[void]]
var list: SharedList[string]
list.init()

proc thredA() {.thread.} =
  for i in 0..10:
    list.add("A")
    sleep(100)

proc thredB() {.thread.} =
  for i in 0..10:
    list.add("B")
    sleep(101)

createThread(thr[0], thredA)
createThread(thr[1], thredB)
joinThreads(thr)
echo collect(newSeq, for i in list: i)
compile with nim r --threads:on --gc:orc
repr and $ don't work; you get 'error: pthread_mutex_t has no member named abi', see https://github.com/nim-lang/Nim/issues/14873.
For example, I haven't heard anything from Mamy in four weeks. Has he fallen ill, or does he no longer have time to maintain Weave?
He's alive and well but very busy. :-)
So, there must be some confidence that a PL and its essential libraries will be maintained in the future.
Certainly, but it's not easy, as "the future" is a long time.