Hello, just wanted to share my post on dev.to, which is an overview of ARC/ORC. I don't have much experience writing posts like this, so please point out any errors you find :) I think one of the ways to promote Nim is to write posts like this on different websites (habr, medium, dev.to, etc.).
Post itself is https://dev.to/yardanico/what-are-arc-and-orc-in-nim-3191
I love it. Very good job!
Maybe I would phrase a couple of things differently, but everything seems to be correct. (And that's rare!)
Thanks a lot for your feedback :)
I'm planning to post it on https://habr.com - both the original English version and a Russian translation - since habr.com is the most popular Russian IT/tech website with user-written articles.
Interesting. This helps me understand what's going on. I was a big fan of "owned refs", but I didn't follow the discussion last year. I wanted to wait for the feature to become available before diving in. Then ARC became the top choice.
ARC is very nice. I'm still a fan. But I'm not sure it solves my main problem: being able to avoid copies in multithreaded code without resorting to raw pointers. My usual model is to create a heavy data structure in the main thread and to use parts of it in worker threads. I had assumed that if my instance is not "var", then I could pass it into the thread as an alias, detected by ARC. But from reading the forum I think it might be copied. This, for me, is the part that needs the most clarification. Maybe I need to wait for "owned refs" to be fully implemented?
FWIW, I also loved the old "memory-regions", but that didn't solve my multi-threading problem.
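To make that model concrete, here is a minimal sketch of sharing a heavy structure with a worker thread through a raw pointer - the workaround being discussed, not an endorsed API. `HeavyData` and `worker` are illustrative names; compile with `--threads:on` on older Nim versions.

```nim
type
  HeavyData = object
    values: array[1024, float]

var heavy = HeavyData()            # built once on the main thread
for i in 0 ..< heavy.values.len:
  heavy.values[i] = float(i)

var partial: float                 # written by the worker, read after join

proc worker(data: ptr HeavyData) {.thread.} =
  # Reads part of the shared structure through the pointer: no copy is
  # made, but the caller must guarantee `heavy` outlives the thread.
  for i in 0 ..< 512:
    partial += data.values[i]

var t: Thread[ptr HeavyData]
createThread(t, worker, addr heavy)
joinThread(t)
echo partial
```

The `ptr` sidesteps ARC entirely, which is exactly the safety trade-off the question is about: nothing stops the worker from outliving the data it points to.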
proc `=destroy`*(event: var FlowEvent) =
  if event.e.isNil:
    return
  
  let count = event.e.refCount.load(moRelaxed)
  fence(moAcquire)
  if count == 0:
    # We have the last reference
    if event.e.kind == Iteration:
      wv_free(event.e.union.iter.singles)
    # Return memory to the memory pool
    recycle(event.e)
  else:
    discard fetchSub(event.e.refCount, 1, moRelease)
  event.e = nil

proc `=sink`*(dst: var FlowEvent, src: FlowEvent) {.inline.} =
  # Don't pay for atomic refcounting when the compiler can prove there is no refcount change
  `=destroy`(dst)
  system.`=sink`(dst.e, src.e)

proc `=`*(dst: var FlowEvent, src: FlowEvent) {.inline.} =
  # Increment first so a self-assignment can't free the event before the incref
  if not src.e.isNil:
    discard fetchAdd(src.e.refCount, 1, moRelaxed)
  `=destroy`(dst)
  dst.e = src.e
Note: these hooks weren't tried with seq/string/ref fields, but plain objects work well.
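For readers unfamiliar with the hook mechanism, the same pattern can be sketched self-contained with a plain (non-atomic) refcount. `Payload` and `Handle` are made-up names for illustration, not types from the snippet above, and the modern `=copy` spelling is used:

```nim
type
  Payload = object
    refCount: int      # biased: 0 means "one reference", as in the snippet above
    data: int
  Handle = object
    e: ptr Payload

proc `=destroy`(h: var Handle) =
  if h.e.isNil: return
  if h.e.refCount == 0:
    dealloc(h.e)       # we held the last reference
  else:
    dec h.e.refCount
  h.e = nil

proc `=sink`(dst: var Handle, src: Handle) =
  `=destroy`(dst)
  dst.e = src.e        # a move transfers ownership: no refcount change

proc `=copy`(dst: var Handle, src: Handle) =
  if not src.e.isNil:
    inc src.e.refCount # incref first, in case dst aliases src
  `=destroy`(dst)
  dst.e = src.e

proc newHandle(x: int): Handle =
  result.e = create(Payload)
  result.e.data = x

var a = newHandle(42)
var b = a              # the compiler inserts `=copy` (or `=sink` if it can prove a move)
echo b.e.data
```

The compiler injects these hooks at every assignment, copy, and scope exit, which is why the snippet above never calls them explicitly.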
The post is now featured on Nim's homepage:
https://nim-lang.org/blog/2020/10/15/introduction-to-arc-orc-in-nim.html
You can discuss it here, or if you prefer you can do it on Reddit or Hacker News.
My usual model is to create a heavy data structure in the main thread and to use parts of it in worker threads.
If your heavy data structure is read-only, that's OK. But if worker threads are modifying (different) parts of the same data structure, then your cache performance will suffer from false sharing: each write operation dirties cache lines shared across cores, causing unnecessary cache fetches and CPU stalls that degrade performance.
There was a great talk by Scott Meyers about cache performance. It is a bit dated (2014) but is still very relevant https://www.youtube.com/watch?v=WDIkqP4JbkE
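The usual mitigation is to pad per-thread data out to a cache line so writes from different cores never touch the same line. A minimal sketch, assuming 64-byte cache lines (all names are illustrative; compile with `--threads:on` on older Nim):

```nim
const CacheLineSize = 64

type
  PaddedCounter = object
    value: int64
    # Pad the slot to a full cache line so neighbouring slots
    # never share a line between cores.
    pad: array[CacheLineSize - sizeof(int64), byte]

var counters: array[4, PaddedCounter]  # one slot per worker thread

proc work(id: int) {.thread.} =
  for _ in 0 ..< 1_000_000:
    inc counters[id].value  # each thread dirties only its own cache line

var threads: array[4, Thread[int]]
for i in 0 ..< 4:
  createThread(threads[i], work, i)
joinThreads(threads)
echo counters[0].value
```

Without the `pad` field all four counters would sit in one or two cache lines and every increment would invalidate the other cores' copies - exactly the effect described above.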
Just wanted to share that a member of the Nim Telegram chat (https://t.me/nim_lang) made an Italian translation of the article - https://rc-05.github.io/articoli/arc-orc-nim
And I made a Russian translation today - https://habr.com/ru/post/523674/