I just noticed that my little WebSocket-based server is leaking memory like crazy — about 1GB every time I run the test suite (which serves ~40MB of data). If I turn off --gc:orc, the leaks go away. (This is on devel from a few days ago, not 1.2.x.)
Unfortunately my macOS heap-profiling expertise is useless because Nim has its own allocator ... so all I can learn from the vmmap tool is that the process has allocated 1.3GB of address space using vm_allocate. With the default GC it's only 90MB.
In the standard library's memalloc.nim I found getOccupiedMem, getFreeMem, and getTotalMem; logging these after a run shows
Memory: 1331696848 used, 32473088 free, 1381838848 total
With the default GC it shows
Memory: 47907600 used, 44965888 free, 102473728 total
which is more what I'd expect ... and the total doesn't increase after multiple runs.
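For reference, the logging above amounts to just a few lines — `getOccupiedMem`, `getFreeMem`, and `getTotalMem` are defined in `lib/system/memalloc.nim` and are available without any import (a minimal sketch; the exact numbers will of course differ per run):

```nim
# Log the Nim allocator's view of the heap. These procs come from
# system (memalloc.nim), so no imports are needed.
proc logMem() =
  echo "Memory: ", getOccupiedMem(), " used, ",
       getFreeMem(), " free, ", getTotalMem(), " total"

logMem()
```

Calling this before and after each test-suite run makes the growth under --gc:orc easy to spot.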
Is there anything I can do to investigate this further, or should I just file a bug report?
Gosh dang it to heck! 🤬 I can't seem to keep track of which features are incompatible with which other features. So ARC/ORC works with threads but not with async. And non-ARC works with async but not with threads (in the way I want, i.e. passing objects between threads).
Is this async leak supposed to be fixed by the time 1.4 is released?
> Is this async leak supposed to be fixed by the time 1.4 is released?
Yes. You can help by reporting a bug.
What do you want to do with threads that you cannot achieve with the default GC?
Move objects between threads. So far my code is single-threaded, but for performance I plan to implement something like Actors, where objects can run on different threads and send messages to each other. (And I need the messages to be more than just strings.)
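The kind of message passing I mean could be sketched with a global `Channel[T]` (the names `Msg` and `worker` below are made up for illustration; `Channel.send` deep-copies the message, which is the behavior I'm relying on for structured messages rather than plain strings — compile with `--threads:on`):

```nim
# Hypothetical sketch: sending a structured (non-string) message
# to another thread over a Channel.
type Msg = object
  kind: string
  payload: seq[int]

var chan: Channel[Msg]

proc worker() {.thread.} =
  # recv blocks until a message arrives; the Msg was deep-copied on send
  let m = chan.recv()
  echo m.kind, ": ", m.payload.len, " items"

proc main() =
  chan.open()
  var t: Thread[void]
  createThread(t, worker)
  chan.send(Msg(kind: "data", payload: @[1, 2, 3]))
  joinThread(t)
  chan.close()

main()
```

An Actor layer would sit on top of something like this, routing messages between per-thread mailboxes.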
> You can help by reporting a bug.
I can do that, but this isn't trivial to reproduce, since it requires a matching client app written in C++, some JSON files, etc.