When performance matters, a small cache can make a big difference.
Nichecache is a thread-safe generic cache with FIFO replacement policy: https://github.com/olliNiinivaara/Nichecache
It's a fine example of an algorithm that looks bad in theory (brute-force looping over all keys) but in practice is hard to beat for small cache sizes due to its cache-friendliness (CPUs use caches, too).
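To make the idea concrete, here is a minimal sketch (hypothetical, not Nichecache's actual code): a fixed-size ring buffer of keys and values with FIFO eviction, where lookup is a brute-force linear scan. For a handful of entries the whole structure fits in a few CPU cache lines, which is why the "naive" scan can win:

```nim
const Size = 8

type
  FifoCache[K, V] = object
    keys: array[Size, K]
    values: array[Size, V]
    len: int        # number of filled slots
    head: int       # next slot to overwrite when full

proc put[K, V](c: var FifoCache[K, V], key: K, val: V) =
  c.keys[c.head] = key
  c.values[c.head] = val
  c.head = (c.head + 1) mod Size   # oldest entry is overwritten next (FIFO)
  if c.len < Size: inc c.len

proc get[K, V](c: FifoCache[K, V], key: K): (bool, V) =
  for i in 0 ..< c.len:            # brute-force scan over all keys
    if c.keys[i] == key:
      return (true, c.values[i])
  # default result is (false, default(V)) on a miss

var c: FifoCache[string, int]
c.put("a", 1)
c.put("b", 2)
echo c.get("a")   # (true, 1)
```

No hashing, no linked lists, no pointer chasing: just two flat arrays scanned front to back.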
With 50 lines of code (not counting comments and tests), it's a pretty elegant data structure. Must have something to do with the programming language I am using!
Couldn't help but notice at https://github.com/olliNiinivaara/Nichecache/blob/915317b2349673b1ccf502d39cd3ac3ea4875eb3/src/nichecache.nim#L99
if(unlikely) position < 0:
This is a really interesting AST hack I have never seen before. I'd expect it to parse as unlikely(position) < 0, but it doesn't, even if you remove the space between (unlikely) and position.
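I think this is Nim's command-call syntax at work: the parenthesized expression (unlikely) sits in callee position, and a command call with a single argument absorbs the whole following expression, comparison included, so the line reads as unlikely(position < 0). A quick way to convince yourself (using a hypothetical check proc, not anything from the repo):

```nim
proc check(x: bool): bool = x

let position = -1
# If this parsed as check(position) < 0 it would not even compile,
# because check expects a bool, not an int. It compiles and the
# branch is taken, so the argument must be the full comparison.
if (check) position < 0:
  echo "parsed as check(position < 0)"
```

The same precedence rule is why echo 1 + 2 prints 3 rather than being a parse error.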
Ok, technically the reason is not the GC per se, but that thread-local values cannot be accessed from other threads. Also, when the cache is full, new items will overwrite old items (the idea of a FIFO ring buffer) and the old ones will then be garbage collected.
Here's a very simplified example that will cause SIGSEGV:
# compile with: nim c --threads:on example.nim
from os import sleep
type Cache = array[1, string]
proc put(cache: ptr Cache) {.thread.} =
  cache[0] = "threadlocalstring"  # string is allocated on the worker thread's local heap
var sharedcache: Cache
var thread: Thread[ptr Cache]
createThread(thread, put, addr sharedcache)
sleep(1000) # ensures put has finished before we read
echo sharedcache[0] # SIGSEGV: the string lived on the dead thread's heap
But the good news is that with --gc:arc this program works as expected.