Hey guys, looking for some advice. I have a long-running Nim server that reads tons of JSON and occasionally runs some Python scripts. The Nim server usually only uses ~100 MB, but sometimes it jumps to 15 GB because of all the JSON I'm reading. I'm usually done with the JSON quickly and used memory goes back down to 100 MB, but occupied memory stays at 15 GB. That occupied memory stays with the process and never gets returned to the OS, so when I run the Python scripts I get out-of-memory errors.
The simple solution is to just restart my server whenever memory > 15 GB, but that feels kind of unclean. Another option is to hack around Nim's GC to give free memory back to the OS, so the Python scripts could use it.
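For the restart route, here's a minimal sketch of the check, assuming the default refc GC (the proc name and limit are just for illustration):

```nim
const MemLimit = 15 * 1024 * 1024 * 1024  # 15 GB; tune to your box

proc maybeRestart() =
  # getTotalMem(): what the Nim allocator currently holds from the OS.
  # getOccupiedMem(): what live objects actually use; the gap between
  # the two is exactly the freed-but-not-returned memory in question.
  if getTotalMem() > MemLimit:
    stderr.writeLine "restarting: total=" & $getTotalMem() &
                     " occupied=" & $getOccupiedMem()
    quit(QuitFailure)  # let a supervisor (systemd, runit, ...) restart us
```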
Reading 15 GB of JSON is a rare event, but when it does happen it takes down my system.
What would you do?
Enabling swap on the servers is something I could do, but that usually gives unreliable performance and wears out SSDs. The OS would just be writing my 15 GB of freed Nim memory to the SSD for no reason.
But maybe I'm missing something?
Use a different process setup: a "supervisor" process that launches the Nim and Python worker processes, which then exit more frequently in order to return their memory.
This seems like the best workaround if your process architecture allows it. Can you have your server spawn a process that does the JSON reading? If so, the process will eventually exit and free the memory.
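A minimal sketch of that idea, assuming a hypothetical helper binary `json_worker` that does the heavy parsing and prints only the small result the server actually needs (both the binary name and the protocol are made up):

```nim
import std/[osproc, streams]

proc parseHugeJson(path: string): string =
  # Spawn a short-lived worker; the 15 GB lives in *its* address space,
  # so when it exits the OS reclaims everything and the long-running
  # server stays at ~100 MB.
  let p = startProcess("json_worker", args = [path], options = {poUsePath})
  defer: p.close()
  # Read stdout before waiting, so the worker can't block on a full pipe.
  result = p.outputStream.readAll()
  if p.waitForExit() != 0:
    raise newException(OSError, "json_worker failed on " & path)
```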
Thanks everyone!
@Araq I will try out the suggestions:
I don't think I have the skills for the part about making the GC return memory.
@kobi GC_fullCollect() only frees memory into the GC's internal free lists; it does not give it back to the OS.
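A quick sketch that makes this visible with the GC introspection procs (default refc GC assumed; exact numbers will vary):

```nim
var big = newSeq[string](5_000_000)
for i in 0 ..< big.len:
  big[i] = $i                      # force lots of small heap allocations
echo "occupied: ", getOccupiedMem(), "  total: ", getTotalMem()

big = @[]                          # drop all references
GC_fullCollect()                   # frees into the GC's internal free lists
echo "occupied: ", getOccupiedMem(), "  total: ", getTotalMem()
# occupied drops back down, but total (held from the OS) stays high
```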
@shashlick Yeah, I could rewrite the whole thing in C or use something else, but I would rather stick with Nim.