Hello. I've read that Nim is suitable for embedded systems
If you think the allocation algorithms in the Nim standard library don't work for you, then you can always use -d:useMalloc. Then you can use pretty much whatever malloc/free implementation works best for your system.
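For example, a minimal config.nims sketch (the define name is the standard Nim one; the --mm choice is just one common option for embedded, not something you have to use):

```nim
# config.nims -- NimScript build configuration
# Route all of Nim's allocations through the C library's malloc/free
# instead of Nim's built-in allocator.
switch("define", "useMalloc")
# ARC is just one common memory-management choice for embedded targets.
switch("mm", "arc")
```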
For embedded devices I've found their mallocs tend to be terrible.
At least the ESP32 ones are, and I suspect most FreeRTOS ones are too. Many are implemented as simple linked lists (which is fine with 4 KB of RAM). Zephyr's was pretty stable, but I never got to test it on a multicore setup.
TLSF has pretty good heap fragmentation properties as long as you have enough free RAM. I had Zephyr devices running stress tests that handled 1M+ RPC calls without any hiccups, using both -d:useMalloc and -d:nimAllocPagesViaMalloc IIRC.
Also don't forget to try nimAllocPagesViaMalloc.
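It goes in the same place (a define in config.nims); as I understand it, it applies when you keep Nim's own allocator and has it request its backing pages through malloc instead of mmap, which matters on targets without virtual memory:

```nim
# config.nims -- illustrative
# Have Nim's allocator obtain its pages via malloc rather than mmap,
# for targets without an OS page allocator / virtual memory.
switch("define", "nimAllocPagesViaMalloc")
```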
It looks like I'll have to go through the papers at http://www.gii.upv.es/tlsf/main/docs to find out whether the worst-case fragmentation is bounded and how to calculate it.
@RodSteward, @elcritch We obviously have very different embedded systems in mind. I'm interested mainly in systems which must not fail, where there is no such thing as "pretty good" and you always calculate with the worst case and never rely on empirical data. In these systems a traditional heap is not used at all, let alone malloc. Think controlling industrial machinery, chemical processes, flying devices and other vehicles, or even simple things like a home water boiler or a battery management system for home solar power storage (as opposed to less critical things like smartwatches, infotainment systems, etc.).
It looks like I'll have to go through the papers at http://www.gii.upv.es/tlsf/main/docs to find out whether the worst-case fragmentation is bounded and how to calculate it.
That'd be great to know. I started reading the papers but didn't have the time to dig into how to get the proper proof.
I'd really like to have a number such that if you stay below X% of the heap it'll never fragment. Please post if you figure it out. Even then it'd require other assumptions, like no allocations larger than Y, etc.
Think controlling industrial machinery, chemical processes, flying devices and other vehicles, or even simple things like a home water boiler or a battery management system for home solar power storage (as opposed to less critical things like smartwatches, infotainment systems, etc.).
Fair, I figured you were asking about low-criticality embedded stuff.
For those cases you'd want to write code with purely static allocations. Even arena allocators would run into issues. You must also be careful not to use recursion, to avoid blowing the stack.
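A minimal sketch of what I mean by purely static allocation (the names and sizes are invented for illustration):

```nim
# Everything is declared up front; nothing below touches the heap.
const MaxSamples = 64

type
  SampleBuf = object
    data: array[MaxSamples, float32]  # fixed-size backing storage
    len: int

var sensorBuf: SampleBuf   # lives in static memory, not on the heap

proc push(buf: var SampleBuf; v: float32): bool =
  ## Refuses new samples when full instead of reallocating.
  if buf.len >= MaxSamples:
    return false
  buf.data[buf.len] = v
  inc buf.len
  true
```

Worst-case memory use is known at compile time; the trade-off is you have to decide what to do when a buffer fills up.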
My general approach for those use cases would be to use separate tasks (ideally on separate cores): one task handles networking or comms and owns the heap, while the actual control task is a simple loop that never allocates and uses shared variables for inputs from the comms thread.
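Roughly along these lines -- a sketch only, with invented names, using plain std threads where a real system would pin RTOS tasks to cores:

```nim
import std/[atomics, os]

var targetMilliC: Atomic[int32]   # shared setpoint, e.g. millidegrees C

proc commsTask() {.thread.} =
  ## The only task allowed to allocate (parsing packets, strings, RPC).
  while true:
    let parsed = 42_000'i32                 # pretend this came from an RPC
    targetMilliC.store(parsed, moRelaxed)   # publish the new setpoint
    sleep(100)

proc controlTask() {.thread.} =
  ## Tight loop, never allocates: read the shared input, drive the output.
  while true:
    let target = targetMilliC.load(moRelaxed)
    # ... compare `target` against the sensor reading, drive the actuator ...
    sleep(10)

var threads: array[2, Thread[void]]
createThread(threads[0], commsTask)
createThread(threads[1], controlTask)
joinThreads(threads)
```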
I actually suggested adding an alloc effect so you could do {.forbids: alloc.} for those sorts of cases. I've wanted to follow up on that for a while, but haven't gotten to it. I think it'd be a relatively easy PR.
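In the meantime a weaker version can be hand-rolled with a user-defined effect tag and the existing forbids pragma, though it only catches procs you've annotated yourself (AllocEffect here is my own name, not anything built in):

```nim
type AllocEffect = object of RootEffect   # hand-rolled tag, not a stdlib effect

proc buildMessage(): string {.tags: [AllocEffect].} =
  ## Marked as allocating because it builds a heap-backed string.
  "status: " & $42

proc controlStep() {.forbids: [AllocEffect].} =
  ## Calls to anything tagged AllocEffect are rejected at compile time here.
  # echo buildMessage()   # uncommenting this is a compile error
  discard
```

Presumably the PR would mostly be about the compiler tagging its own allocation routines, so you wouldn't have to annotate everything by hand.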
We obviously have very different embedded systems in mind. I'm interested mainly in systems which must not fail, where there is no such thing as "pretty good" and you always calculate with the worst case and never rely on empirical data. In these systems a traditional heap is not used at all, let alone malloc.
From what I've seen of those systems, that may be the theory, but in practice they're often written in terrible, buggy C code that flakes out all the time. They may require lots of certifications, but that often doesn't actually produce very good (or reliable) code. It's largely theater for insurance purposes, IMHO.
The report about the Toyota firmware was a pretty good summary, I think. They said they'd follow MISRA standards and do all the "proper" stuff, but in reality it was a crapfest. I'd bet $50 they did use malloc in critical control areas -- a bet I'd hope to lose.
I've heard too many stories from friends who work in avionics about workarounds for buggy firmware in core systems. A previous coworker said the avionics company he worked for was moving to FPGAs and Verilog to avoid the code certifications -- but Verilog is often even harder to prevent race conditions in! A bit terrifying if you ask me.
Overall I don't think what the industry is doing is working, now that everyone wants controllers to be smarter, do more, etc. That said, I agree with you about wanting to build those sorts of systems. There are a few areas where Nim could be improved on that front; I think an alloc effect and better control of panics would be the main ones.