I’ve been working on an IoT project on Linux. The app needed some threads to do processing work while a parent thread did control logic.
Initially I used standard threads with a lock around some shared objects. That became unwieldy pretty quickly as the project grew. Async didn’t fit because some of the workload is computationally heavy.
I pulled in my Sigils library (https://github.com/elcritch/sigils) to work on a new component.
The results have been awesome! I recently switched most of the logic to use Sigils with an “event processing” design.
For example, after a reading is taken it emits a readingReady event. That signal is connected to a data-processing slot on another thread, to a second thread that handles control, and to a third thread that pushes updates over web sockets for a web app.
The one-signal, multiple-slots pattern really shines when combined with threading. Sigils has been thoroughly tested with thread sanitizer and was designed to support threading. It plays nicely with async as well.
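Roughly, that fan-out looks like this in Sigils' Qt-style pragmas (the types and slot names here are illustrative sketches, not my actual code, so check the Sigils README for the exact API):

```nim
import sigils

type
  SensorReader = ref object of Agent

# A signal is only a declaration: the {.signal.} macro
# generates the dispatch machinery, there is no body to write.
proc readingReady(tp: SensorReader, value: float) {.signal.}

type
  DataProcessor = ref object of Agent
  Controller = ref object of Agent
  WebUpdater = ref object of Agent

proc processReading(self: DataProcessor, value: float) {.slot.} =
  echo "processing: ", value

proc controlStep(self: Controller, value: float) {.slot.} =
  echo "control logic: ", value

proc pushToSockets(self: WebUpdater, value: float) {.slot.} =
  echo "websocket update: ", value

var
  reader = SensorReader()
  dataProc = DataProcessor()
  ctrl = Controller()
  web = WebUpdater()

# One signal fanned out to three slots; with threaded agents
# each slot runs on its owning agent's thread.
connect(reader, readingReady, dataProc, processReading)
connect(reader, readingReady, ctrl, controlStep)
connect(reader, readingReady, web, pushToSockets)

emit reader.readingReady(42.0)
```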
I hope to post a blog post with example code demonstrating it.
I hope to post a blog post with example code demonstrating it.
Make an AI do it. ;-)
Ugh I just tried and it's meh. At least GPT5 can write succinctly, but it lacks all soul or story.
Oddly, come to think of it, LLMs would make terrible reporters.
... but it lacks all soul or story.
So add to your prompt the post should have a soul.
Alright alright, well here's a half-AI generated post: https://blog.elcritch.net/signals-and-slots-for-iot-calm-caffeinated-concurrency-with-nim-and-sigils
Note I've rewritten much of it, but haven't edited it, so caveat emptor. ;-)
Good feedback, thanks.
I could turn off syntax highlighting, though I prefer bad colors to none at all. I’ll have to think about that.
I don't understand anything: How come signals have no implementation?
Why would signals have an implementation?! That makes no sense ;p
I didn’t write it up as an intro to signals and slots (or event sourcing, CQRS, etc.), but as a look at why they work well for data processing and IoT projects.
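To the question above, though: in the Qt convention Sigils follows, a signal is only a typed declaration. The `{.signal.}` macro generates the dispatch, and `emit` invokes whatever slots are connected. A minimal sketch (names illustrative, verify against the Sigils README):

```nim
import sigils

type
  Counter = ref object of Agent
    value: int

# Declaration only, no body: the pragma generates the code that
# forwards emitted arguments to every connected slot.
proc valueChanged(tp: Counter, val: int) {.signal.}

# Slots are ordinary procs with bodies.
proc setValue(self: Counter, val: int) {.slot.} =
  self.value = val

var a = Counter()
var b = Counter()
connect(a, valueChanged, b, setValue)
emit a.valueChanged(137)  # runs b.setValue(137) via the connection
```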
Event driven systems are pretty useful for IoT / embedded systems. I’ve seen an IoT / sensor company where devs tried to build a “micro-services” architecture on an RPi-like board using C++ or Python because they wanted an event based system, but they struggled a lot with it.
Many use ROS, or Elixir/Erlang, etc. That’s probably useful background to add, as event driven systems are important for embedded systems, which are really often miniature distributed systems.
Hmmm, I’ll have to expand it out and add at least a little discussion of the basics, as most devs probably aren’t as familiar with it unless they’ve used Qt.
What if I want lightweight millions of threads? I mean, I don't, but you bring it up when mentioning Erlang/Elixir...
Most Elixir/Erlang projects don’t have millions of threads either. Though you can run as many Agents on a thread as you want, you do lose pre-emption.
The important piece isn’t lightweight threads, but that you can create threaded objects that respond to events. That’s really what matters about Elixir/Erlang for IoT.
In the simple example, I don't think you (or was it The Model :D) meant to make setValue a slot which emits the signal again...
The Model took that from the Sigils tests, which took it from Qt’s docs! https://doc.qt.io/qt-6/signalsandslots.html
I thought about changing it though, because I didn’t want to go down the path of preventing event cycles.
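For reference, the Qt-docs version of that pattern only re-emits when the value actually changes, and that guard is what stops an infinite emit loop if two counters are cross-connected. A sketch of the idea (not the exact code from the post):

```nim
import sigils

type
  Counter = ref object of Agent
    value: int

proc valueChanged(tp: Counter, val: int) {.signal.}

proc setValue(self: Counter, val: int) {.slot.} =
  # The equality guard breaks the cycle: a re-delivered value
  # equal to the current one is never emitted again.
  if self.value != val:
    self.value = val
    emit self.valueChanged(val)
```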
Distributed is a pain. Why create the pain artificially if things aren't really distributed (are in the same process)?
Because when dealing with hardware and sensors you are dealing with a distributed system.
Note that using Sigils with threads doesn’t make it a distributed system, but an event based actor-like system. That design has a track record of being more resilient and easier to use with distributed systems IMHO.
You are dealing with a distributed system, yes. But how does that justify organizing your code in loosely coupled pieces that exchange messages asynchronously, i.e. like a distributed system? Sounds like some fractal fetish.
I can clearly see how this approach is valuable in modelling a distributed system, but why would it be preferable when implementing the constituents?
You are dealing with a distributed system, yes. But how does that justify organizing your code in loosely coupled pieces that exchange messages asynchronously, i.e. like a distributed system? Sounds like some fractal fetish.
Short answer is that they model the domain at an appropriate level. No fractal fetishes involved, but perhaps a state-machine one!
It's not just me, as actor-style systems have long been popular in embedded. @dwhall256 recently linked to QP Systems, which makes Active Object (Actor) systems. They've been around for 20+ years and seem popular with NASA, Siemens, Bosch, etc. for real-time systems.
I can clearly see how this approach is valuable in modelling a distributed system, but why would it be preferable when implementing the constituents?
For simple IoT projects, probably not.
Embedded (and some IoT) systems work with events. There are lots of failure modes, say a sensor not connecting or going into an error state. Working with actor-style independent sub-systems lets you isolate issues to a given actor, which makes the system easier to understand and to test.
Many others point out that with actor-style designs you get a lot of benefits and useful properties, despite the complexity of setting up the messages and learning to program with them:
All of those greatly benefit latency, BTW. Now my UIs get updated almost instantly while my other code finishes heavier processing.
In my project, I started with a shared object and a lock, but it became more difficult to understand when it would block. It increased latency on the UI side since other threads had to poll the state, and it was hard to balance how stale the data copied from the shared object was against how often it changed.
Worse, though, was trying to handle all of the various failure states and modes in a single large logic tree. Trying to work around the constant comms failures (USB, serial, network), sensor hardware resetting or entering bad states, and API calls failing got really frustrating, because all the logic was tied together in one big intertwined loop. Any small change affected every other piece.
Now, using the loosely coupled system, I have 4-5 core actors (Sigils Agents) on their own threads: sensor state management, db and data sync, data processing, UI updates, etc. Each one is easy to test on its own, and it's now easy to work around finicky hardware.
Now when I run into a corner case with, say, data syncing due to networking, time drifts, authentication, etc., I only need to handle the few signals that actor receives and sends out. The actor will just re-try the last event and then pick up with the rest of the events in its queue. Note threaded Sigils agents each have their own queue / channel.
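For context, the per-agent queue comes from moving an agent to its own Sigils thread. Roughly like this, though the API names here are from my recollection of the Sigils README (`newSigilThread`, `moveToThread`), so verify against the repo:

```nim
import sigils
import sigils/threads

type
  Sensor = ref object of Agent
  DataSync = ref object of Agent

proc dataReady(tp: Sensor, payload: string) {.signal.}

proc syncData(self: DataSync, payload: string) {.slot.} =
  # Runs on the DataSync agent's own thread; other events queue up
  # in its channel while this slot works (or retries).
  echo "syncing: ", payload

var sensor = Sensor()
var syncer = DataSync()

let thread = newSigilThread()
thread.start()
let syncerProxy = syncer.moveToThread(thread)

# Cross-thread connect: emits from sensor get queued onto the
# syncer's thread rather than handled inline on the emitter.
connect(sensor, dataReady, syncerProxy, syncData)

emit sensor.dataReady("batch-1")
```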
Changing to Sigils took me from a big ball of messy interlocking code to a system which is now running in a remote field and recovering from most errors just fine.
It's also fun to add features for my team now.