ThreadButler is a package for multithreading heavily inspired by webservers. You define a thread with the kinds of messages it can receive and how it handles those messages (like a webserver defines routes and controllers for those routes). Then I generate a "ChannelHub" (a Table of Channels) where each thread "owns" 1 Channel. It reads from that Channel in its event-loop and works through the messages according to its handlers.
Here is an example of how you define and use a threadServer:
```nim
import threadButler
import std/[sugar, logging, options, strformat, os]

addHandler(newConsoleLogger(fmtStr="[CLIENT $levelname] "))

const CLIENT_THREAD = "client"
const SERVER_THREAD = "server"

type Response = distinct string
type Request = distinct string

threadServer(CLIENT_THREAD):
  messageTypes:
    Response

  handlers:
    proc handleResponseOnClient(msg: Response, hub: ChannelHub) =
      debug "On Client: ", msg.string

threadServer(SERVER_THREAD):
  properties:
    startUp = @[
      initEvent(() => addHandler(newConsoleLogger(fmtStr="[SERVER $levelname] "))),
      initEvent(() => debug "Server starting up!")
    ]
    shutDown = @[initEvent(() => debug "Server shutting down!")]

  messageTypes:
    Request

  handlers:
    proc handleRequestOnServer(msg: Request, hub: ChannelHub) =
      debug "On Server: ", msg.string
      discard hub.sendMessage(Response("Handled: " & msg.string))

prepareServers()

proc runClientLoop(hub: ChannelHub) =
  while IS_RUNNING:
    echo "\nType in a message to send to the Backend!"
    # readLine blocks, so this loop only continues (and responses are
    # only read) once the user has entered something.
    let terminalInput = readLine(stdin)
    if terminalInput == "kill":
      hub.sendKillMessage(ServerMessage)
      break
    elif terminalInput.len() > 0:
      let msg = terminalInput.Request
      discard hub.sendMessage(msg)

    ## Guarantees that we'll have the response from the server before we
    ## listen for user input again. This is solely for nicer logging;
    ## do not do this in actual code.
    sleep(100)

    let response: Option[ClientMessage] = hub.readMsg(ClientMessage)
    if response.isSome():
      routeMessage(response.get(), hub)

proc main() =
  let hub = new(ChannelHub)
  hub.withServer(SERVER_THREAD):
    runClientLoop(hub)
  destroy(hub)

main()
```
This sets up a threadServer (SERVER_THREAD) that runs in its own thread, while also using threadButler's code-generation capabilities for the main thread (CLIENT_THREAD). The example reads from the terminal in the main thread and sends the text it receives to the SERVER_THREAD, which handles it accordingly.
You can of course also use threadpools for this if you need a one-off task to happen in another thread so it doesn't block yours. Every threadServer in threadButler has its own threadpool - which under the hood is just status/nim-taskpools. You can spawn tasks, including those that need to send the result of their computation back via a message through the ChannelHub.
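The pattern of a pooled task reporting its result back as a message, rather than via a blocking Future, is language-agnostic. A minimal sketch in Python (the names `spawn_task` and the tuple-message shape are illustrative, not ThreadButler's API; a plain thread stands in for a pool worker, a `Queue` for one channel of the ChannelHub):

```python
import threading
from queue import Queue

def spawn_task(hub: Queue, data: str) -> None:
    # A one-off task on its own thread (stand-in for a pool worker):
    # instead of returning a value, it posts a message to the hub queue.
    def work():
        result = data.upper()            # some computation
        hub.put(("task_done", result))   # report back via message
    threading.Thread(target=work).start()

hub = Queue()              # stand-in for one channel of the ChannelHub
spawn_task(hub, "hello")
kind, payload = hub.get()  # the main thread receives the result as a message
print(kind, payload)       # → task_done HELLO
```

The main thread never blocks on the task itself; it only reads its inbox, which is exactly what makes the pattern composable with an event loop.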
Currently this only supports the status taskpool, because I couldn't get a similar setup to work in either Malebolgia or Weave. Both sooner or later required me to call a proc that would've blocked my main thread, which I didn't want: I wanted to receive messages when tasks are done, not wait for a result. If I had gotten that to work, I'd offer options to use them instead of taskpools (not that I favor any one of them, I just want to provide options).
You can use all of this even if you don't want to set up a dedicated threadServer and just want a non-blocking way to use taskpools with your main thread, see [this example].
In general, the docs and examples should provide a decent understanding of it all.
ThreadButler is a package that exists almost entirely because of a forum thread by tissatussa and the corresponding GitHub issue.
Their question was in essence "What do I do when I need to do some work in another thread that is not just a single task but requires it to persist, kind of like a server."
I had no answer for that and went looking for multithreading libraries that cover it. At work I was also confronted with the fact that it appears to be rather common in mobile app development to have a dedicated "backend thread" that you communicate with through message passing.
Having found no package with utilities for this use-case, I decided roughly a month ago to write my own, so that I'd have an answer to this question in the future.
Before I push this out of alpha I want to wrap up writing a proper test-suite for what I have now and possibly iron out more kinks in the API and/or docs. For example, I am not entirely certain whether the current rigmarole with forward declarations for tasks that require a ChannelHub instance is a good way to go about things (see the "no-server" example).
To that end, any feedback regarding the API, and what could be done to make it more legible and easy to use, is welcome.
> Currently this only supports the status taskpool, because I couldn't get a similar setup to work in either Malebolgia or Weave. Both sooner or later required me to call a proc that would've blocked my main thread, which I didn't want: I wanted to receive messages when tasks are done, not wait for a result.
In general, Weave was not written for IO tasks that may block indefinitely on network loss or on waiting for stdin. It's for compute tasks, see also: https://nim-lang.org/blog/2021/02/26/multithreading-flavors.html
Your issue of receiving vs waiting is often framed as readiness-based vs polling-based frameworks, which is one of the key differences between Linux selectors/epoll and io_uring (it is also called push vs pull). There is a lot of literature on the design tradeoffs if you search for those terms.
I have a protocol description/implementation that can turn polling isReady (or pulling) into pushing:
The protocol only needs a threadsafe MPSC queue/channel and a threadsafe allocator.
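The core idea can be sketched generically: instead of the consumer polling each producer's `isReady` flag, every producer pushes a readiness notification into one shared queue that the consumer blocks on. A Python sketch under that assumption (Python's `Queue` is a threadsafe stand-in for the lock-free MPSC channel; the real protocol additionally needs a threadsafe allocator):

```python
import threading
from queue import Queue   # threadsafe; stands in for a lock-free MPSC channel

ready = Queue()

def producer(pid: int) -> None:
    result = pid * pid             # do some work...
    ready.put((pid, result))       # ...then push a notification instead of
                                   # setting an isReady flag somewhere

threads = [threading.Thread(target=producer, args=(pid,)) for pid in range(3)]
for t in threads:
    t.start()

collected = {}
for _ in range(3):
    pid, result = ready.get()      # consumer sleeps until woken; no polling loop
    collected[pid] = result
for t in threads:
    t.join()
print(sorted(collected.items()))   # → [(0, 0), (1, 1), (2, 4)]
```

The consumer's cost is independent of the number of producers, which is exactly what distinguishes push from repeatedly pulling on every source.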
Now looking at a higher level,
> Their question was in essence "What do I do when I need to do some work in another thread that is not just a single task but requires it to persist, kind of like a server."
This is not the job of a threadpool. Threadpools are stateless; the job is done once the task ends, which is why they aren't suitable for this use-case.
What you're building is microservices. You can have a look at how I sketched mine for a full app here:
The important procs are eventLoop, eventLoopWorker, eventLoopSupervisor.
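The difference from a fire-and-forget task is that such a service thread keeps state across messages and has an explicit init/loop/teardown lifecycle. A minimal Python sketch of that shape (names and message format are hypothetical, not the linked implementation):

```python
import threading
from queue import Queue

def event_loop(inbox: Queue, outbox: Queue) -> None:
    count = 0                      # init: persistent state, lives as long as the thread
    while True:
        msg = inbox.get()          # block until the next message arrives
        if msg == "shutdown":      # teardown on an explicit kill message
            outbox.put(f"handled {count} messages")
            break
        count += 1                 # state survives between messages
        outbox.put(f"echo: {msg}")

inbox, outbox = Queue(), Queue()
t = threading.Thread(target=event_loop, args=(inbox, outbox))
t.start()
for m in ("a", "b", "shutdown"):
    inbox.put(m)
t.join()
replies = [outbox.get() for _ in range(3)]
print(replies)  # → ['echo: a', 'echo: b', 'handled 2 messages']
```

A supervisor in this scheme would simply be another such loop that restarts workers when they report failure.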
Lastly, for microservice communication, I had something simple like this in mind: https://github.com/mratsim/blocksmith/blob/master/cross_service_calls.nim
I don't think there is a need to have an actual threadpool implementation behind it; just use long-running threads, and if the end-user needs extra oomph, they will add a threadpool or a GPU pool.
To nail down the problem you solve, you need to create user stories. For example, say you want to build an online service that modifies images or videos; you'll have the following layers:
So I think you should focus on creating init, eventLoop, and teardown for threads managed by ThreadButler, and create your ways to send/receive messages and tasks between the microservices. However, this should be done on top of regular createThread; if people need a specialized threadpool, they can pick one of Malebolgia, nim-taskpools, or Weave.
Right now, between this feedback and leorize's, I have two directions I can take this project. Given that either is a fundamental change to what it currently is, I have archived the repository for now.
The question becomes whether I stick with either
Or B) I strip out the long-running threads and just make this an abstraction over status/nim-taskpools that can be used for all kinds of tasks including IO, given that they can send a message back when they're done, which is read from by the main-thread controlling the taskpool. For IO it would be mostly sync-IO that blocks that particular worker-thread until it is done.
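Direction B amounts to an executor whose workers report completion via messages instead of the caller blocking on a result. In Python terms, under the assumption of a standard thread pool (a sketch of the shape, not a proposed API):

```python
from concurrent.futures import ThreadPoolExecutor
from queue import Queue

results = Queue()                      # channel back to the controlling thread
pool = ThreadPoolExecutor(max_workers=2)

def submit(task, *args):
    # Rather than keeping the Future and blocking on .result(),
    # a done-callback forwards the outcome as a message.
    fut = pool.submit(task, *args)
    fut.add_done_callback(lambda f: results.put(f.result()))

submit(lambda x: x + 1, 41)
submit(str.upper, "io done")           # sync IO would block only this worker
pool.shutdown(wait=True)               # a real main loop would keep reading
                                       # `results` instead of shutting down

got = sorted(str(results.get()) for _ in range(2))
print(got)  # → ['42', 'IO DONE']
```

The controlling thread only ever reads its inbox, so IO tasks that block a worker never stall it.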
I'm struggling to make the decision here, and I feel that takes more than a simple reading of a single blogpost. It did help put things into context, though.
The amount of additional understanding required makes it seem like I'd first have to do a couple of months' worth of studying alongside work before being capable of producing anything worth anyone's time in this area.
> Or B) I strip out the long-running threads and just make this an abstraction over status/nim-taskpools that can be used for all kinds of tasks including IO, given that they can send a message back when they're done, which is read from by the main-thread controlling the taskpool. For IO it would be mostly sync-IO that blocks that particular worker-thread until it is done. That would be stripping it down to a pure "Executor-Design" according to the blogpost.
I would say this is more similar to an actor model. For example, in Rust land, actix builds on top of the Tokio threadpool.
The API is pretty simple: https://actix.rs/docs/actix/actor/
Otherwise, Pony-lang is a good inspiration, and the actor part is simple C: https://github.com/ponylang/ponyc/blob/main/src/libponyrt/actor/actor.c
Re: studying
I wish you luck. Before creating Weave I wrote two other threadpools (visible in Weave's commit history), but I went in from the start with the idea that I was just exploring the field and would likely throw away everything.
You can see in Weave v0.1 that I looked deeply into two other schedulers and reimplemented parts of them until I knew enough to tell whether that direction was promising or not: https://github.com/mratsim/weave/tree/v0.1.0/experiments