I’d be curious to hear people’s opinions / start a flame war over their experiences with Prologue vs Mummy.
I’m doing an IoT dashboard and started with Prologue. I’m not a big fan of async, though, and am already spinning off a couple of threads. However, there are a lot of async libraries out there. Will I run into problems?
I never used Prologue (I did use Jester for a few years, but it's basically abandoned at this point). I have used Mummy for the past few years.
Mummy is a lot simpler - with the included router, it's much closer to HttpBeast/httpx (which Prologue depends on). You'll likely need to mold it to do what you want and remember which HTTP headers do what. I definitely have a file that I copy between projects which has a bunch of utilities/helpers.
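For reference, a minimal Mummy app with the included router looks roughly like this (from memory of the README, so double-check the current docs for the exact API):

```nim
import mummy, mummy/routers

proc indexHandler(request: Request) =
  # Handlers run on Mummy's worker threads; you set the status and headers yourself.
  var headers: HttpHeaders
  headers["Content-Type"] = "text/plain"
  request.respond(200, headers, "Hello from Mummy!")

var router: Router
router.get("/", indexHandler)

let server = newServer(router)
echo "Serving on http://localhost:8080"
server.serve(Port(8080))
```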
Mummy has been used in production by @guzba for several years. I've used it for a couple personal projects of mine that have been running undisturbed for at least a year (100% uptime?).
The threading model is also much simpler to reason about than async/await IMO, especially the stack traces. @guzba has built a couple of libraries that are synchronous - I use curly specifically for HTTP requests. With enough threads you're not going to run into big performance problems, so long as the underlying machine is capable of handling it.
If I did need to use an async library, I'd probably look into calling it synchronously (waitFor Future).
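Something like this sketch is what I have in mind; `fetchReading` is a made-up async proc standing in for whatever asyncdispatch-based library call you'd actually need:

```nim
import std/asyncdispatch

# Hypothetical async call from some asyncdispatch-based library.
proc fetchReading(sensorId: string): Future[float] {.async.} =
  await sleepAsync(10)  # stand-in for real async I/O
  return 42.0

# From synchronous code (e.g. inside a Mummy handler) you can just block on it:
let value = waitFor fetchReading("sensor-1")
echo value
```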
Never had an issue w/ mummy so never tried prologue.
Side note: maybe give Neel and https://github.com/plotly/plotly.js a shot for your dashboard?
From when I benchmarked Prologue a few years ago, it choked under large amounts of traffic and suffered from all the Nim 1 & 2 async problems. In general, I can't recommend using something that relies on asyncdispatch in production. Mummy I only used briefly, but I had a good experience and it didn't suffer from performance or memory issues.
TL;DR: If it's based on asyncdispatch, do not use it in production; seek out something else, be it Mummy or GuildenStern.
Thanks guys, definitely encourages me to take the plunge and switch to Mummy! The websocket support looks particularly handy.
I had some initial issues with Prologue, but they were due to me blocking the async event loop. Outside of that, I didn't notice any memory leaks. Still, I've found async generally hard to profile and debug in any language. Also, in Nim every async function has to run through a big macro expansion, which really slows down compilation.
Side note for Nimony: not having to transform the whole function body to rewrite try/finally blocks and early returns (to cancel futures, set values on futures, etc.) would avoid so many of the edge cases in writing async libraries.
Nice, I had GPT5 switch from Prologue to Mummy for me and got a 4.4% speed boost. Not that it matters too much, as I'm only serving 1-2 dashboards per device. Still handy! ;)
The code didn't change as much as I thought: mostly it got rid of async, plus some tweaks to the JSON response format.
@elcrith What about deps that use asyncdispatch? Did you have any?
IMO prologue should migrate to chronos. It was a huge stability improvement in the nimlangserver.
No, I didn't have any asyncdispatch deps. Mummy has non-async websockets. I'm looking to set up a Postgres DB soon, and it'll be interesting to see whether that needs async. If so, I'll just use waitFor with asyncdispatch or chronos.
I thought Prologue supported chronos? I didn't check, though.
I switched to using mummy at Reddit.
At first I was using an async server (but not Prologue, to be fair). Debugging asynchronous code is a nightmare: the stack traces do not make sense.

Asynchronous code is mostly single-threaded. I needed to parse a large amount of JSON and it was just blocking everyone else. Async doesn't cover DNS resolution either, so it blocks on the OS for that. You cannot really tell which library calls or OS calls will block for a bit, so you end up pegging your single async core a lot.

Asynchronous code still requires locks, but support for them is poor. And PostgreSQL can realistically only have about 10 clients connected at once, and it does operations in a blocking way. So who cares if you can run 100,000 green async threads if they all block on the same 10 PostgreSQL connections?

Yes, you can solve or work around all of these issues, but why bother?
Switching to mummy was a great decision. Stack traces are much better. Now I can use a single server with 32 cores instead of 32 servers with 1 core each, and I no longer have to worry about async code, orchestration, and inter-process communication between them. Large JSON blobs no longer block everyone else. You can use normal OS locks for things. I can use normal libraries that are not async-aware. All threads can just share memory with locks. Modern compute can run tens of thousands of threads already, so why add the complexity of async?
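As a rough sketch of what I mean by plain threads plus normal locks (the counter and route here are made up for illustration, not anything from Reddit's code):

```nim
import std/locks
import mummy, mummy/routers

var
  counterLock: Lock
  requestCount: int   # plain shared global, guarded by counterLock

initLock(counterLock)

proc countHandler(request: Request) =
  # Mummy runs handlers on worker threads; an ordinary OS lock is all you need.
  var current: int
  withLock counterLock:
    inc requestCount
    current = requestCount
  var headers: HttpHeaders
  headers["Content-Type"] = "text/plain"
  request.respond(200, headers, "Requests so far: " & $current)

var router: Router
router.get("/count", countHandler)
newServer(router).serve(Port(8080))
```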
I think the idea of green threads became popular around the mid 2000s, when hardware was not as powerful as it is now. But if you check your assumptions, you will find that you can run lots of threads on modern hardware with the OS helping you rather than working against you.
Use mummy, it is battle tested.
Using SQL in other ORMs felt bad. That's why I wrote Debby: https://github.com/treeform/debby
I like doing computations in the database with SQL: averages, sums, joins, window functions, etc.
Normal ORMs make it hard. With Debby it's basically a way to serialize and deserialize objects with manual SQL queries.
It also helps you write simple queries that are just CRUD, but the real work is done in SQL.
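A rough sketch of what that looks like (the table and fields are made up, and I'm going from memory of the README, so check it for the exact proc names):

```nim
import debby/sqlite

type Reading = ref object
  id: int          # Debby expects an integer id field
  device: string
  celsius: float

let db = openDatabase("readings.db")
db.createTable(Reading)

db.insert(Reading(device: "sensor-1", celsius: 21.5))

# Simple CRUD-style lookups stay in Nim...
let hot = db.filter(Reading, it.celsius > 30.0)

# ...and the real work is a manual SQL query deserialized back into objects
# (this is how I remember the raw-query helper; double-check the README).
let recent = db.query(Reading,
  "select * from reading where device = ? order by id desc limit 10",
  "sensor-1")
```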
I'd like to add some thought in Prologue's defense.
Prologue is a higher-level framework than Mummy. It's a better fit if you want to get something working fast. It's well documented, its API is ergonomic, and I generally like it.
Mummy is more basic; you'll have to write a lot from scratch to make it work for you.
In my opinion, both are great products. But when I needed to write an API server for Cannon Chat, I first tried Mummy but quickly turned to Prologue because I didn't want to spend time just making it work; I wanted to focus on the logic and get it running quickly. I'm very much satisfied with the result: the server gives me no issues, it just works.
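For comparison, a hello-world route in Prologue is roughly this (close to the example in its docs, written from memory):

```nim
import prologue

proc hello(ctx: Context) {.async.} =
  resp "<h1>Hello, Prologue!</h1>"

let app = newApp()
app.get("/", hello)
app.run()
```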
One curious thing about select is that its result depends not only on the condition you pass but also on the container. If the container has Model fields that are not none, Norm will select the related rows in a single JOIN query, giving you a fully populated model object. However, if a Model field is none, it is just ignored.
Do you mean nil? As in if you don't allocate the child refs then Norm ignores them. That's pretty elegant.
No, it's about Option type: Norm relies on Option to denote optional values.
However, it may as well work with nil values; I'm AFK now and can't check. Even if it doesn't work that way now, adding the feature would be trivial. It's less explicit, since the field type is not Option, but it's a viable approach.
Currently, if you want to select just a slice of an object, use read-only models: https://norm.nim.town/models.html
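For concreteness, here's a simplified sketch of what I mean (the model names are made up; see the Norm docs for the exact signatures):

```nim
import std/options
import norm/[model, sqlite]

type
  Device = ref object of Model
    name: string
  Reading = ref object of Model
    celsius: float
    device: Option[Device]   # optional relation

func newDevice(name = ""): Device = Device(name: name)
func newReading(celsius = 0.0; device = none(Device)): Reading =
  Reading(celsius: celsius, device: device)

let db = open(":memory:", "", "", "")
db.createTables(newReading(device = some(newDevice())))

var dev = newDevice("sensor-1")
db.insert(dev)
var seed = newReading(21.5, some(dev))
db.insert(seed)

# Container with some(Device): the related row is selected via a JOIN,
# so row.device comes back fully populated.
var row = newReading(device = some(newDevice()))
db.select(row, "celsius > ?", 20.0)

# Container with device = none(Device): the relation is simply skipped.
var bare = newReading()
db.select(bare, "celsius > ?", 20.0)
```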
I've used Norm + Prologue in tandem for a while now (I think I swapped from my Python backend to Nim around... 2022?). At the time there was Jester, whose DSL syntax I despise because IMO it was absolutely not readable, and Prologue, which had very explicit syntax for wiring routes together.
Prologue has pain points for larger files: for files beyond 50MB you'll get timeouts, which is why I swapped to uploading via my reverse proxy and then forwarding the request to Prologue without the body, but with an identifier to find the file. It works decently enough for me. I run my server on a shared CPU with a single thread, so async works better for me than multithreading. My largest pain points are pretty much:
Beyond that, I heavily prefer its API for registering routes over any DSL I've seen so far.
As for Norm, as one of the larger contributors to it, I like it a fair bit and use it with SQLite. It gives me an API for CRUD automatically, and for the complex things I can hook in and write raw SQL as needed. I use this, for example, to use SQLite's full-text search feature (FTS5) with Norm, as that powers my webpage's search feature.
I set up my prototype DB with Norm. Pretty easy!
Nice docs, and declaring Nim types is nicer than writing SQL tables.
Unfortunately I've run into a few constraints that'll probably be blockers for me.
It looks like custom primary keys can't be set, and I can't modify the id primary key. Since I'm mostly interested in time-series data, I want a composite primary key of a device ID and the UTC timestamp. Even just using the UTC time as the key isn't doable, which makes sense for a normal web app.