Hello,
I wouldn't normally mention it here, but it is a product I started years ago in Rust, then migrated to Nim, where I was finally able to finish the backend. No AI was involved in the backend/frontend logic; it was used only for the landing page and CSS styles.
Why I think it could be interesting: it runs fully on Nim.
I would appreciate it if you clicked some buttons there and gave some feedback on what is good and what is bad in the app.
Thank you,
Pretty simple - you have to subscribe :)
Jokes aside, it is quite unique in that it works on L2 market data and has much more detailed modeling than its competitors.
A few things from my blog that I can repost here. The first one:
Rust, refactoring
Everything seemed to be working well until I connected the pyo3 crate. Of course, nothing can be simple in Rust: pyo3 macros cannot wrap a structure that has lifetimes, so a global refactoring began to purge a fairly central structure of its lifetimes, and everything ended up covered with a thick layer of Rc/Arc/RefCell, so thick that you want to start writing in Swift.
Something like this: self.borrow().control.borrow().stats.borrow_mut().counter += 1;
What nice code :( At the same time, this essentially kills the entire lifetime concept, since each borrow() is your own personal, manual control over everything else.
#rust #old
Rust - again
I have a pet project that has been going on for over a year and is gradually approaching its final chord.
At some point, it was decided to change the way the client's data was stored to simple files, so the data was downloaded from the database for this. Everything worked almost perfectly: the data was pulled chunk by chunk, and a progress bar showed how much was left. Except for one thing: the data is not compressed, and there is a lot of it. If you stream it, you lose Content-Length, and the user will not know how much is left. If you compress everything up front, the user waits a very long time for that Content-Length.
It was decided to simply collect 1000 records, compress, and send them. Everything worked again, but something was strange: sometimes chunks arrived broken. After looking at the packets, it became clear that the packets themselves were fine, but hyper processed them like this: if a chunk didn't fit into the buffer (8 KB), it output the chunk with whatever fit, and the rest as if it were another chunk. I cannot judge how correct this behavior is; it may well be correct. I wanted to move to WebSockets, and then actix-web, which had seemed fine as an HTTP server, revealed that the structure for ws should look like this: Client<SinkWrite<Message, SplitSink<Framed<T, Codec>, Message>>>. Simple and convenient, all the guts are visible; after all, you never know when you will need to extract the codec from client.0.0.0.
Next, you need to describe the actor, then a Handler, then another StreamHandler, into which you pass the mut context of this client, after which you can call ctx.0.write methods or wrappers over it by hand. But this will not work with future streams, because when creating this client you need to call a mutable add_stream on it, linking the Stream and the context; apparently otherwise it will not know to invoke the handler. And even if you perform all these steps, it only covers processing the incoming side of the stream; how to write to it from another stream without your own impl Stream on top remains a mystery.
In general, this left me unwilling to use ws. Knowledgeable people said that you just need to wrap a length_delimited codec around the http chunks and you get normal data frames, so I'll try that. Codecs work via AsyncRead, while hyper/actix output a Stream<Item=Vec<u8>>. It is generally clear that the result is the same in this case, and only in futures 0.3 did an into_async_read() method appear that converts one abstraction into the other, so you can just write into the codec. Hooray! But no, not hooray: tokio 0.2 for some reason has its own AsyncRead, which is not compatible with futures 0.3, and the other one will not work for me. There is a kind of glue, compat, but experienced people said it is still more reliable to take the bytes apart by hand than to wade through the page-long error that compat() will show. Even without compat(), the errors are already half a page long and not much easier to read.
Back to what I did not expect at all: that I could get stuck for several weeks on a simple chunked file download, and that Rust, once again, as they say, hits a new low in productivity for writing anything beyond the examples folder (which, by the way, often does not even compile). I have many more such stories, but this is the first time I decided to write one down.
#rust #old