A companion to protobufcore in the family of RPC primitives, this one supports the compact binary representation of Thrift.
https://github.com/IcedQuinn/thriftcore (gitea link somewhen)
Thrift is a little bonkers. There is a binary version, a compact binary version, and a Twitter variant, which came about in part because Twitter wanted to inject more tracking data and Apache Solr complained about how inefficient the original encoding was. Compact borrows more from Protobuf in using the same zigzag encoding and variable-length ints on the wire.
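For the curious, here's a rough sketch (in Python, just for illustration) of the two primitives the compact protocol borrows from Protobuf: the zigzag mapping, which keeps small negative numbers small on the wire, and LEB128-style varints. The actual compact codec wraps these in field headers and type tags, which this skips.

```python
def zigzag(n: int) -> int:
    """Map signed ints to unsigned so small negatives stay small (64-bit)."""
    return (n << 1) ^ (n >> 63)

def varint(n: int) -> bytes:
    """Encode an unsigned int 7 bits at a time; high bit marks continuation."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

# -1 zigzags to 1, so it costs one byte instead of a sign-extended ten
assert varint(zigzag(-1)) == b"\x01"
assert varint(300) == b"\xac\x02"
```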
Thrift RPC is also not tied to HTTP/2's super-special TLS handshake. Although it can be if you want. It's more piecemeal with how you want to layer filters and connect blocks together where gRPC has made all the decisions for you.
Not sure which one I like better at the moment.
gRPC in Nim has a few more steps, I'm afraid. It requires HTTP/2, and encrypted HTTP/2 requires some shenanigans, and I only just got the first part of that landed upstream. There also needs to be an HTTP/2 module to make use of it.
Thrift is a different protocol entirely and doesn't require that. It will work fine over HTTP/1.1 or websockets or just plain TCP. It's a little less popular but has a lot more language support.
Implemented codecs for THeader today. This is the 'standard' multiplexing header, which adds the ability to carry key/value headers and other things. Envoy supports pulling properties off THeaders to make routing decisions, for example.
In theory this means we should be able to talk with other Thrift providers, albeit manually.
Cleaned up THeader a bit more today (removed code duplication.) Also found Twitter's own multiplexer Mux but there is a bit more to that so it will have to come later.
THeader is a single header that goes in front of requests and responses which carries some metadata for routing/multiplexing etc. Facebook seems to have come up with this in fbthrift and it's found its way back to Apache.
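To make the framing concrete, here's a Python sketch of building a THeader frame as I understand the fbthrift layout: a 4-byte frame length, the 0x0FFF magic, flags, a sequence number, a header-size count in 4-byte words, then a header block (protocol id, transforms, key/value info headers) padded to a word boundary, then the payload. The constants (compact protocol id = 2, INFO_KEYVALUE = 1) are from memory and worth checking against the spec.

```python
import struct

HEADER_MAGIC = 0x0FFF  # THeader magic per fbthrift

def varint(n: int) -> bytes:
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def wire_str(s: bytes) -> bytes:
    """Varint length prefix followed by the raw bytes."""
    return varint(len(s)) + s

def theader_frame(payload: bytes, headers: dict, seq: int = 0) -> bytes:
    # header block: protocol id, transform count, then INFO_KEYVALUE pairs
    block = varint(2) + varint(0)          # protocol = compact(2), no transforms
    if headers:
        block += varint(1) + varint(len(headers))  # INFO_KEYVALUE, pair count
        for k, v in headers.items():
            block += wire_str(k) + wire_str(v)
    block += b"\x00" * (-len(block) % 4)   # pad header block to 4-byte words
    body = struct.pack(">HHIH", HEADER_MAGIC, 0, seq, len(block) // 4)
    body += block + payload
    return struct.pack(">I", len(body)) + body  # length excludes itself
```

This is what lets a proxy like Envoy peek at the key/value pairs without decoding the Thrift payload behind them.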
Mux is what Twitter uses. It's more complicated since it's a whole multiplexing layer that deals with named requests/responses, and there are messages to encode/decode and pass around which carry the Thrift payload (though Mux can carry other protocols too.) Took a look today but didn't keep going; have to do more digging and some codegen.
They use some very basic OO to deal with composing Thrift layers in a process. I imitated this with methods (the alternative is a struct full of function pointers, but that seems worse?) and will keep it unless there is a good reason not to:
...
method write_message_begin*(self: ref Protocol; name: string; kind: TypeKind; sequence_number: uint32) {.base.} = raise new_exception(Defect, ENotImplemented)
method write_message_end*(self: ref Protocol) {.base.} = raise new_exception(Defect, ENotImplemented)
method write_struct_begin*(self: ref Protocol; name: string) {.base.} = raise new_exception(Defect, ENotImplemented)
method write_struct_end*(self: ref Protocol) {.base.} = raise new_exception(Defect, ENotImplemented)
method write_field_begin*(self: ref Protocol; name: string; kind: TypeKind; id: int) {.base.} = raise new_exception(Defect, ENotImplemented)
method write_field_end*(self: ref Protocol) {.base.} = raise new_exception(Defect, ENotImplemented)
...
Traditional Thrift has clients block until the server responds. There is no actual need for this (and some have gone to gRPC because of it); it's entirely valid to fire and forget or rely on some kind of async callback system.
Twitter also uses their own addon called Finagle which wraps calls to services in Futures. It shouldn't be too much of a problem to do something similar in Nim where service calls cooperate with async (as much as that still gets used.)
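A hypothetical sketch (Python, illustrative names only) of why blocking isn't required: if each outgoing call carries a sequence number, replies can be dispatched to callbacks as they arrive, and a fire-and-forget call simply registers no callback. A Future-based layer like Finagle's is the same idea with the callback wrapped up.

```python
class AsyncClient:
    """Dispatch replies by sequence number instead of blocking per call."""

    def __init__(self, send):
        self.send = send      # function that puts a message on the wire
        self.pending = {}     # sequence number -> callback
        self.next_seq = 0

    def call(self, name, args, on_reply=None):
        seq = self.next_seq
        self.next_seq += 1
        if on_reply is not None:
            self.pending[seq] = on_reply   # request/response
        self.send((name, seq, args))       # fire-and-forget if no callback
        return seq

    def on_message(self, seq, result):
        cb = self.pending.pop(seq, None)   # ignore replies nobody waits on
        if cb:
            cb(result)
```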
Not really a fan of this kind of macro abuse but I did poke around with something like this:
import macros
macro thrift_idl(spec: untyped): untyped =
  dumptree:
    type
      ObjectNotExistException = object of Exception

    service naming:
      proc get_name(oid: int64 {1}): string {.throws: {1: ObjectNotExistException}.}
      proc set_name(oid: int64 {1}; name: string {2}) {.throws: {1: ObjectNotExistException}.}
Syntax abuse seems to let you get away with writing the IDL in a macro. Doesn't help you generate the interface for other systems though.
Clocked in seven hours (may have been some distractions) and have the generic protocol object and method interface in. Also wrapped the compact encoding format with it. Will need to do the same with THeaders and get some test code going.
That leaves the IDL generator and the actual communicating over the network stack. Which is partly out of scope but will probably have to work something out anyway. Add some testing and that should do it for having an RPC framework in Nim :thumbsup:
everyone is deprecating their Thrift endpoints
Any known reason for that?
I'm looking for a fast "RPC for IPC"; Thrift has more language support than Cap'n Proto, and I saw that there is also FBThrift.
In my case others closing their endpoints wouldn't be a problem, but I'd be worried about the library's future if nobody else is using it.
Maybe now that Cap'n Proto released version 1.0 it got more traction.
Related content: https://blog.cloudflare.com/scalable-machine-learning-at-cloudflare/
Any known reason for that?
Sure. People are only looking at the toolkits that were put out, not the protocols. Apache Thrift uses the old thread-per-client Java model and was never updated much, whereas Facebook and Twitter moved on and implemented new servers using the async APIs. Some of this is now open, though Google also pushed gRPC with Go and more people have switched to that--despite gRPC being significantly harder for the world to support: you have to have TLS-ALPN support in your libraries, and most don't, except Google's.
Maybe now that Cap'n Proto released version 1.0 it got more traction.
Capnproto has a sharp weakness: the way fields are packed is not specified. So independent implementations cannot be formally compatible.
Thrift is a stackable pancake system. You basically have a visitor pattern where you ask objects to write themselves to a visitor. There are different visitors that output different formats (such as the CompactBinary format.) It's also possible to have visitors that ultimately output XML-RPC, or Protobuf, or anything else using the same Thrift pipeline. Packaging your operations is also sorta up to you--there are some headers used by Facebook and Twitter, or you can try to shove them into the HTTP headers. It's not really that specified.
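The pancake idea in miniature (Python, with made-up names rather than the real Thrift API): the struct only knows how to walk itself over a visitor, so swapping the visitor swaps the output format without touching the struct.

```python
class Protocol:
    """Base visitor; concrete protocols override these."""
    def write_struct_begin(self, name): ...
    def write_field(self, name, fid, value): ...
    def write_struct_end(self): ...

class DebugProtocol(Protocol):
    """Renders the walk as a readable text form."""
    def __init__(self): self.out = []
    def write_struct_begin(self, name): self.out.append(f"{name}(")
    def write_field(self, name, fid, value): self.out.append(f"{fid}:{name}={value!r}")
    def write_struct_end(self): self.out.append(")")

class JsonishProtocol(Protocol):
    """Collects the same walk into a dict instead."""
    def __init__(self): self.fields = {}
    def write_struct_begin(self, name): self.name = name
    def write_field(self, name, fid, value): self.fields[name] = value
    def write_struct_end(self): pass

def write_person(proto, name, age):
    # the struct walks itself; it never knows the output format
    proto.write_struct_begin("Person")
    proto.write_field("name", 1, name)
    proto.write_field("age", 2, age)
    proto.write_struct_end()
```

The Nim method interface shown earlier is this same shape, with `{.base.}` methods standing in for the Python base class.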
Connect-RPC is pretty light (though I don't think I ever wrote code for it) and plays well with HTTP servers, so that's something you could look at. Capnproto I would have supported but for the packing methodology being opaque (you can read the source code for the compiler, but it's not in the spec doc, and I refused to rely on doing crossbar tests against the upstream compiler as a validity check.)
Nim won't get buy-in on a project that way.
If you want to use Nim, you'll find a way; if you don't, you'll find excuses. And yes, there are always a gazillion good reasons not to use language X, especially if X isn't mainstream.
If you want to use Nim, you'll find a way, if you don't, you'll find excuses.
If I want to use Nim on personal projects, I do. If I'm on a team where the tech choice is somewhat open, then I have to convince others that my choice makes sense in the context of the project. That's what "buy-in" means. And yeah, there are a gazillion reasons not to use Nim. My point is that if gRPC support existed, then, for some projects, there'd be a gazillion - 1 reasons; IME Thrift/CapnProto/MsgPack/... support matters for a much smaller set of projects. I'm pretty sure you understood that; if not, I'll try again: right now, for any language, supporting gRPC has more value than supporting the less popular contenders. You might say "more value to me", but I wasn't the one noting that Thrift seems to be losing popularity.
Fair enough but your point felt like "and now not even C++ wrappers are good enough".
Well then I'm glad I rephrased it, because that was not my intention at all. Nim's ability to interoperate with C and C++, amongst others, is excellent, and a selling point of Nim. Just not so much in this specific case, IMO. If you're writing a wrapper for say, a tested cryptography library, that makes perfect sense to me to bind to a C one and just use it in Nim, and not wait for a Nim version.