The Object Naming Service seems to have been mostly lost; objects are now always named by their URL instead of having an identity that can be placed at different locations.
The notion of object adapters is mostly forgotten too; the spec went on at length about whether a method was implemented by a dedicated process, a pool of workers, or something spawned on demand (what is now being called "function as a service" was just a portable object adapter that is started up when needed. Or, as I sardonically call them, "autoscaling microservices.")
So in a REST stack the object identifier becomes the URL, and the method name also becomes part of the URL. You POST to /posts/new with the arguments in the payload to pull off the method call. The object sort of blurs into the URL.
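A minimal sketch of that blurring in Nim with std/httpclient (the endpoint and payload here are made up for illustration):

```nim
import std/httpclient

# The "object" and the "method" have both collapsed into the URL;
# the arguments travel in the POST body.
let client = newHttpClient()
let resp = client.post("https://example.com/posts/new",
                       body = """{"title": "hello", "body": "world"}""")
echo resp.status
client.close()
```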
But for all the complexity CORBA was accused of, it's basically the entire web stack as we know it, already put together in one huge documentation bundle. We've just replaced object identifiers with URIs, object notation with a dozen not-better alternatives (JSON, XML, MessagePack, Protobuf, Thrift, probably more), and object adapters with servers and buzzwords ("my butt as a service, webscale!")
Spent a few hours working on HTTP/2 support, which is needed to do gRPC in Nim. Still haven't decided which of the two (Thrift or gRPC) I like better.
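For context, the layer HTTP/2 buys you: gRPC doesn't put messages directly on the wire, it wraps each encoded message in a five-byte prefix (a one-byte compressed flag plus a four-byte big-endian length) and ships that inside HTTP/2 DATA frames. A sketch of that outer framing (the proc name is mine):

```nim
# gRPC length-prefixed message framing:
# [compressed: 1 byte][length: 4 bytes big-endian][payload]
proc grpcFrame(payload: seq[byte]; compressed = false): seq[byte] =
  result.add(if compressed: 1'u8 else: 0'u8)
  let n = uint32(payload.len)
  for shift in [24, 16, 8, 0]:
    result.add byte((n shr shift) and 0xff)
  result.add payload
```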
ZeroC Ice is a lot simpler than CORBA, but it's still married to a specific concept of how objects work (ultimately Java-like objects on the network). It has a lot of machinery for stacking objects so that their parents can still be accessed when child objects aren't recognized, and then a facet system to deal with the versioning of those objects not working out well at scale. I noticed that Thrift and gRPC have done away with objects entirely: you are just talking to interfaces. Object details, inheritance, and all that are completely out the window, and there is just a Go-like model where an object either speaks a set of functions in a context or it does not. That also fits with how Nim understands objects, which is nice.
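A hedged sketch of what that looks like in Nim, with a made-up EchoServer type: a generic proc instantiates for any type that speaks the right functions, no base class required.

```nim
# Hypothetical service type: anything with a matching `echoMsg` proc
# "speaks the interface"; there is no inheritance hierarchy involved.
type EchoServer = object

proc echoMsg(s: EchoServer; msg: string): string =
  msg

proc handle[T](service: T; msg: string): string =
  # Compiles only if T has `echoMsg`: compile-time duck typing.
  service.echoMsg(msg)

echo handle(EchoServer(), "ping")   # prints: ping
```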
Current annoyances: Thrift compact is a little less elegant than proto3, I think. CBOR is my other favorite (no zigzag encoding, but still variable-length, and it even has type coding, including custom type codes if needed), but nothing uses it. While I was doing HTTP/2 penance it occurred to me that the Thrift support is basically done already (the protocol of "just throw naked encoded objects at clients/servers" is valid), while gRPC has multiple layers of penance, which might also be why Thrift is supported in tons of languages and gRPC in very few. Heck, just adding the THeader wrapper would probably make us compatible with most of the existing Thrift servers. That said, Thrift seems to be used only as an internal tool, while gRPC is getting pushed outward (e.g. Dropbox uses it for desktop clients; middleware like Envoy and SeaweedFS use it). Envoy has some basic support for proxying Thrift, but hm.
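For reference, the zigzag-plus-varint integer encoding that proto3 (for sint fields) and Thrift compact share, as a from-scratch sketch (names are mine, not any library's):

```nim
# Zigzag maps signed to unsigned so small negatives stay small:
# 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
proc zigzag(n: int64): uint64 =
  cast[uint64](n shl 1) xor cast[uint64](ashr(n, 63))

proc unzigzag(z: uint64): int64 =
  cast[int64](z shr 1) xor -cast[int64](z and 1)

# Varint: 7 payload bits per byte, high bit set on all but the last byte.
proc putVarint(buf: var seq[byte]; x: uint64) =
  var v = x
  while v >= 0x80'u64:
    buf.add byte((v and 0x7f) or 0x80)
    v = v shr 7
  buf.add byte(v)

var buf: seq[byte]
buf.putVarint(zigzag(-2))   # encodes to the single byte 0x03
assert unzigzag(3) == -2
```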
Cap'n Proto comes up from time to time. I liked it, but it's not fully specified: there are particulars of how the codegen packs structs that are not officially documented, and upstream doesn't want independent encoders out in the field.
I will probably end up implementing both (AFAIK we currently have neither). It is rather nice how Thrift is a collection of building blocks: you decide if and what compression you need, crypto, and so on, which is handy if you're doing something weird like a game engine.
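That building-block shape is easy to mimic: a transport is just a wrapper around a byte sink, and compression or crypto would be one more wrapper of the same shape. A toy sketch in the spirit of Thrift's TFramedTransport (the Writer type and proc names are mine):

```nim
# A framed layer prefixes each message with a 4-byte big-endian length
# and hands it to the next layer down.
type Writer = proc (data: seq[byte])

proc framed(next: Writer): Writer =
  result = proc (data: seq[byte]) =
    var frame: seq[byte]
    let n = uint32(data.len)
    for shift in [24, 16, 8, 0]:
      frame.add byte((n shr shift) and 0xff)
    frame.add data
    next(frame)

var wire: seq[byte]
let sink: Writer = proc (data: seq[byte]) = wire.add data
let transport = framed(sink)
transport(@[1'u8, 2, 3])
assert wire.len == 7   # 4-byte length header + 3 payload bytes
```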
May look into Avro support at some point. I've already done CBOR and EBML, and once the Thrift and gRPC stuff is done I don't plan on working on those kinds of interop systems anymore. These are more than enough.
Would be curious to hear anyone's opinions on Thrift and gRPC in particular.
Thanks for your thoughts about Cap'n Proto.
@ehmry has Preserves and Syndicate, which share some principles with Cap'n Proto. But I still don't get it.
Today I was reading about Aeron, which uses SBE (Simple Binary Encoding) for its messages.
Maybe iceoryx fits my case (I need IPC), but it also seems to be used inside eCAL, which has IHC (Inter-Host Communication) and could be used for pub/sub RPC.