Example:
import cborious

type Msg = object
  greeting: string
  value: int

let enc = toCbor(Msg(greeting: "hi", value: 42))
let dec = fromCbor(enc, Msg)
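The decode should hand back an equivalent object, so a quick sanity check along these lines ought to pass (a sketch, assuming plain field access on the decoded value):

assert dec.greeting == "hi" and dec.value == 42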
Feedback welcome—API ergonomics, tag coverage, and real‑world use cases especially!
The performance seems pretty decent:
# nim c -d:release -r tests/bench_cbor.nim
Benchmarking with iters=10000
cborious: one-shot size=41 bytes repr=@[164, 98, 105, 100, 24, 42, 100, 110, 97, 109, 101, 104, 78, 105, 109, 32, 85, 115, 101, 114, 102, 97, 99, 116, 105, 118, 101, 245, 102, 115, 99, 111, 114, 101, 115, 133, 1, 2, 3, 5, 8]
cbor_serialization: one-shot size=41 bytes repr=@[164, 98, 105, 100, 24, 42, 100, 110, 97, 109, 101, 104, 78, 105, 109, 32, 85, 115, 101, 114, 102, 97, 99, 116, 105, 118, 101, 245, 102, 115, 99, 111, 114, 101, 115, 133, 1, 2, 3, 5, 8]
cbor_em: one-shot size=41 bytes repr=@[164, 98, 105, 100, 24, 42, 100, 110, 97, 109, 101, 104, 78, 105, 109, 32, 85, 115, 101, 114, 102, 97, 99, 116, 105, 118, 101, 245, 102, 115, 99, 111, 114, 101, 115, 133, 1, 2, 3, 5, 8]
--- Results (encode + decode round-trip) ---
cborious: avg=61100 ns/op total=611 ms
cbor_serialization: avg=108131 ns/op total=1081 ms
cbor_em: avg=103738 ns/op total=1037 ms
cbor_serialization/cborious: 1.77x for 10000 iterations
cbor_em/cborious: 1.70x for 10000 iterations
# nim c -d:release -r "tests/bench_cbor_simple.nim"
Benchmarking with iters=10000
cborious: one-shot size=25 bytes repr=@[135, 24, 42, 24, 100, 245, 24, 100, 251, 64, 5, 191, 9, 149, 170, 247, 144, 250, 64, 73, 14, 86, 130, 1, 2]
cbor_serialization: one-shot size=88 bytes repr=@[167, 102, 70, 105, 101, 108, 100, 48, 24, 42, 102, 70, 105, 101, 108, 100, 49, 24, 100, 102, 70, 105, 101, 108, 100, 50, 245, 102, 70, 105, 101, 108, 100, 51, 24, 100, 102, 70, 105, 101, 108, 100, 52, 251, 64, 5, 191, 9, 149, 170, 247, 144, 102, 70, 105, 101, 108, 100, 53, 250, 64, 73, 14, 86, 102, 70, 105, 101, 108, 100, 54, 162, 102, 70, 105, 101, 108, 100, 48, 1, 102, 70, 105, 101, 108, 100, 49, 2]
cbor_em: one-shot size=25 bytes repr=@[135, 24, 42, 24, 100, 245, 24, 100, 251, 64, 5, 191, 9, 149, 170, 247, 144, 250, 64, 73, 14, 86, 130, 1, 2]
--- Results (encode + decode round-trip) ---
cborious: avg=30962 ns/op total=309 ms
cbor_serialization: avg=207246 ns/op total=2072 ms
cbor_em: avg=46876 ns/op total=468 ms
cbor_serialization/cborious: 6.69x for 10000 iterations
cbor_em/cborious: 1.51x for 10000 iterations
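For reference, what gets timed is just an encode+decode round trip per iteration. A minimal sketch of that shape (not the actual tests/bench_cbor.nim, reusing toCbor/fromCbor from the example above and a hypothetical benchRoundTrip helper):

import std/[monotimes, times]
import cborious

type Msg = object
  greeting: string
  value: int

proc benchRoundTrip(iters: int): float =
  ## Average nanoseconds per encode+decode round trip.
  let msg = Msg(greeting: "hi", value: 42)
  let start = getMonoTime()
  for _ in 1 .. iters:
    let enc = toCbor(msg)
    discard fromCbor(enc, Msg)
  let elapsed = getMonoTime() - start
  result = elapsed.inNanoseconds.float / iters.float

echo "avg=", benchRoundTrip(10_000), " ns/op"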
Dazzling! I remember when I started with Rust, I tried encoding some data with CBOR to improve performance.
All I found was the discontinued serde_cbor; the others were too complicated to use. I understand Rust developers wanting concise code, but you could offer two classes of API: one with whatever custom, fine-grained operations you want, and one for simple one-liner use, like this library does with toCbor() and fromCbor() 👍🏻👍🏻
That led me down a bit of a rabbit hole looking into Rust's CBOR world. It looks rough... The top CBOR crates are archived, and the newer ones look all over the place. All three Nim CBOR libs look more complete and easier to use. Obviously Cborious is the most complete! It implements self-describing CBOR ;)
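For context, self-describing CBOR is just a data item wrapped in tag 55799 (RFC 8949 §3.4.6), which encodes to the three-byte prefix 0xD9 0xD9 0xF7. A minimal sketch of that prefix, assuming toCbor returns a seq[byte] (as the benchmark repr output suggests), not necessarily how cborious itself exposes it:

import cborious

type Msg = object
  greeting: string
  value: int

let payload = toCbor(Msg(greeting: "hi", value: 42))
# Tag head 0xD9 = major type 6 with a 2-byte argument; 0xD9F7 = 55799.
let selfDescribed = @[0xD9'u8, 0xD9'u8, 0xF7'u8] & payload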
Actually I quite enjoy that a few hours of prompting GPT5 on a side project can be so productive with Nim. Truth be told, I was using the work on Cborious as a way to get over "writer's block" in the mornings! It helps me overcome procrastination.