And actually autodiff was my very first foray into Nim: https://github.com/mratsim/nim-rmad (this uses a similar approach to spaCy's thinc: https://github.com/explosion/thinc, https://github.com/explosion/thinc/blob/master/thinc/layers/add.py#L42-L48)
The author of https://github.com/autodiff/autodiff wanted to write an autodiff library in Nim 4 years ago: https://forum.nim-lang.org/t/5157#32348
There may be some other stuff that I'm missing though. Generally I would pick the right tool for the job; having a single library that covers both ML and other autograd use cases is tricky. Forward-mode autograd is often more applicable when you're not dealing with large neural networks, since its cost scales with the number of inputs rather than the number of outputs.
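For illustration, here is a minimal forward-mode sketch using dual numbers (hypothetical types and names, not taken from nim-rmad or any particular library): each value carries its derivative alongside it, so a single forward pass yields both f(x) and f'(x).

```nim
import math

type Dual = object
  val: float    # primal value
  deriv: float  # derivative w.r.t. the chosen input

proc dual(x: float, d = 0.0): Dual =
  Dual(val: x, deriv: d)

proc `+`(a, b: Dual): Dual =
  Dual(val: a.val + b.val, deriv: a.deriv + b.deriv)

proc `*`(a, b: Dual): Dual =
  # product rule
  Dual(val: a.val * b.val, deriv: a.deriv * b.val + a.val * b.deriv)

proc sin(a: Dual): Dual =
  Dual(val: sin(a.val), deriv: cos(a.val) * a.deriv)

# f(x) = x*x + sin(x); seed the derivative with 1.0 to differentiate w.r.t. x
let x = dual(2.0, 1.0)
let y = x * x + sin(x)
echo y.val    # f(2)  = 4 + sin(2)
echo y.deriv  # f'(2) = 4 + cos(2)
```

With n inputs you would repeat this with n different seed derivatives (or carry a vector of them), which is why forward mode only wins when the number of inputs is small.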
I think the way it works in the Julia and PyTorch ecosystems is that they run the code forwards once, record the operations and their derivatives on some kind of tape, and then use reflection facilities to walk the tape backwards and generate the code that backpropagates the changes.
That is a bit trickier to do in Nim.
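For a rough picture of the tape mechanism itself, here is a minimal, hypothetical sketch in Nim (my own names and structure, not how nim-rmad, Julia, or PyTorch actually implement it): the forward pass records each operation's inputs and local partial derivatives, and the backward pass walks the tape in reverse, accumulating gradients via the chain rule.

```nim
type
  Node = object
    parents: array[2, int]    # tape indices of this op's inputs
    partials: array[2, float] # local derivative w.r.t. each input
  Var = object
    idx: int     # position on the tape
    val: float

var tape: seq[Node]  # a single global tape, for simplicity

proc variable(val: float): Var =
  tape.add(Node())   # leaf node: no parents, zero partials
  Var(idx: tape.high, val: val)

proc `+`(a, b: Var): Var =
  tape.add(Node(parents: [a.idx, b.idx], partials: [1.0, 1.0]))
  Var(idx: tape.high, val: a.val + b.val)

proc `*`(a, b: Var): Var =
  # product rule: d(ab)/da = b, d(ab)/db = a
  tape.add(Node(parents: [a.idx, b.idx], partials: [b.val, a.val]))
  Var(idx: tape.high, val: a.val * b.val)

proc grad(output: Var): seq[float] =
  # seed the output's adjoint with 1, then push adjoints back through the tape
  result = newSeq[float](tape.len)
  result[output.idx] = 1.0
  for i in countdown(tape.high, 0):
    for k in 0 .. 1:
      result[tape[i].parents[k]] += tape[i].partials[k] * result[i]

# f(x, y) = x*y + x  =>  df/dx = y + 1, df/dy = x
let x = variable(3.0)
let y = variable(4.0)
let f = x * y + x
let g = grad(f)
echo g[x.idx]  # 5.0
echo g[y.idx]  # 3.0
```

This sketch relies on explicit operator overloading to build the tape; the harder part is generating such a tape automatically from ordinary user code, which is where the reflection machinery mentioned above comes in.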