One of our use cases requires interfacing with ffmpeg. I used cpp to circumvent many of c2nim's limitations (e.g. macros within structs). However, I can't come to grips with type definitions that aren't declared top to bottom.
Specifically, I had to reorder many types in the avformat.nim file.
The diff between the automatically generated .nim files (preprocessed .h files --> c2nim) and the manually reordered ones is on GitHub: https://github.com/ahirner/nim-ffmpeg/compare/auto_only?expand=1#diff-7dc1342505fab6283c80ade1f70d8c78
An early version seems to work though. I just started using nim and love it!
Yes, the reordering can be a lot of manual work.
For the gintro module, I do it automatically now, but it takes a few minutes to finish -- and it is based on gobject-introspection.
The ugly way is to put all types into a single type section at the front of the module. Many modules from the early days, like the gtk2 ones, do that. But of course that is mud mixing; the resulting module is very unfriendly for the human reader. Someone recommended in the past to ship one mud-mixed module for the compiler and a virgin one for humans. You may do that.
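For illustration, a tiny sketch of what such a collated section looks like (the names are made up, not the actual ffmpeg types): inside a single type section, types may reference each other regardless of their order, so the generated objects no longer need topological sorting.

type
  Demuxer = object            # hypothetical example types, not the real ffmpeg ones
    streams: seq[Stream]      # Stream is only declared below, which is fine here
  Stream = object
    params: ptr CodecParams
  CodecParams = object
    width, height: cint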
Or, you may ask Araq for the name of the pragma which allows forward declarations. I tested it one year ago and it was indeed working fine, but it was not really advertised or recommended. Or, you may wait -- future compiler versions may support forward declarations out of the box.
What will you use it for?
If it's to manipulate video data (preprocess, trim, filter it: convolution, denoising, etc.), it might be much easier to use ffms2.
Or maybe provide bindings to Avisynth (Windows-only) or VapourSynth (a Python lib with a C++ backend).
If it's to play the video, I can't help you, but there are plenty of open-source video players.
Note: I plan "some time"™ to add video load/save to Arraymancer to easily use, filter, and re-encode videos in deep learning pipelines (for face/object detection in videos, for example), so I would be willing to contribute/kickstart FFMPEG/FFMS <-> Nim bindings.
It's perfectly fine to collate all definitions into a single type block. I already handle includes programmatically. What would be an idiomatic way to post-process .nim files? My naive approach would be to parse the AST, reorder it, and dump it again. However, I'm not sure whether Nim can parse a file that's invalid to begin with.
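Just as a rough sketch of what I have in mind, assuming the compiler API (compiler/parser, compiler/renderer and friends) can be used this way and that it accepts a module whose types are out of order; I haven't verified the exact signatures, which may differ between compiler versions:

import compiler / [ast, idents, options, parser, renderer]

proc hoistTypeSections(source, filename: string): string =
  ## Parse a module, move all top-level type sections to the front,
  ## keep everything else in its original order, and render Nim source again.
  let cache = newIdentCache()
  let conf = newConfigRef()
  let module = parseString(source, cache, conf, filename)
  var reordered = newNode(nkStmtList)
  # first pass: collect the type sections
  for node in module:
    if node.kind == nkTypeSection:
      reordered.add node
  # second pass: append everything else
  for node in module:
    if node.kind != nkTypeSection:
      reordered.add node
  result = renderTree(reordered)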
Thanks for outlining the best practices, Stefan. I think the gobject approach is also very neat.
Hi mratsim, the libraries provide good medium-level access. AFAIK they don't expose motion vectors though. So I'd either have to extend and depend on them, or go bare metal. I opted for the second choice (to also learn Nim) and to replicate this as a starter: https://github.com/vadimkantorov/mpegflow
Do you think wrapping ffms2 would be less effort than wrapping ffmpeg directly?
I'm working on real-time object detection professionally. Motion vectors from (hardware) encoders allow us to better extract objects. FWIW, calling ffmpeg as a subprocess is very practical for almost all other manipulation/streaming tasks we need. Personally, I think video processing with Arraymancer would make a great showcase.
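To give an idea, a minimal sketch of how such a subprocess call might look from Nim (the ffmpeg arguments and file names are placeholders, not what we actually run):

import osproc

proc remux(input, output: string): string =
  ## Run the ffmpeg CLI as an external process; "-c copy" only remuxes,
  ## i.e. it copies the streams into a new container without re-encoding.
  execProcess("ffmpeg",
              args = ["-y", "-i", input, "-c", "copy", output],
              options = {poUsePath, poStdErrToStdOut})

echo remux("input.mp4", "output.mkv")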
Example to blur a blocky area with an occlusion mask:
AVISource("c:\test.avi") # or MPEG2Source, DirectShowSource, some previous filter, etc.
super = MSuper()
vectors = MAnalyse(super, isb = false)
compensation = MCompensate(super, vectors) # or use the MFlow function here
# prepare blurred frame with some strong blur or deblock function:
blurred = compensation.DeBlock(quant = 51) # use DeBlock function here
badmask = MMask(vectors, kind = 2, ml = 50)
overlay(compensation, blurred, mask = badmask) # or use faster MaskedMerge function of MaskTools
This can be fed directly to mencoder or to x264 compiled with avs (Avisynth script) support. MPV can read avs/vapoursynth scripts directly, so you get a "REPL" for video files with vapoursynth/avisynth.
And last, port to Vapoursynth: