Yep! Well mostly depending on what you're doing. What sort of visualization are you wanting?
I've been looking into this for Figuro. The most generic form is doing vector rendering on GPUs, which lets you implement standard GUI widgets. There's a lot of research out there on this, but there seem to be few implementations used day-to-day, even by the big tech companies, which surprised me.
The best general vector renderer is Rive's renderer, which uses a combination of tessellation geometry and texture shaders. They added feathering, which lets them do shadows and glows on the GPU efficiently. There's no ability to do other fancy shader tricks AFAICT. It's also a fairly complex C++ project which would require a fair bit of setup to use as just a vector-based GUI renderer. It'd not be impossible though, and something I thought of using for Figuro's backend renderer. Probably via a Nim/C++ wrapper library that could be compiled and used from Nim/C.
Given the complexity of pulling in a large C++ project I started down the path of using signed distance functions in Figuro which can be used on GPUs and are very simple to implement. They should also enable adding arbitrary shader effects efficiently.
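To give a feel for how simple SDFs are, here's the standard rounded-box distance formula as a Python sketch (in practice this would live in a fragment shader; the function name and signature are mine, not from Figuro or SDFY):

```python
import math

def sdf_rounded_box(px, py, cx, cy, half_w, half_h, radius):
    """Signed distance from point (px, py) to a rounded box centered
    at (cx, cy). Negative inside the shape, positive outside."""
    # Offset from the box edges, with the corners shrunk by the radius
    qx = abs(px - cx) - (half_w - radius)
    qy = abs(py - cy) - (half_h - radius)
    outside = math.hypot(max(qx, 0.0), max(qy, 0.0))
    inside = min(max(qx, qy), 0.0)
    return outside + inside - radius

# The center is well inside (negative), a far-away point is outside (positive):
print(sdf_rounded_box(0, 0, 0, 0, 10, 10, 2) < 0)   # True
print(sdf_rounded_box(20, 0, 0, 0, 10, 10, 2) > 0)  # True
```

The whole shape is a handful of arithmetic ops per pixel, which is why SDFs map so well to GPUs.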
I made an experiment with shady to draw GUI elements using signed distance functions in a shader. However, setting up a texture buffer scheme is a bit of work and I'm not experienced with OpenGL myself. Though it's more tedious than difficult.
Eventually I plan to add it to Figuro's backend renderer. It's partly why I recently moved to using SDF-based GUI elements with SDFY. This will help reduce the differences between the renderer backends.
However I'm busy with contracts at the moment so may be a few months before I get to try it.
I've been picking up a bit more OpenGL lately. What's stood out to me as limiting 2D GUI performance is doing masking (clipping) and layering effects. I'm not an expert by any means, but from reading around a bit it seems there are still some pretty big limits with GPUs + 2D GUIs.
With OpenGL (and more modern APIs) you batch a bunch of drawing commands to the GPU along with texture data until you fill the draw buffer. Then you flush the commands and repeat until the frame is done.
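In rough Python pseudocode, that batch-and-flush loop looks something like this (names and the batch size are mine, not any particular API's):

```python
def render_frame(draw_commands, batch_capacity=1024):
    """Sketch of a batched renderer: accumulate draw commands,
    flush to the GPU whenever the batch fills, repeat until done."""
    batch = []
    flushes = 0
    for cmd in draw_commands:
        batch.append(cmd)
        if len(batch) >= batch_capacity:
            # Flush: submit vertex/texture data and sync with the GPU
            batch.clear()
            flushes += 1
    if batch:  # submit whatever is left at the end of the frame
        batch.clear()
        flushes += 1
    return flushes

print(render_frame(range(3000)))  # → 3 flushes for 3000 commands
```

The performance story below is mostly about how many times that flush step runs per frame.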
As a first order approximation lots of large / full batches give you good rendering performance generally (avoiding really complex shaders, etc). This isn't too different from standard IO in other fields.
However, masking or doing layering effects seems to require rendering a scene to a frame buffer and then applying the filter or shader effects. Rendering to a frame buffer requires flushing the current batch of drawing commands, which is a real performance killer.
This means that if you're doing general GUI widgets which include clipping, shadows, glows, or other shader effects, performance will tank, just due to needing hundreds of GPU syncs instead of a few dozen.
This seems to affect even Apple, despite controlling the GPU hardware and API with Metal. The performance hit from their new "glass" effect is huge from my testing. I'm sure they'll optimize it more in the future, but it's so slow on my M1 iPad. Maybe others can chime in, but it seems to me they're also doing the glass effect over only a few "layers", I suspect partly to reduce the GPU flushing overhead.
My current wild idea for Figuro's backend is to do masking using SDFs as a standard part of drawing each GUI element. Say you're drawing a rounded box inside another widget, and only half of the rounded box should show since it extends outside the parent bounds and should be clipped. In this design you call the parent widget's SDF with its inputs, but only use it to decide whether to discard the current pixel or continue to draw it.
If this design works efficiently, you could apply a custom shader to each widget, possibly including blurs and other effects without much overhead. I'm not 100% sure but it seems feasible from the research I've done.
There's no need to render to a frame buffer as the masking is handled in each GUI element's shader. It _should_ be efficient as you only need to calculate the mask for the pixels in the current element. Most masked regions in a GUI only tend to go 3-4 masks deep, so I'm hoping it'll work in practice as well.
The big iffy part of this scheme's performance would be the branching needed both for deciding to discard a pixel and for dynamically chasing your parent SDFs' inputs and branching on each. You need to trace your parent SDFs and their inputs. So a rounded box SDF would need the XY center, the size, the four corner radii, and an index to any parent SDF that masks it. That's about 8 floats each; maybe using int8s or int16s for some to save space. Unfortunately that's probably too large to pre-allocate, say, 4 masks per element.
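Just to put numbers on that layout, here's one hypothetical packing of such an entry (the field layout is my guess at the description above, not Figuro's actual format):

```python
import struct

# Hypothetical packed layout for one rounded-box SDF entry: center (2f),
# size (2f), four corner radii (4f), and an index to the parent SDF
# entry (-1 for the root). Little-endian, tightly packed.
SDF_ENTRY = struct.Struct("<2f 2f 4f i")

def pack_entry(cx, cy, w, h, r0, r1, r2, r3, parent=-1):
    return SDF_ENTRY.pack(cx, cy, w, h, r0, r1, r2, r3, parent)

print(SDF_ENTRY.size)  # 36 bytes per entry, so pre-allocating 4 masks
                       # per element would cost 144 bytes each
```

Storing just a parent index and walking the chain keeps each element small, at the cost of the dynamic branching mentioned above.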
Apparently SDFs are used in games for shader mapping bounds, so it may be feasible. Likely the question will be how it performs on mobile GPUs. But then it could fall back to a texture atlas method.
I was thinking of fast-changing audio waveforms- have a shader take every 1024th sample or so and draw a pixel. Similarly, filter curves, spectra, and various linear or circular displays.
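As a rough sketch of that decimation step (plain Python; the stride is the "every 1024th sample" from above, and the names are illustrative):

```python
def decimate(samples, stride=1024):
    """Take every stride-th audio sample so a long buffer maps to
    roughly one value per horizontal pixel of the waveform display."""
    return samples[::stride]

wave = list(range(48000))   # one second of fake 48 kHz audio
pixels = decimate(wave)
print(len(pixels))          # → 47 values to draw for the frame
```

A shader version would do the same stride-indexed lookup into a sample buffer per output pixel.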
The "pure GPU" idea is marketing speak from the Vital synthesizer, which is the best open source music application I've ever seen. But it's also possible that regular widgets are used with some GPU acceleration; I'm still trying to figure that out.
The screenshot is from the Helm synth, Vital's predecessor, which is already insanely good. I've no idea why I missed it when it came out 10 years ago.
So my idea would be to have a Figuro GUI and then add a few dabs of OpenGL (or similar) for the curves.
This would be part of a toolkit approach to more easily and flexibly build synthesizers in Nim.
Nice! There's a lot of overlap here with some of my goals with Figuro which includes real-time graphing for IoT and sensor data, etc.
Note that Figuro inherits a sort of 2D graphics box from Fidget, but doesn't implement it currently. The idea seems similar to JUCE's Box2DRenderer. Everything is there if an enterprising soul wanted to take a stab at implementing it. The idea is to just send a list of lines, points, polygons, and such without the overhead of a full widget for each.
This in theory would enable drawing graphs of any sort like you show above using basic geometries.
> I was thinking of fast-changing audio waveforms- have a shader take every 1024th sample or so and draw a pixel. Similarly, filter curves, spectra, and various linear or circular displays.
That would be cool. Especially if you want to do lots of different views of data. However for general plots like in the example above I believe that a standard 2D geometry plotter would be sufficient.
Figuro can run at 60FPS already which is the general limit for GUIs. Though in theory doing 120FPS would be harder and need a setup like you're saying.
Otherwise, I'm not sure that sending and processing the audio on the GPU would gain too much. Though again, it'd be awesome.
One limit to my mind is that generally you still have to send the draw calls for each frame. That surprised me, as I'd always imagined that games were uploading all the geometry and then just tweaking it on the fly. Perhaps game engines do stuff like that, but standard OpenGL doesn't seem to work that way.
The case where pure on-GPU data visualization helps is when you have millions of data points to plot and you want to interactively work with it. Something like Datashader.
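The core of the Datashader-style approach is aggregating points into a pixel grid before drawing anything. A pure-Python sketch (function name and grid sizes are mine, and Datashader itself does this with vectorized NumPy/Numba code):

```python
import random

def bin_to_pixels(xs, ys, width, height, xmax, ymax):
    """Count how many data points land in each screen pixel, collapsing
    millions of points into a small width x height density grid that can
    then be colormapped and drawn as a single image."""
    grid = [[0] * width for _ in range(height)]
    for x, y in zip(xs, ys):
        px = min(int(x / xmax * width), width - 1)
        py = min(int(y / ymax * height), height - 1)
        grid[py][px] += 1
    return grid

random.seed(0)
n = 100_000
xs = [random.random() for _ in range(n)]
ys = [random.random() for _ in range(n)]
grid = bin_to_pixels(xs, ys, 64, 64, 1.0, 1.0)
print(sum(map(sum, grid)))  # → 100000: every point lands in some pixel
```

Doing this aggregation on the GPU is what makes interactively panning and zooming millions of points tractable: the per-frame cost depends on the pixel grid, not the point count.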