Designing a signal processing environment (RFC) - part 2
In these posts I’m thinking out loud about how a software signal processing environment I’m developing should look and behave. Check out part 1 if you haven’t seen it yet.
More on Signal Loops
Open source modular audio processors already exist, but none of them fulfils my requirements.
Most of them (e.g. BespokeSynth, Pure Data, JACK) are restricted to processing audio in blocks of tens to thousands of samples (depending on how much overhead your system can tolerate while still running in real time). As long as the graph has an evident dependency order, that’s not a problem - you can run the modules in that order and there will be no apparent delay other than the obvious block-size delay present in all general-purpose-computer-based DSP systems.
However, if you have a signal loop, there is no dependency order. You simply have to feed some modules with data produced earlier. That is equivalent to inserting a delay line of one block length into the loop - so it influences the sound! I’ve written about it in part 1.
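A minimal sketch of why that happens (C++, all names hypothetical): in block processing there is no valid order for a loop, so the scheduler has to hand one module the other’s previous block.

```cpp
#include <vector>

struct Module {
    // Process one block of samples: read `in`, write `out`.
    virtual void process(const float* in, float* out) = 0;
    virtual ~Module() = default;
};

// Two modules feeding each other: A -> B -> back into A.
void runLoopOnce(Module& a, Module& b,
                 std::vector<float>& aOut,  // A's output, consumed by B
                 std::vector<float>& bOut)  // B's output, fed back to A
{
    // There is no valid order: A needs B's current block and vice versa.
    // The scheduler breaks the tie by feeding A whatever B produced on the
    // *previous* call - exactly a one-block delay line inside the loop.
    a.process(bOut.data(), aOut.data());
    b.process(aOut.data(), bOut.data());
}
```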
VCV Rack/Cardinal works differently, sacrificing performance: it processes one sample at a time, so signal loops cause much shorter delays. They can be made even shorter when oversampling is enabled. But they still exist and still influence the sound.
Going down to zero latency in loops. New plugin API
I came across a description of this technique on Urs Heckmann’s (u-he) blog. I haven’t tested it yet, but it looks promising: we can process the loop multiple times without advancing the clock, until the signals converge.
It requires a special plugin API with separate compute and tick functions; a derivative function would also be helpful for the convergence computation.
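Here is a sketch of what such an API and the convergence step could look like. All names (compute, tick, derivative, solveLoopSample) and the plain fixed-point iteration are my own guesses for illustration; the actual technique may differ, e.g. by using the derivative for Newton iteration to converge faster.

```cpp
#include <cmath>

struct Plugin {
    // Pure function of (internal state, input) -> output. May be called
    // many times for the same sample while the host searches for a fixed point.
    virtual float compute(float input) const = 0;
    // Commit: advance the internal state by one sample using the final input.
    virtual void tick(float input) = 0;
    // Optional: d(output)/d(input) at the current state, useful for a
    // Newton-style solver; 0 means "not provided" in this sketch.
    virtual float derivative(float) const { return 0.0f; }
    virtual ~Plugin() = default;
};

// One sample of a single-plugin feedback loop: iterate compute() without
// advancing time until the fed-back signal stops changing, then tick() once.
float solveLoopSample(Plugin& p, float external, int maxIters = 32)
{
    float fb = 0.0f; // initial guess (a real host would warm-start with the previous sample)
    for (int i = 0; i < maxIters; ++i) {
        float out = p.compute(external + fb);
        if (std::fabs(out - fb) < 1e-6f) { fb = out; break; }
        fb = out;
    }
    p.tick(external + fb); // the clock advances exactly once per sample
    return fb;
}
```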
And processing in blocks is useless in that use case. It may still serve as a performance optimization when there is no signal loop, or when a delay inside the loop is acceptable. But even then it doesn’t have to be part of the API: the one-sample process wrapped in a loop could be generated by the compiler. That’s what polymorphism (preferably with zero runtime cost) and templates in programming languages are for, as the sketch below shows.
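For example, a block wrapper can be generated from the one-sample interface at compile time (a sketch, reusing the hypothetical compute/tick shape from above):

```cpp
#include <cstddef>

// Turn any one-sample processor into a block processor at compile time.
// After the compiler inlines compute() and tick(), this is just a plain
// loop over the per-sample DSP code - no per-sample call overhead.
template <typename P>
void processBlock(P& plugin, const float* in, float* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i) {
        out[i] = plugin.compute(in[i]);
        plugin.tick(in[i]);
    }
}
```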
Overcoming overhead
If we want to call all the modules once per sample (or multiple times per sample until the signals converge), the function call overhead will dwarf the real DSP work.
But what if we could generate code containing all the function calls needed to pass the signal through the whole graph, not just a single processor, and load it as a plugin (shared library)?
Firstly, the graph virtual machine overhead is eliminated - we no longer need to traverse a list of pending function calls, dereference pointers and call them.
Secondly, the compiler’s optimizer will inline the function calls, meaning far fewer jumps and far less useless work for the CPU. Only the actual computational work remains.
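For a tiny two-module chain, the generated source could look roughly like this (everything here is made up for illustration; a real generator would paste in or #include each module’s own code):

```cpp
#include <cstddef>

// Hypothetical generated module instances; in reality each struct's
// compute/tick would be the module's own DSP code.
struct Gain {
    float g = 0.5f;
    float compute(float x) const { return g * x; }
    void tick(float) {}
};
struct OnePole {
    float z = 0.0f, a = 0.1f;
    float compute(float x) const { return z + a * (x - z); }
    void tick(float x) { z += a * (x - z); }
};

// The generated "whole graph" entry point: input -> Gain -> OnePole -> output.
// Once compiled, the optimizer inlines everything, leaving only arithmetic.
extern "C" void graph_process(const float* in, float* out, std::size_t n)
{
    static Gain gain;
    static OnePole filter;
    for (std::size_t i = 0; i < n; ++i) {
        float s = gain.compute(in[i]);
        gain.tick(in[i]);
        out[i] = filter.compute(s);
        filter.tick(s);
    }
}
```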
All of this means that our compiled graph will have performance comparable to the regular plugins we know today - LV2, VST, etc.
And we have another reason why plugins should be distributed as source code - otherwise inlining won’t work. The first reason was digital preservation.
Multi-sample-rate
Have you heard of oversampling? It’s necessary if we’re doing non-linear operations on a signal that contains frequencies up to the Nyquist frequency - otherwise we get aliasing. Some plugins today do it: they interpolate (increase the sample rate and filter) for internal processing and decimate (filter and decrease the sample rate) when returning the processed samples to the host. Then, if we use another oversampling plugin, the process repeats. What a waste of CPU cycles! And sample rate conversion introduces latency! Couldn’t we run parts of the processing chain at a different sample rate, and return to the base sample rate only when it makes sense - when all non-linear processing is done?
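A toy sketch of the structural difference (names made up, and the proper interpolation/decimation filters are omitted - this only shows where the conversions sit): the whole non-linear section runs at 2x and converts once at its boundaries, instead of each plugin converting on its own.

```cpp
#include <cmath>
#include <cstddef>

// Two non-linear per-sample processors that benefit from oversampling.
inline float saturate(float x) { return std::tanh(2.0f * x); }
inline float fold(float x)     { return std::sin(3.0f * x); }

// Today: each plugin would interpolate and decimate on its own (two
// conversion pairs, twice the latency). Proposed: keep the whole
// non-linear section at 2x and convert once at its boundaries.
void nonlinearSection2x(const float* in, float* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i) {
        float hi[2] = { in[i], in[i] };            // crude 2x interpolation (no filter!)
        for (float& s : hi) s = fold(saturate(s)); // the whole chain runs at 2x
        out[i] = 0.5f * (hi[0] + hi[1]);           // crude decimation (no filter!)
    }
}
```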
Exposing implicit parameters
Most plugins, especially the more music-making-oriented ones, need some parameters from the host: sample rate, tempo, meter, current position within the session, tuning, key, scale. We normally don’t see them in the host UI; they are passed implicitly to the plugin.
But what if we wanted to pitch-shift a particular group of instruments (modulate their tuning)? Or slow down only the drum machine (change its tempo)? We should be able to override the signals going into those implicit inputs without affecting the whole session.
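A sketch of how that could look if the implicit parameters were ordinary, overridable inputs (all names are illustrative):

```cpp
// Implicit host parameters become explicit inputs with session-wide defaults.
struct TransportSignals {
    float sampleRate = 48000.0f;
    float tempoBpm   = 120.0f;
    float tuningHz   = 440.0f;  // reference pitch for A4
};

struct DrumMachine {
    double phase = 0.0;
    // Advance a 16-step sequencer from the (possibly overridden) tempo.
    int stepForSample(const TransportSignals& t) {
        phase += t.tempoBpm / 60.0 / t.sampleRate;  // beats per sample
        return static_cast<int>(phase * 4) % 16;    // 4 steps per beat
    }
};

// Usage: the session stays at 120 BPM, but this one instance receives an
// overridden tempo signal, so only the drum machine slows down.
// TransportSignals slowed = session; slowed.tempoBpm = 60.0f;
// int step = drums.stepForSample(slowed);
```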
Cartridges and Slots
They are a high-level metaphor for a module that is inserted into another module and defines part of its host’s behavior. For example, a compressor can have various compression curves, so the compressor has a slot and the user can insert whichever compression-curve cartridge they want - bundled with the compressor or third-party - as long as it has a compatible pinout.
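A sketch of what the slot and its pinout could look like in code (all names are made up, and the compressor’s envelope detection is omitted for brevity):

```cpp
#include <algorithm>
#include <cmath>
#include <memory>

// The slot's pinout: gain change (in dB) as a function of input level.
struct CompressionCurve {
    virtual float gainFor(float levelDb) const = 0;
    virtual ~CompressionCurve() = default;
};

// A bundled cartridge: hard-knee 4:1 curve above -18 dB.
struct HardKnee4to1 : CompressionCurve {
    float thresholdDb = -18.0f;
    float gainFor(float levelDb) const override {
        float over = std::max(0.0f, levelDb - thresholdDb);
        return -over * (1.0f - 1.0f / 4.0f);
    }
};

struct Compressor {
    std::unique_ptr<CompressionCurve> curveSlot;  // user-swappable cartridge
    float process(float x) const {
        float levelDb = 20.0f * std::log10(std::max(std::fabs(x), 1e-6f));
        float gainDb  = curveSlot ? curveSlot->gainFor(levelDb) : 0.0f;
        return x * std::pow(10.0f, gainDb / 20.0f);
    }
};

// Usage: comp.curveSlot = std::make_unique<HardKnee4to1>();
// A third-party cartridge only needs to implement CompressionCurve.
```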
Any chance of forking?
Given that existing apps already have some of the features I need, could I use them as a base?
As for the audio engine and modules, I don’t think so. Most modules would need to be refactored to adapt to the new API with separate compute and tick functions.
As for the frontend, BespokeSynth is my favourite. The UX is straightforward and the UI is intuitive.
Comments?
This is a Request for Comments, so if you think you know how to implement some of the ideas I’ve written about, or how to improve them, write about it:
drop an e-mail at comments at lumifaza dot org