Time-aligning audio signals using free software

How to time-align (phase-align) audio tracks using a free (as in freedom), open source DAW and plugins.

If you have the same sound source recorded from different distances, for example the mixer’s line-out and the room sound captured onto a single multitrack recorder, time-aligning the recordings improves the sound quality and makes mixing and mastering easier. I’ll assume these are two separate tracks in your DAW.

The procedure, in any free software DAW with a phase detector and a phase correlation meter plugin available, is as follows:

  1. Create a stereo bus for measurement.
  2. Send the tracks to the measurement bus, one track to the left channel and the other to the right. If the tracks are stereo, downmix them to mono or send only one of their channels to the measurement bus.
  3. Insert a highpass filter or parametric EQ on the bus, but leave it bypassed for now.
  4. After the filter, insert the measurement plugins: a phase detector and a phase correlation meter.
  5. Find a fragment of your session in which the signals are likely to be most correlated, i.e. one without extraneous sounds such as a talking audience or, when aligning a microphone and a line recording, instruments heard only through the PA (e.g. synths). Detection works best with noise (e.g. drum synths) and worst with steady notes of simple waveforms.
  6. PFL or AFL the measurement bus.
  7. Match the volumes of the two tracks.
  8. Open the phase detector. Set its maximum time to the maximum expected delay; 1 meter of distance corresponds to roughly 3 milliseconds (sound travels at about 343 m/s).
  9. Play the session and note the detected delay. (If you want to double-check the reading offline, see the cross-correlation sketch after this list.)
    • If the “Worst” phase has a bigger “Value” than the “Best” most of the time, polarity is inverted on one of the tracks; correct it with the DAW’s polarity switches.
    • If the detected delay jumps all over the scale, enable the highpass filter to stop the detector being fooled by long bass notes. Start with a 200 Hz cutoff and experiment.
  10. Insert a delay plugin on the track with the least delay (the one captured closest to the sound source) and set its time to the detected delay. The phase detector should now report a “Best” delay close to 0.
  11. Have a listen (to the measurement bus) and watch the phase correlation meter while fine-tuning the delay time. The human brain is very good at detecting phase differences: if the volumes are matched and you hear the sound shifted off-center (on headphones or properly set up studio monitors), the delay setting is inexact. If unsure, trust the phase correlation meter; the higher the correlation, the better.
  12. That’s all: mute the measurement bus and clear the solo (PFL/AFL). Listen to the mixed tracks. If you’re confident about the result, disable the measurement bus to save some CPU cycles; the correlation performed by the phase detector is computationally expensive.
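
If you’d like to sanity-check the plugin’s reading (or have no phase detector plugin at hand), the same measurement can be done offline with cross-correlation. Below is a minimal Python sketch, assuming both tracks were exported as mono WAV files at the same sample rate; the file names are placeholders.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate, correlation_lags

# Placeholder file names: mono exports of the two tracks.
rate, close = wavfile.read("line_out.wav")  # track captured closest to the source
rate2, room = wavfile.read("room_mic.wav")  # delayed track (e.g. room microphone)
assert rate == rate2, "tracks must share a sample rate"

close = close.astype(np.float64)
room = room.astype(np.float64)

# The lag with the strongest correlation is the delay between the tracks.
corr = correlate(room, close, mode="full")
lags = correlation_lags(room.size, close.size, mode="full")
best = np.argmax(np.abs(corr))
lag = lags[best]

print(f"detected delay: {lag} samples = {1000.0 * lag / rate:.2f} ms")
# Step 8's rule of thumb: sound travels ~343 m/s, about 3 ms per meter.
print(f"equivalent distance: {343.0 * lag / rate:.2f} m")

# A negative peak means one track has inverted polarity (see step 9).
if corr[best] < 0:
    print("polarity appears inverted on one of the tracks")

# Step 11's check: after shifting by the detected lag, the normalized
# correlation of the overlapping parts should be close to 1.
if lag > 0:
    n = min(close.size, room.size - lag)
    print(f"correlation after alignment: {np.corrcoef(close[:n], room[lag:lag + n])[0, 1]:.3f}")
```

In practice you’d run this on the short, well-correlated fragment from step 5 rather than on whole takes, then enter the detected delay into the delay plugin from step 10.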

Imperfect source code: faders

I have written many programs, each to accomplish some specific task. I haven’t published them before because I thought they were too specific and that I should make them more generic first… which isn’t going to happen soon, because polishing a quick hack into a framework is a lot of work. However, the beauty of open source is that you can quickly hack my quick hack and make it useful for your use case, or clean it up and actually turn it into a serious thing (or I’ll do it when I retire). Whatever you do, it’s better than my work being wasted because of perfectionism.

faders-midi-osc-router started as x18faders, when I wanted to control a Behringer XR18 (an X18 in a rack case) using a MIDI controller. I also wanted the controller’s motorized faders to move on scene load, or when a mixer setting was changed from another device (a tablet running the X Air app). It was used during one concert of the band I play in, Maha Fixum, after which I understood that a digital mixer doesn’t give me enough freedom for making experimental music: I needed to change the source code each time we repatched our studio signal flow.
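
The core of the router is a small translation loop: MIDI control changes in, OSC messages out (and the reverse direction for the motorized faders). Here’s a minimal one-way sketch, assuming the mido and python-osc libraries and the X Air convention of float fader values over OSC on port 10024; the IP address and the CC-to-fader mapping are placeholders, not what the actual router uses.

```python
import mido
from pythonosc.udp_client import SimpleUDPClient

XR18_IP = "192.168.1.18"  # placeholder: your mixer's address
XR18_PORT = 10024         # the X Air series listens for OSC on this port

# Placeholder mapping: controller CCs 1-8 drive channel faders 1-8.
CC_TO_FADER = {cc: f"/ch/{cc:02d}/mix/fader" for cc in range(1, 9)}

client = SimpleUDPClient(XR18_IP, XR18_PORT)

# Open the default MIDI input; pass your controller's port name if needed.
with mido.open_input() as port:
    for msg in port:
        if msg.type == "control_change" and msg.control in CC_TO_FADER:
            # MIDI CC values are 0-127; X Air faders expect a 0.0-1.0 float.
            client.send_message(CC_TO_FADER[msg.control], msg.value / 127.0)
```

The motorized-fader half is the same idea in reverse: subscribe to the mixer’s OSC updates and translate them back into MIDI for the controller.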


Designing a signal processing environment (RFC) - part 1

This is the first post of a thinking-out-loud series about the signal processing framework and modular environment I’m developing: SignalDub.

Dubbing the Signals

What’s the functionality I need most when processing audio signals in the box (i.e. on a computer)? Dubbing. The word has multiple meanings, and all of them apply in this context.

I feel that not only music producers and sound engineers, but also visual artists, broadcast engineers… anyone working with multimedia… deserves a system that allows this kind of experimentation.

Free as in Freedom

That’s a pretty obvious requirement. You can’t have a hackable system if its core can’t be changed. Also, if the source code is kept secret, the knowledge used to develop it is wasted.

Extreme Modularity and Signal Loops

Most audio software today is modular: you can use whatever DAW you want with your soundcard, and insert whatever plugin you want on a track in that DAW. But have you ever tried to insert an EQ in a delay’s feedback path? Or to distort it?
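
As a concrete illustration, here’s a sketch in plain Python/NumPy of what an open loop would allow: a delay line whose feedback path runs through an arbitrary processor (a hypothetical soft clipper), the part a typical plugin keeps sealed inside itself.

```python
import numpy as np

def softclip(x: float) -> float:
    # Hypothetical processor patched into the feedback path.
    return float(np.tanh(x))

def feedback_delay(signal, delay_samples, feedback=0.6):
    # A delay whose feedback loop passes through softclip() on each round trip.
    buf = np.zeros(delay_samples)  # circular delay line
    out = np.empty_like(signal)
    write = 0
    for i, x in enumerate(signal):
        delayed = buf[write]  # the sample written delay_samples ago
        out[i] = x + delayed
        # The part a closed plugin won't let you touch: process the
        # signal *inside* the loop before it recirculates.
        buf[write] = softclip(x + feedback * delayed)
        write = (write + 1) % delay_samples
    return out

# A click through a 250 ms delay at 48 kHz: every repeat is distorted
# a little more than the previous one.
sr = 48000
click = np.zeros(2 * sr)
click[0] = 1.0
wet = feedback_delay(click, delay_samples=sr // 4)
```

The point of the signal loops requirement is that patching like this should be possible in the environment itself, not hard-coded inside a single plugin.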
