Time-aligning audio signals using free software

How to time-align (phase-align) audio tracks using a free-as-in-freedom, open source DAW and plugins.

If you have the same sound source recorded from different distances, or a mixer's line-out and room sound captured on a single multitrack recorder, time-aligning the recordings improves the sound quality and makes mixing and mastering easier. I'll assume that these are two separate tracks in your DAW.

You'll need the following free software: a DAW with busses and PFL/AFL soloing, a highpass filter or parametric EQ plugin, a phase detector plugin, a phase correlation meter, and a delay plugin. The procedure:

  1. Create a stereo bus for measurement.
  2. Send the tracks to the measurement bus, each track to one channel. If the tracks are stereo, downmix them to mono or send only one channel to the measurement bus.
  3. Insert a highpass filter or parametric EQ on the bus, but leave it bypassed for now.
  4. After the filter, insert the measurement plugins: a phase detector and a phase correlation meter.
  5. Find a fragment of your session in which the signals are likely to be most correlated, i.e. one with no outstanding sounds such as a talking audience or, in the case of aligning a microphone and a line recording, instruments heard only through the PA (e.g. synths). Detection works best with noise (e.g. drum synths) and worst with steady notes of simple wave shapes.
  6. PFL or AFL the measurement bus.
  7. Equalize the volumes of the tracks.
  8. Open the phase detector and set its maximum time to the maximum expected delay; 1 meter of distance corresponds to roughly 3 milliseconds (sound travels at about 343 m/s).
  9. Play the session and note the detected delay.
    • If the “Worst” phase has a bigger “Value” than the “Best” most of the time, polarity is inverted on one of the tracks; correct it using the DAW's polarity switches.
    • If the detected delay is jumping all over the scale, try enabling the highpass filter to keep the detector from being fooled by long bass notes. Start with a 200 Hz cutoff and experiment.
  10. Insert a delay plugin on the track with the least delay (the one captured closest to the sound source). Set its time to the detected delay. The phase detector should now report a “Best” delay close to 0.
  11. Have a listen (to the measurement bus) and watch the phase correlation meter while fine-tuning the delay time. The human brain is very good at detecting phase differences: with the volumes equalized, if you hear the sound shifted off center (on headphones or properly set up studio monitors), the delay setting is inexact. If unsure, trust the phase correlation meter - the bigger the correlation, the better.
  12. That’s all: mute the measurement bus and clear the solo (PFL/AFL). Listen to the mixed tracks. If you’re confident about the result, disable the measurement bus to save some CPU cycles; the correlation done by the phase detector is computationally expensive.
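The detection in steps 8-9 can be sketched in a few lines of Python. This is only an illustration of the cross-correlation principle a phase detector plugin relies on, assuming NumPy; the function name and parameters are my own, not from any particular plugin.

```python
import numpy as np

def estimate_delay(reference, delayed, sample_rate):
    """Estimate how many seconds `delayed` lags behind `reference`
    by finding the peak of their cross-correlation."""
    corr = np.correlate(delayed, reference, mode="full")
    # Index of the peak, converted to a lag in samples.
    lag = np.argmax(corr) - (len(reference) - 1)
    return lag / sample_rate

# Demo: a noise burst (detection works best with noise) delayed by 3 ms,
# roughly the delay added by 1 meter of microphone distance.
rate = 48000
rng = np.random.default_rng(0)
noise = rng.standard_normal(rate // 10)
delay_samples = int(0.003 * rate)
near = np.concatenate([noise, np.zeros(delay_samples)])
far = np.concatenate([np.zeros(delay_samples), noise])

print(estimate_delay(near, far, rate))  # close to 0.003
```

A real plugin does this continuously on short windows, which is why the reading jumps around when the material is poorly correlated.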

Designing a signal processing environment (RFC) - part 1

This is the first post of the thinking out loud series about the signal processing framework and modular environment I’m developing - SignalDub.

Dubbing the Signals

What’s the functionality I need most when processing audio signals in-the-box (i.e. using a computer)? Dubbing. It has multiple meanings, and all of them are valid in this context:

I feel that not only music producers and sound engineers, but also visual artists, broadcast engineers… anyone working with multimedia, deserves such a system that allows experimentation.

Free as in Freedom

That’s a pretty obvious requirement. You can’t have a hackable system if its core can’t be changed. Also, if the source code is kept secret, the knowledge used to develop it is wasted.

Extreme Modularity and Signal Loops

Most audio software today is modular. You can use whatever DAW you want with your soundcard, and insert whatever plugin you want on a track in the DAW. But have you ever tried to insert an EQ in a delay's feedback path? Or to distort it?
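To make the point concrete, here is a minimal Python sketch (function name and parameters are my own invention, not part of SignalDub) of the routing a fixed-topology delay plugin usually can't be patched into: a feedback delay whose feedback path runs through a one-pole lowpass filter, so each echo comes back darker than the last.

```python
import numpy as np

def feedback_delay(signal, delay_samples, feedback=0.5, damping=0.3):
    """Feedback delay with a one-pole lowpass filter placed
    *inside* the feedback loop."""
    buf = np.zeros(delay_samples)   # circular delay line
    out = np.empty_like(signal)
    lp = 0.0                        # lowpass filter state
    for n, x in enumerate(signal):
        delayed = buf[n % delay_samples]
        # The filter processes only what is fed back, so every
        # pass around the loop removes more high frequencies.
        lp += damping * (delayed - lp)
        buf[n % delay_samples] = x + feedback * lp
        out[n] = x + delayed
    return out
```

In a conventional DAW you can insert a plugin before or after a delay, but not inside its feedback loop; an environment with arbitrary signal loops makes this a trivial patch instead of a special-purpose plugin.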
