<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type="text/xsl" href="assets/xml/rss.xsl" media="all"?><rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>LUMIFAZA</title><link>https://info.lumifaza.org/</link><description>audiovisual research | party engineering</description><atom:link href="https://info.lumifaza.org/rss.xml" rel="self" type="application/rss+xml"></atom:link><language>en</language><copyright>Contents © 2025 &lt;a href="mailto:teo@lumifaza.org"&gt;Teo&lt;/a&gt; </copyright><lastBuildDate>Sun, 14 Dec 2025 21:08:04 GMT</lastBuildDate><generator>Nikola (getnikola.com)</generator><docs>http://blogs.law.harvard.edu/tech/rss</docs><item><title>Importing server using initramfs shell</title><link>https://info.lumifaza.org/2024/10/importing-server-using-initramfs-shell/</link><dc:creator>Teo</dc:creator><description>&lt;div&gt;&lt;p&gt;When all you have is a remote KVM (&lt;a href="https://en.wikipedia.org/wiki/KVM_switch#KVM_over_IP_(IPKVM)"&gt;this&lt;/a&gt;,
not &lt;a href="https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine"&gt;this&lt;/a&gt;),
the management console doesn’t have an image import/export feature, and you
need to migrate a VPS.&lt;/p&gt;
&lt;p&gt;Fortunately, the usual Linux boot process has a stage that doesn’t
depend on the OS partition, because everything is running from RAM -
logically enough, it is called &lt;strong&gt;initramfs&lt;/strong&gt;. Let’s use it
to import the image!&lt;/p&gt;
&lt;h3 id="on-donor"&gt;On donor:&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;p&gt;open a port in the firewall, ideally only to the acceptor’s IP.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;freeze all filesystems mounted from the local disk:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;mount / -o remount,ro&lt;/code&gt; - repeat for each mounted partition&lt;/li&gt;
&lt;li&gt;or&lt;/li&gt;
&lt;li&gt;&lt;code&gt;echo u &amp;gt; /proc/sysrq-trigger&lt;/code&gt; (emergency remount of all filesystems read-only)&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;start fake HTTP server:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;code&gt;(printf 'HTTP/1.0 200 OK\r\n\r\n'; cat /dev/sda) | pv | nc -l -p 1234&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;&lt;/li&gt;
&lt;/ul&gt;
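&lt;p&gt;The acceptor side is past the excerpt cut, but the header trick can be rehearsed locally. A sketch with made-up file names - the acceptor command and DONOR_IP below are my assumptions, not taken from the post:&lt;/p&gt;

```shell
# Hypothetical acceptor side: in an initramfs shell, BusyBox wget can
# stream the donor's disk straight onto the local disk (DONOR_IP is a
# placeholder):
#   wget -q -O - http://DONOR_IP:1234/ | dd of=/dev/sda bs=1M
#
# Local rehearsal with files standing in for the disks. dd skips the
# 19-byte fake HTTP header the donor prepends; a real HTTP client
# strips it automatically.
printf 'raw disk bytes' | dd of=donor.img status=none
(printf 'HTTP/1.0 200 OK\r\n\r\n'; cat donor.img) | dd of=stream.bin status=none
dd if=stream.bin of=acceptor.img bs=1 skip=19 status=none
if cmp -s donor.img acceptor.img; then echo images match; fi
```

&lt;p&gt;Prints &lt;code&gt;images match&lt;/code&gt; when the payload survives the round trip intact.&lt;/p&gt;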
&lt;p&gt;&lt;a href="https://info.lumifaza.org/2024/10/importing-server-using-initramfs-shell/"&gt;Read more…&lt;/a&gt; (1 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>Linux</category><guid>https://info.lumifaza.org/2024/10/importing-server-using-initramfs-shell/</guid><pubDate>Tue, 08 Oct 2024 11:58:33 GMT</pubDate></item><item><title>Designing a signal processing environment (RFC) - part 2</title><link>https://info.lumifaza.org/2024/07/designing-a-signal-processing-environment-rfc-part-2/</link><dc:creator>Teo</dc:creator><description>&lt;div&gt;&lt;p&gt;In these posts I’m thinking out loud how a software signal processing
environment I’m developing should look and behave. Check out &lt;a href="https://info.lumifaza.org/2022/09/Designing-a-signal-processing-environment-RFC-part-1/"&gt;part
1&lt;/a&gt; if you haven’t seen it yet.&lt;/p&gt;
&lt;h2 id="more-on-signal-loops"&gt;More on Signal Loops&lt;/h2&gt;
&lt;p&gt;Open source modular audio processors already exist, but none of
them fulfils my requirements.&lt;/p&gt;
&lt;p&gt;Most of them (e.g. &lt;a href="https://www.bespokesynth.com/"&gt;BespokeSynth&lt;/a&gt;, &lt;a href="https://puredata.info/"&gt;Pure Data&lt;/a&gt;, &lt;a href="https://jackaudio.org/"&gt;JACK&lt;/a&gt;) are restricted to processing
audio in blocks of tens to thousands of samples (depending on how much
overhead your system can tolerate while still working in real time). As long
as the graph has an evident dependency order, that is not a problem -
you can run the modules in that order and there will be no apparent delay
other than the obvious block-size delay present in all
general-purpose-computer-based DSP systems.&lt;/p&gt;
&lt;p&gt;However, if you have a signal loop, there is no dependency order:
some modules simply have to be fed one block late. That is equivalent to
inserting a delay line one block long into the loop, so it will influence
the sound! &lt;a href="https://info.lumifaza.org/2022/09/Designing-a-signal-processing-environment-RFC-part-1/#extreme-modularity-and-signal-loops"&gt;I’ve
written about it in part 1&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://vcvrack.com/"&gt;VCV Rack&lt;/a&gt;/&lt;a href="https://github.com/DISTRHO/Cardinal"&gt;Cardinal&lt;/a&gt; works
differently, sacrificing performance. It processes one sample at a time.
Meaning that signal loops cause much shorter delays. They can be made
even shorter when oversampling is enabled. But they still exist and
still influence the sound.&lt;/p&gt;
&lt;h2 id="going-down-to-zero-latency-in-loops.-new-plugin-api"&gt;Going down
to zero latency in loops. New plugin API&lt;/h2&gt;
&lt;p&gt;I came across a &lt;a href="https://urs.silvrback.com/zero-delay-feedback"&gt;description of this
technique&lt;/a&gt; on &lt;a href="https://urs.silvrback.com/"&gt;Urs Heckmann’s
(u-he) blog&lt;/a&gt;. I haven’t tested it yet, but it looks promising: we
can process the loop multiple times without advancing the clock, until
the signals converge.&lt;/p&gt;
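&lt;p&gt;A minimal sketch of that idea in Python - a toy loop I made up for illustration (a one-pole lowpass with coefficient &lt;code&gt;a&lt;/code&gt; inside a feedback path with gain &lt;code&gt;g&lt;/code&gt;; none of this code is from the post or from u-he): each output sample is recomputed without advancing the clock until successive estimates agree, and only then is the filter state committed.&lt;/p&gt;

```python
import math

def process_loop(x, g=0.5, a=0.3, max_iters=50, tol=1e-9):
    """Solve y[n] = x[n] + g * lowpass(y[n]) per sample by fixed-point iteration."""
    out = []
    lp_state = 0.0
    for xn in x:
        y = 0.0  # initial guess for this sample
        for _ in range(max_iters):
            # what the one-pole lowpass would output if fed y right now
            lp_out = lp_state + a * (y - lp_state)
            y_new = xn + g * lp_out
            if math.isclose(y_new, y, abs_tol=tol):
                y = y_new
                break
            y = y_new
        # converged: commit the lowpass state, i.e. advance the clock
        lp_state = lp_state + a * (y - lp_state)
        out.append(y)
    return out
```

&lt;p&gt;The iteration converges here because the per-sample loop gain &lt;code&gt;g*a&lt;/code&gt; is below 1; stiffer loops need damping or an analytic solution of the implicit equation.&lt;/p&gt;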
&lt;p&gt;&lt;a href="https://info.lumifaza.org/2024/07/designing-a-signal-processing-environment-rfc-part-2/"&gt;Read more…&lt;/a&gt; (2 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>audio</category><category>blog</category><category>programming</category><category>signaldub</category><category>thinking</category><guid>https://info.lumifaza.org/2024/07/designing-a-signal-processing-environment-rfc-part-2/</guid><pubDate>Fri, 05 Jul 2024 17:33:33 GMT</pubDate></item><item><title>Time-aligning audio signals using free software</title><link>https://info.lumifaza.org/2023/11/Delay-aligning-audio-signals-using-free-software/</link><dc:creator>Teo</dc:creator><description>&lt;div&gt;&lt;p&gt;How to time-align (phase-align) audio tracks using free as in
freedom, open source DAW and plugins.&lt;/p&gt;
&lt;p&gt;It improves the sound quality and makes mixing &amp;amp; mastering easier
when you have the same sound source recorded from different distances, or
the mixer’s line-out and the room sound (captured onto a single multitrack
recorder). I’ll assume that these are two separate tracks in your
DAW.&lt;/p&gt;
&lt;p&gt;I’m using the following software:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;DAW - &lt;a href="https://ardour.org"&gt;Ardour&lt;/a&gt; (any DAW with flexible
routing and sample-accurate playback will work)&lt;p&gt;&lt;a href="https://info.lumifaza.org/2023/11/Delay-aligning-audio-signals-using-free-software/"&gt;Read more…&lt;/a&gt; (2 min remaining to read)&lt;/p&gt;&lt;/li&gt;&lt;/ul&gt;&lt;/div&gt;</description><category>Ardour</category><category>audio</category><category>FOSS</category><category>linux audio</category><category>LSP plugins</category><category>tutorial</category><category>x42 plugins</category><guid>https://info.lumifaza.org/2023/11/Delay-aligning-audio-signals-using-free-software/</guid><pubDate>Tue, 07 Nov 2023 21:11:28 GMT</pubDate></item><item><title>Imperfect source codes: faders</title><link>https://info.lumifaza.org/2022/11/Imperfect-source-codes-faders/</link><dc:creator>Teo</dc:creator><description>&lt;div&gt;&lt;p&gt;I have written many programs to accomplish a specific task. I haven’t
published them before because I thought they were too specific and that I
should make them more generic… which isn’t going to happen soon, because
polishing a quick hack into a framework is a lot of work. However,
the beauty of open source is that you can quickly hack my quick hack and
make it useful in your use case, or clean it up and actually make it a
serious thing (or I’ll do it when I retire). Whatever you do, it’s
better than my work being wasted because of perfectionism.&lt;/p&gt;
&lt;p&gt;&lt;a href="https://gitlab.com/teodly/faders-midi-osc-router"&gt;&lt;code&gt;faders-midi-osc-router&lt;/code&gt;&lt;/a&gt;
started as &lt;code&gt;x18faders&lt;/code&gt; when I wanted to control a Behringer
XR18 with a MIDI controller. I also wanted the controller’s motorized
fader to move on scene load, or when a mixer setting was changed from
another device (a tablet with the X Air app). It was used during one concert
of the band I play in, &lt;a href="https://mahafixum.parasite.studio/"&gt;Maha Fixum&lt;/a&gt; (after which I
understood that a digital mixer doesn’t give me enough freedom for
making experimental music - I had to change the source code each
time we repatched our studio signal flow).&lt;/p&gt;
&lt;p&gt;&lt;a href="https://info.lumifaza.org/2022/11/Imperfect-source-codes-faders/"&gt;Read more…&lt;/a&gt; (1 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>blog</category><category>imperfection</category><category>MIDI</category><category>OSC</category><category>programming</category><category>release</category><guid>https://info.lumifaza.org/2022/11/Imperfect-source-codes-faders/</guid><pubDate>Sun, 20 Nov 2022 10:00:00 GMT</pubDate></item><item><title>Designing a signal processing environment (RFC) - part 1</title><link>https://info.lumifaza.org/2022/09/Designing-a-signal-processing-environment-RFC-part-1/</link><dc:creator>Teo</dc:creator><description>&lt;div&gt;&lt;p&gt;This is the first post of the thinking out loud series about the
signal processing framework and modular environment I’m developing -
SignalDub.&lt;/p&gt;
&lt;h2 id="dubbing-the-signals"&gt;Dubbing the Signals&lt;/h2&gt;
&lt;p&gt;What’s the functionality I need most when processing audio signals
in-the-box (i.e. using a computer)? Dubbing. The word has multiple
meanings, and all of them are valid in this context:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;recording&lt;/li&gt;
&lt;li&gt;adding effects in a creative way, using them as instruments (delay
and reverb in particular)&lt;/li&gt;
&lt;li&gt;experimenting, using devices &lt;a href="http://www.interruptor.ch/dub_fx.shtml"&gt;not-always-the-way-they-were-designed&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;I feel that not only music producers and sound engineers, but also
visual artists, broadcast engineers… anyone working with multimedia,
deserves a system that allows such experimentation.&lt;/p&gt;
&lt;h2 id="free-as-in-freedom"&gt;Free as in Freedom&lt;/h2&gt;
&lt;p&gt;That’s a pretty obvious requirement. You can’t have a hackable system
if its core can’t be changed. Also, if the source code is kept secret,
the knowledge used to develop it is wasted.&lt;/p&gt;
&lt;h2 id="extreme-modularity-and-signal-loops"&gt;Extreme Modularity and
Signal Loops&lt;/h2&gt;
&lt;p&gt;Most audio software today is modular. You can use whatever DAW you
want with your soundcard, and insert whatever plugin you want on a track
in the DAW. But have you ever tried to insert an EQ into a delay’s feedback
path? Or to distort it?&lt;/p&gt;
&lt;p&gt;&lt;a href="https://info.lumifaza.org/2022/09/Designing-a-signal-processing-environment-RFC-part-1/"&gt;Read more…&lt;/a&gt; (6 min remaining to read)&lt;/p&gt;&lt;/div&gt;</description><category>audio</category><category>blog</category><category>programming</category><category>signaldub</category><category>thinking</category><guid>https://info.lumifaza.org/2022/09/Designing-a-signal-processing-environment-RFC-part-1/</guid><pubDate>Fri, 30 Sep 2022 13:29:48 GMT</pubDate></item></channel></rss>