Hello Pd community,
I've encountered an interesting behavior regarding CPU management in Pure
Data using [pd~], and I'd appreciate your insights or confirmations.
Here's my scenario:
I'm running a computationally intensive FFT analysis with [sigmund~]
(similar to the 17.partialtracer example, but without the data structures)
on a 4-second audio array. When it runs inside the main Pd patch (DSP
active, latency ~50 ms), the analysis triggers "Audio I/O Error" warnings
and audible dropouts. Increasing the latency further would make the problem
go away, but I'd like to find another solution.
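For context, the analysis chain is roughly this (the array name and the
sigmund~ parameters are simplified placeholders, not my exact settings):

[bang(                          <- start one analysis pass
|
[tabplay~ source-array]         <- play back the 4-second array
|
[sigmund~ -npts 2048 -hop 1024 tracks]
|
[print tracks]                  <- in the real patch: collect the partial data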
Initially, I attempted to resolve this by offloading the computation into a
subprocess via [pd~]. However, despite what seemed to be a correct
configuration (no audio signals shared, only control messages exchanged,
-nogui option used), I still observed audio dropouts and "Audio I/O Error"
warnings in the main patch.
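In case it helps, the parent side of my [pd~] setup was essentially this
(creation flags and start message as I understood them from pd~-help.pd;
the subprocess sends control data back through its [stdout] object only):

[start -nogui analysis.pd(      <- launch the subprocess
|
[pd~ -ninsig 0 -noutsig 0]      <- no signal channels across the boundary
|
(control messages arriving from the subprocess's [stdout])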
On the other hand, when I manually ran two completely separate Pd instances
(two distinct builds, 0.55-1 and 0.55-2, one dedicated to real-time audio
and the other solely to the FFT analysis, with no [pd~] involved), my
system (macOS, Apple Silicon M3) handled both processes simultaneously
without any audio dropouts or errors.
This behavior suggests to me that the subprocess started by [pd~] may not
be completely isolated at the OS level; if (as far as I understand) its
scheduler runs in lockstep with the parent's DSP clock, a heavy computation
there could still produce implicit timing interactions and hence the
dropouts. Conversely, two fully independent Pd processes avoid the problem
entirely... would two instances communicating over OSC be the recommended
approach?
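Concretely, I imagine wiring the two instances together with the vanilla
OSC objects over UDP, something like this (the port number and the OSC
address are placeholders):

analysis instance (sender):

[partial-track data as a list]
|
[oscformat partials]
|
[list prepend send]
|
[list trim]
|
[netsend -u -b]                 <- after a [connect localhost 9997( message

audio instance (receiver):

[netreceive -u -b 9997]
|
[oscparse]
|
[route partials]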
I'm curious whether others have had similar experiences, and whether anyone
can offer clarification, or suggest configurations that achieve better
isolation within a single Pd instance.
Thanks very much in advance for your thoughts!
Best regards,
Pierpaolo Barbiero