On Wed, Jan 25, 2012 at 11:46 AM, Peter Brinkmann peter.brinkmann@googlemail.com wrote:
Hi Chuck,
Check out the early bits of this thread --- various use cases already came up along the way: http://lists.puredata.info/pipermail/pd-dev/2012-01/017992.html. The short version is that libpd is being used in such a wide range of settings that you can come up with legitimate use cases for pretty much anything (single Pd instance shared between several threads, multiple Pd instances in one thread, and anything in between). At the level of the audio library, it's impossible to make good assumptions about threading.
Hi Peter
That's the part I really don't understand: I don't have a clear picture of how you want to be able to control or choose between those cases. I can see how there could be more capabilities tied to having multiple threads generally, but specifically I can't say--I have no clue.
I remember a conversation with IOhannes in August about multi-threading audio via a sub-canvas user interface object (the proposal was a thread~ object akin to block~). If all you're after is audio multi-threading, there's no need for multiple instances of Pd. Threads could be used to run a portion of the dsp chain asynchronously and then join/synchronize with Pd when finished.
I don't think a patch is the place where decisions about threading should be made. Threading is an implementation detail that users shouldn't have to worry about, and besides, whether you have anything to gain from threading will depend on a number of factors that users won't necessarily be able to control or even know about.
I have a different view. Every sort of use for Pd is like writing a program--you should assume Pd users are writing programs with every tool you give them. The flip side of having to control threading explicitly is that you get to control how finely grained the threading is. Putting it at the patching level is just the user interface, and it can work out nicely for grouping. Even if you have some automatic tools, you may still want explicit control through another available interface (e.g. for debugging).
What this would look like: add thread_prolog, thread_epilog, and thread_sync functions (rough sketch below). thread_prolog goes on the dsp chain before block_prolog; it starts a thread running the portion of the dsp chain contained within and returns the pointer to the function following the thread_epilog. thread_epilog goes after block_epilog; it waits for synchronization and returns.
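To make that concrete, here's a rough sketch of what those three chain entries might look like, assuming pthreads and Pd's perform-routine calling convention (a function taking and returning t_int *). The t_threadctx struct, its fields, and threadctx_run are all invented for illustration; none of this exists in Pd:

    #include <pthread.h>
    #include "m_pd.h"

    typedef t_int *(*t_perfn)(t_int *w);   /* perform-routine signature */

    /* hypothetical context shared by the three chain entries */
    typedef struct _threadctx
    {
        pthread_t tc_thread;    /* worker that runs the enclosed sub-chain    */
        t_int    *tc_subchain;  /* first word of the enclosed portion         */
        t_int    *tc_resume;    /* word just after the matching thread_epilog */
    } t_threadctx;

    static void *threadctx_run(void *z)
    {
        t_int *ip = ((t_threadctx *)z)->tc_subchain;
        while (ip)                          /* walk the enclosed chain */
            ip = (*(t_perfn)(*ip))(ip);
        return (0);
    }

    /* before block_prolog: start the worker, skip the main walk past it */
    t_int *thread_prolog(t_int *w)
    {
        t_threadctx *ctx = (t_threadctx *)(w[1]);
        pthread_create(&ctx->tc_thread, 0, threadctx_run, ctx);
        return (ctx->tc_resume);
    }

    /* after block_epilog: ends the worker's walk of the enclosed portion */
    t_int *thread_epilog(t_int *w)
    {
        return (0);
    }

    /* immediately before the first downstream data dependency: wait */
    t_int *thread_sync(t_int *w)
    {
        t_threadctx *ctx = (t_threadctx *)(w[1]);
        pthread_join(ctx->tc_thread, 0);
        return (w + 2);
    }

In this reading the actual waiting happens in thread_sync, placed as described in the next paragraph, while thread_epilog just terminates the worker's walk of the enclosed portion.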
What's the difficult part: you would need a good ordering of the dsp chain to take advantage of concurrency--each subcanvas with a thread~ object needs to kick off as early as possible, followed by objects that have no dependence on its output. Secondly, you'd need to put thread_sync on the dsp chain immediately before the first function with a data dependency on that output.
I believe it's much simpler than that. It should be enough to just do a topological sort of the signal processing graph; that'll tell you which objects are ready to run at any given time, and then you can parallelize the invocation of their perform functions (or not, depending on how many processors are available). I don't think there's any need to explicitly synchronize much; tools like OpenMP should be able to handle this implicitly.
Cheers,
Peter
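For illustration only, here's a minimal sketch of that idea, assuming the graph has already been sorted into levels of mutually independent nodes. None of these names (t_pnode, t_plevel, graph_tick) exist in Pd; t_perform just stands in for the perform-routine type:

    #include <omp.h>
    #include "m_pd.h"

    typedef t_int *(*t_perform)(t_int *w);   /* perform-routine signature */

    typedef struct _pnode
    {
        t_perform n_perform;    /* an object's perform function */
        t_int    *n_args;       /* its argument vector          */
    } t_pnode;

    typedef struct _plevel
    {
        t_pnode *l_nodes;   /* nodes whose inputs are all ready at this level */
        int      l_count;
    } t_plevel;

    /* Nodes within one level have no data dependencies on each other, so
       each level can run as a parallel loop; the implicit barrier at the
       end of the parallel for is the only synchronization between levels. */
    void graph_tick(t_plevel *levels, int nlevels)
    {
        for (int l = 0; l < nlevels; l++)
        {
            #pragma omp parallel for schedule(dynamic)
            for (int i = 0; i < levels[l].l_count; i++)
            {
                t_pnode *n = &levels[l].l_nodes[i];
                (*n->n_perform)(n->n_args);
            }
        }
    }

Building the levels is the topological sort step; whether running a level in parallel actually wins anything depends on block size and core count, which the runtime can decide.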
For that, the dspchain (a flat array of t_int) makes a very bad structure, so you'd want to rewrite a handful of functions and data structures around having multiple concurrent branches of computation. I actually really like this problem :D I can picture a linked list of dspchains to do it (rough sketch below), but the sort algorithm you settle on will really determine what the data structure ought to be.
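A hypothetical sketch of that structure, just to make the idea concrete (the name and fields are invented here): each branch is an ordinary flat dsp chain, branches that may run concurrently are linked together, and each points at the branch that has to wait for it:

    /* hypothetical: a list of dsp-chain branches that can run concurrently */
    typedef struct _dspbranch
    {
        t_int *b_chain;              /* a conventional flat dsp chain          */
        int    b_chainsize;          /* number of t_int words in b_chain       */
        struct _dspbranch *b_next;   /* next branch runnable at the same time  */
        struct _dspbranch *b_join;   /* branch that consumes this one's output
                                        and must wait until it finishes        */
    } t_dspbranch;

Each generation of branches would correspond to one level of the topological sort Peter describes.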
Rewriting dsp_tick() is nearly sacrilege to me... it's a beautiful bit of code, but that would have to be done according to whatever you do to dspchain.
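For reference, the loop in question is tiny; from memory it's roughly this (see d_ugen.c for the real thing):

    /* approximately what dsp_tick() does: walk the flat chain, where each
       entry's perform routine returns a pointer to the next entry */
    void dsp_tick(void)
    {
        if (dsp_chain)
        {
            t_int *ip;
            for (ip = dsp_chain; ip; ) ip = (*(t_perfroutine)(*ip))(ip);
            dsp_phase++;
        }
    }

Any branching in dspchain turns that single walk into a loop over branches, so the two rewrites go together.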
Chuck