On Wed, Jan 25, 2012 at 11:46 AM, Peter Brinkmann peter.brinkmann@googlemail.com wrote:
Hi Chuck, Check out the early bits of this thread --- various use cases already came up along the way: http://lists.puredata.info/pipermail/pd-dev/2012-01/017992.html. The short version is that libpd is being used in such a wide range of settings that you can come up with legitimate use cases for pretty much anything (single Pd instance shared between several threads, multiple Pd instances in one thread, and anything in between). At the level of the audio library, it's impossible to make good assumptions about threading.
Hi Peter
That's the part I really don't understand, and I don't really have a clear picture of how you want to be able to control/choose between those cases. I can also see how there could be more capabilities tied to having multiple threads generally. But specifically, I can't say. I have no clue.
I remember a conversation with IOhannes in August about multi-threading audio via a sub-canvas user-interface object (a proposed thread~, akin to block~). If all you're after is audio multi-threading, there's no need for multiple instances of Pd. Threads could be used to start a portion of the dsp chain running asynchronously, and then join/synchronize with Pd when finished.
I don't think a patch is the place where decisions about threading should be made. Threading is an implementation detail that users shouldn't have to worry about, and besides, whether you have anything to gain from threading will depend on a number of factors that users won't necessarily be able to control or even know about.
I have a different view. Every use of Pd is like writing a program--you should assume Pd users are writing programs with every tool you give them. The flipside of having to control threading explicitly is that you get to control how finely grained the threading is. Putting it at the patching level is just the user interface, and it can work out nicely for grouping. Even if you have some automatic tools, you may still want explicit control through another available interface (e.g. for debugging).
What this would look like: Add thread_prolog, thread_epilog, and thread_sync functions. The thread_prolog function occurs before block_prolog; it starts a thread running the portion of the dsp chain contained within, and returns a pointer to the function following the thread_epilog. The thread_epilog function occurs after block_epilog; it waits for synchronization and returns.
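For concreteness, here's a rough pthreads sketch of how that pair might look as dsp-chain entries, using Pd's perform-routine convention (a t_int *fn(t_int *w) that returns a pointer past its own arguments). All the names here (t_threadctx and friends) are made up, and it assumes the enclosed sub-chain is terminated dsp_done-style by a routine returning 0, with the worker thread created once when the dsp graph is compiled (not shown):

#include <pthread.h>
#include "m_pd.h"

typedef struct _threadctx
{
    pthread_t thread;        /* worker that runs the enclosed sub-chain */
    t_int *subchain;         /* first word of the enclosed dsp chain */
    pthread_mutex_t mutex;
    pthread_cond_t start, done;
    int running;
} t_threadctx;

static void *threadctx_main(void *z)
{
    t_threadctx *x = (t_threadctx *)z;
    for (;;)
    {
        pthread_mutex_lock(&x->mutex);
        while (!x->running)
            pthread_cond_wait(&x->start, &x->mutex);
        pthread_mutex_unlock(&x->mutex);
        /* run the enclosed chain exactly the way dsp_tick does */
        t_int *w = x->subchain;
        while (w)
            w = (*(t_perfroutine)(*w))(w);
        pthread_mutex_lock(&x->mutex);
        x->running = 0;
        pthread_cond_signal(&x->done);
        pthread_mutex_unlock(&x->mutex);
    }
    return 0;
}

/* w[1]: t_threadctx *; w[2]: chain address just past the thread_epilog */
static t_int *thread_prolog(t_int *w)
{
    t_threadctx *x = (t_threadctx *)w[1];
    pthread_mutex_lock(&x->mutex);
    x->running = 1;
    pthread_cond_signal(&x->start);    /* kick off the worker */
    pthread_mutex_unlock(&x->mutex);
    return (t_int *)w[2];              /* main thread skips the sub-chain */
}

/* w[1]: t_threadctx *; wait here until the worker's outputs are ready */
static t_int *thread_epilog(t_int *w)
{
    t_threadctx *x = (t_threadctx *)w[1];
    pthread_mutex_lock(&x->mutex);
    while (x->running)
        pthread_cond_wait(&x->done, &x->mutex);
    pthread_mutex_unlock(&x->mutex);
    return w + 2;
}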
What's the difficult part: You would need a good ordering of the dsp chain to take advantage of concurrency--each subcanvas with a thread~ object needs to kick off as early as possible, followed by objects that have no dependence on its output. Secondly, you'd need to put thread_sync on the dsp chain immediately before you encounter functions with data dependencies.
I believe it's much simpler than that. It should be enough to just do a topological sort of the signal processing graph; that'll tell you which objects are ready to run at any given time, and then you can parallelize the invocation of their perform functions (or not, depending on how many processors are available). I don't think there's any need to explicitly synchronize much; tools like OpenMP should be able to handle this implicitly. Cheers, Peter
For that--the dspchain (an array of int*) makes a very bad structure. So, you'll want to re-write a handful of functions and data structures around having multiple concurrent branches of computations. I actually really like this problem :D I can picture a linked list of dspchains to do this. But... the description of the sort algorithm really will determine what the data structure ought to be.
Re-writing dsp_tick() is nearly sacrilege to me... beautiful bit of code there, but that would have to be done according to whatever you do to dspchain.
Chuck
On Wed, Jan 25, 2012 at 5:38 PM, Charles Henry czhenry@gmail.com wrote:
On Wed, Jan 25, 2012 at 11:46 AM, Peter Brinkmann peter.brinkmann@googlemail.com wrote:
Hi Chuck, Check out the early bits of this thread --- various use cases already came up along the way: http://lists.puredata.info/pipermail/pd-dev/2012-01/017992.html. The short version is that libpd is being used in such a wide range of settings that you can come up with legitimate use cases for pretty much anything (single Pd instance shared between several threads, multiple Pd instances in one thread, and anything in between). At the level of the audio library, it's impossible to make good assumptions about threading.
Hi Peter
That's the part I really don't understand, and I don't really have a clear picture of how you want to be able to control/choose between those cases.
Neither do I. That's sort of the point here ;) When it comes to libpd, all sorts of common assumptions go right out the window; a libpd-based application may not be interactive, or real-time, and it may not even run on any familiar hardware (there have been reports of libpd running on an embedded system, for instance). The upshot is that libpd should only be in the business of processing samples and exchanging messages with client code; all other decisions should be made at another level, where more specific information is available.
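To make that concrete: the entire contract fits in a few lines of host code, something like this (using the calls roughly as they stand in z_libpd.h; the patch name and buffer sizes are only for illustration, and error handling is omitted):

#include "z_libpd.h"

int main(void)
{
    float inbuf[64] = {0}, outbuf[128];   /* one tick: 1 in, 2 out */

    libpd_init();
    libpd_init_audio(1, 2, 44100);

    /* the equivalent of [; pd dsp 1( -- switch dsp on */
    libpd_start_message(1);
    libpd_add_float(1.0f);
    libpd_finish_message("pd", "dsp");

    libpd_openfile("test.pd", ".");       /* patch name made up */

    /* the client decides when and on which thread this runs:
       a real-time callback, an offline loop, whatever fits */
    for (int i = 0; i < 1000; i++)
        libpd_process_float(1, inbuf, outbuf);
    return 0;
}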
I can also see how there could be more capabilities tied to having multiple threads generally. But specifically, I can't say. I have no clue.
I remember a conversation with IOhannes in August about multi-threading audio via a sub-canvas user-interface object (a proposed thread~, akin to block~). If all you're after is audio multi-threading, there's no need for multiple instances of Pd. Threads could be used to start a portion of the dsp chain running asynchronously, and then join/synchronize with Pd when finished.
I don't think a patch is the place where decisions about threading should be made. Threading is an implementation detail that users shouldn't have to worry about, and besides, whether you have anything to gain from threading will depend on a number of factors that users won't necessarily be able to control or even know about.
I have a different view. Every use of Pd is like writing a program--you should assume Pd users are writing programs with every tool you give them. The flipside of having to control threading explicitly is that you get to control how finely grained the threading is. Putting it at the patching level is just the user interface, and it can work out nicely for grouping. Even if you have some automatic tools, you may still want explicit control through another available interface (e.g. for debugging).
I don't think users have anything to gain from fine-grained control of threads. That seems like an optimization hint that may or may not be helpful, depending on a lot of factors that are not obvious and will differ from machine to machine. In any case, I don't want to have to think about threads when patching any more than I want to think about, say, NEON optimizations.
I believe it's much simpler than that. It should be enough to just do a topological sort of the signal processing graph; that'll tell you which objects are ready to run at any given time, and then you can parallelize the invocation of their perform functions (or not, depending on how many processors are available). I don't think there's any need to explicitly synchronize much; tools like OpenMP should be able to handle this implicitly. Cheers, Peter
For that--the dspchain (an array of int*) makes a very bad structure. So, you'll want to re-write a handful of functions and data structures around having multiple concurrent branches of computations. I actually really like this problem :D I can picture a linked list of dspchains to do this. But... the description of the sort algorithm really will determine what the data structure ought to be.
Hmm, I think that's a pretty standard scheduling problem. All the necessary information is already in Pd, and it's being used when dsp_chain is computed. It's just a matter of representing it as a dependency tree rather than a list, and then traversing the tree in the correct order.
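Schematically, with hypothetical names, and assuming the sort has already grouped the ugens into "levels" whose members are mutually independent:

#include <omp.h>
#include "m_pd.h"

/* levels[l] holds the chain words of all ugens that become ready
   once level l-1 has finished; width[l] is how many there are */
void dsp_tick_parallel(t_int ***levels, int *width, int nlevels)
{
    for (int l = 0; l < nlevels; l++)
    {
        /* ugens within one level are independent, so their perform
           routines can run concurrently */
        #pragma omp parallel for schedule(dynamic)
        for (int i = 0; i < width[l]; i++)
        {
            t_int *w = levels[l][i];
            (*(t_perfroutine)(*w))(w);
        }
        /* implicit OpenMP barrier here: the next level's inputs
           are now ready */
    }
}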
The real question is whether there's anything to gain from this at all, or whether the overhead from parallelization will destroy any gains. I always remember the cautionary tale of a complex system that was carefully designed to work with an arbitrary number n of threads, until it was profiled and the designers found that it works best when n == 1. Cheers, Peter
On Wed, Jan 25, 2012 at 5:32 PM, Peter Brinkmann peter.brinkmann@googlemail.com wrote:
I don't think users have anything to gain from fine-grained control of threads. That seems like an optimization hint that may or may not be helpful, depending on a lot of factors that are not obvious and will differ from machine to machine. In any case, I don't want to have to think about threads when patching any more than I want to think about, say, NEON optimizations.
I'm still making the case here: Suppose you're writing a patch and you run up against the limitations of a single-threaded process. Then, you put some portion into a sub-patch and drop in a "thread~" object. You're able to selectively add the functionality where it matters to you *and* only when you actually need it.
The generalizable case is much preferable, I agree, but as you say further on, parallelization might incur significant overhead--it may not be appropriate for all applications.
The design rationale for PdCUDA (in progress..grumble) is to expose the programming costs through benchmarks and measurement tools, while providing user-level control. It's a different sort of problem--where mixtures of different kinds of processors are concerned, the application being designed may not be appropriate for one kind (e.g. recursive filters on CUDA will run slowly, so just don't put them there!).
Again... my head's in audio. I'm still puzzling over the other ideas on topic.
I believe it's much simpler than that. It should be enough to just do a topological sort of the signal processing graph; that'll tell you which objects are ready to run at any given time, and then you can parallelize the invocation of their perform functions (or not, depending on how many processors are available). I don't think there's any need to explicitly synchronize much; tools like OpenMP should be able to handle this implicitly. Cheers, Peter
For that--the dspchain (an array of int*) makes a very bad structure. So, you'll want to re-write a handful of functions and data structures around having multiple concurrent branches of computations. I actually really like this problem :D I can picture a linked list of dspchains to do this. But... the description of the sort algorithm really will determine what the data structure ought to be.
Hmm, I think that's a pretty standard scheduling problem. All the necessary information is already in Pd, and it's being used when dsp_chain is computed. It's just a matter of representing it as a dependency tree rather than a list, and then traversing the tree in the correct order.
I'm going to expand on this a bit here (I think we're on the same page, but that just doesn't say enough about it).
For clarification/scope (and correct me if I'm wrong here), the algorithm in d_ugen.c works this way within each dspcontext (see functions ugen_done_graph and ugen_doit):

1. Create a list of all ugens in the current context.
2. Loop over the list, and for each ugen with no inlet connections, call ugen_doit--which calls the "dsp" function and proceeds depth-first, scheduling other connected ugens with no unfilled inlets.
3. If any ugens weren't scheduled, throw an error, and make sure outlet signals are new and set to zero in the parent context.
4. Delete all the ugens in the graph.
The dependency information is currently used only as an intermediate state while creating the dsp_chain structure (which itself retains no strict dependency information).
The tree structure needs to be generated in place of a single dsp_chain. It would just be made up of short dsp_chain structures, each one representing a depth-first branch, within which the ordering is essential--these are strictly serial portions, run in a single thread (yay!--the code from dsp_tick goes here!). Then, on a higher level, you dispatch/schedule those branches. At that higher level (the tree), you need to represent the dependencies between branches, which (at first guess) is a doubly-linked list with the breadth-wise links being potentially concurrent.
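As a first stab at that data structure (all names made up, of course):

#include "m_pd.h"

typedef struct _dspbranch
{
    t_int *chain;                  /* short serial chain, 0-terminated */
    int nparents;                  /* branches whose outputs we need */
    int ndone;                     /* parents finished so far; dispatch
                                      when ndone == nparents, reset
                                      every tick */
    struct _dspbranch **children;  /* branches that consume our output */
    int nchildren;
    struct _dspbranch *next;       /* breadth-wise link: potentially
                                      concurrent siblings */
} t_dspbranch;

/* running one branch is just the dsp_tick inner loop, untouched */
static void dspbranch_run(t_dspbranch *b)
{
    t_int *w = b->chain;
    while (w)
        w = (*(t_perfroutine)(*w))(w);
}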
hmm... I guess that won't work for block_prolog and epilog as written.... but it would be fine for most perform routines.
You wouldn't be using the dependency information in real time to make scheduling decisions--you'd just generate, up front, the new data structure that tells how the program runs.
Now, that description neglects other concerns I know Miller has wrestled with--threadsafe delwrite/delread, tabwrite/tabread, throw/catch, more?
The real question is whether there's anything to gain from this at all, or whether the overhead from parallelization will destroy any gains. I always remember the cautionary tale of a complex system that was carefully designed to work with an arbitrary number n of threads, until it was profiled and the designers found that it works best when n == 1.
When talking about cluster computing, I had someone once ask: "Is that a case where the whole is greater than the sum of its parts?" "It's less. Always less."
Chuck
I'm really happy to see this conversation.
On Fri, Jan 27, 2012 at 7:45 AM, Charles Henry czhenry@gmail.com wrote:
On Wed, Jan 25, 2012 at 5:32 PM, Peter Brinkmann peter.brinkmann@googlemail.com wrote:
I don't think users have anything to gain from fine-grained control of threads. That seems like an optimization hint that may or may not be helpful, depending on a lot of factors that are not obvious and will differ from machine to machine. In any case, I don't want to have to think about threads when patching any more than I want to think about, say, NEON optimizations.
I'm still making the case here: Suppose you're writing a patch and you run up against the limitations of a single-threaded process. Then, you put some portion into a sub-patch and drop in a "thread~" object. You're able to selectively add the functionality where it matters to you *and* only when you actually need it.
Isn't this problem addressed with the [pd~] object? It runs its patches in its own process instead of a thread (I'm not sure why), but it will do what you're describing, no?
The generalizable case is much more preferrable, I agree, but as you say further on, you might develop an application that incurs significant overhead--and may not be appropriate for all applications.
I see the next important step as making the general cases easier to handle. A per-thread context such as IOhannes and Peter describe above seems like the best approach to allowing a program to run multiple instances of pd in a much more predictable manner, while it still allows for backwards compatibility (via a default 'legacy' context). I see parallel processing as a different topic, although it will be easier to implement once the static variables are taken care of.
Actually, I would sum up the thread slightly differently. We've touched on three different topics: support for multiple instances of Pd, a potential refactoring of Pd on top of a library like libpd, and support for concurrency. As I see it, those three issues are largely orthogonal to one another. In particular, I'd rather not entangle multiple instances with multiple threads.
As far as libpd is concerned, the most important part is to achieve support for multiple instances. Tying instances to threads wouldn't be too helpful because there are lots of legitimate use cases where one thread needs multiple instances, as well as use cases where one instance is shared between threads.
The next step would be a refactoring of Pd, towards a more portable user interface. There's been an ongoing thread at Pd Everywhere on porting the UI to mobile devices ( http://noisepages.com/groups/pd-everywhere/forum/topic/cross-platform-mobile...), and I wrote up a few thoughts on my blog ( http://nettoyeur.noisepages.com/2012/01/refactoring-pure-data/).
Support for concurrency comes in third on my list. I already outlined most of my concerns in previous messages, and I also figure that this should be tabled until the other two problems have been solved. Cheers, Peter
On 2012-01-26 at 14:45:00, Charles Henry wrote:
When talking about cluster computing, I had someone once ask: "Is that a case where the whole is greater than the sum of its parts?" "It's less. Always less."
Depends on how you count it. You may also see it as a bunch of computers in which 0 computer can do task T in time N, but they can join together to form 1 (or more) computer(s) that can do task T in time N or less. In that sense, it's infinitely more powerful. This way of seeing it is much more important in realtime apps than in batch-compute-over-the-weekend apps.
It's like how one ninja turtle alone can't beat a certain evil monster, but with teamwork, they can. ;)
On 2/11/12, Mathieu Bouchard matju@artengine.ca wrote:
On 2012-01-26 at 14:45:00, Charles Henry wrote:
When talking about cluster computing, I had someone once ask: "Is that a case where the whole is greater than the sum of its parts?" "It's less. Always less."
Depends on how you count it. You may also see it as a bunch of computers in which 0 computer can do task T in time N, but they can join together to form 1 (or more) computer(s) that can do task T in time N or less. In that sense, it's infinitely more powerful. This way of seeing it is much more important in realtime apps than in batch-compute-over-the-weekend apps.
It's like how one ninja turtle alone can't beat a certain evil monster, but with teamwork, they can. ;)
You just always lose on efficiency whenever you use several threads or multiple nodes. Best case is "less than or equal" to the sum of its parts, and equal only when all the things you want to do are independent.
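That's essentially Amdahl's law: if a fraction p of the work can be spread over n workers while the rest stays serial, the best-case speedup is

    S(n) = 1 / ((1 - p) + p/n)  <=  1 / (1 - p)

so the efficiency S(n)/n stays below 1 whenever p < 1. You only break even when everything is independent (p = 1) and communication costs nothing.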
It's easy to see the potential for doing fast calculations by building a cluster... and then get disappointed by how much of it gets wasted. Look at that: a user just put 64 threads on one node and it spends all its time context-switching. Erm... the /home filesystem is where one user was just trying to write 500 output files at once, and no one has been able to log in for hours.
I'll go back to wasting my time, and see if I can make it parallel ;)
On 2012-02-13 at 21:15:00, Charles Henry wrote:
On 2/11/12, Mathieu Bouchard matju@artengine.ca wrote:
Depends on how you count it. You may also see it as a bunch of computers in which 0 computer can do task T in time N, but they can join together to form 1 (or more) computer(s) that can do task T in time N or less. In that sense, it's infinitely more powerful. This way of seeing it is much more important in realtime apps than in batch-compute-over-the-weekend apps.
You just always lose on efficiency whenever you use several threads or multiple nodes.
Do you understand what I say, or are you just repeating what I was replying to ?
If it's not going to be read, I may as well not write it.
On 2/13/12, Mathieu Bouchard matju@artengine.ca wrote:
Do you understand what I say, or are you just repeating what I was replying to ?
I thought I understood--was there something I missed? The point of the original remark is that you always lose some of your potential computing power when trying to use multiple resources. You contrast that with the capability of parallel computing to accomplish a certain amount of work in less time. I don't want to argue with you--these are just two sides of the same coin.
If it's not going to be read, I may as well not write it.
I had actually typed out and then deleted some more things about a successful project that accelerated computing time from weeks to hours, but I thought they were boring. And then I was late for a meeting with the same group I was about to write about! Succeeding at wasting my time indeed! -- Let's not make it parallel after all.
On 2012-02-14 at 11:14:00, Charles Henry wrote:
On 2/13/12, Mathieu Bouchard matju@artengine.ca wrote:
Do you understand what I say, or are you just repeating what I was replying to ?
I thought I understood--was there something I missed?
I wouldn't have known that from your reply. It seemed to just continue where you had left off. When I want to add more to something I said, I just reply to my own mail. I don't quote someone's mail to say something completely unrelated to the quote.
The point of the original remark is that you always lose some of your potential computing power when trying to use multiple resources.
Part of the computing power has to be put into communication and coordination, which are essential parts of collaboration. It doesn't necessarily mean that those resources could have been used for anything else. If you measure a programme's computing time without counting its I/O, it'll appear more efficient than if you counted the I/O. For multi-processor computing involving divide-and-conquer or pipelining, some of the work internal to the algorithm is I/O, and you can't avoid measuring it. This might skew perception a bit.
You contrast that with the capability of parallel computing to accomplish a certain amount of work in less time. I don't want to argue with you--these are just two sides of the same coin.
Yes, I was stating the other side, because there hadn't been any clarifications on what « the sum » and « the parts » ought to be in that case. People compare total amounts of gigaflops because they can, and because it's fairly simple to measure a score to determine a winner, which leads to competitions that take a lot of time and effort. This orients a lot of the default thinking about what is « the » goal of parallelism.
I had actually typed out and then deleted some more things
I also do that a lot. If I had kept them, I could make a book out of them, and store it in a random position in the Library of Babel.
about a successful project that accelerated computing time from weeks to hours, but I thought they were boring.
If you want to post it, post it, and if you don't want to, don't.
So, trying to get back to the original goal of this post, how about using a (uint)pd_context in lieu of static variables? :)
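Something like this, say (a completely hypothetical sketch--none of these names exist in Pd today):

#include "m_pd.h"

typedef unsigned int pd_context;

/* gather what are currently file-scope statics into one struct */
typedef struct _pdinstance
{
    double systime;       /* e.g. the scheduler's clock (m_sched.c) */
    t_int *dsp_chain;     /* e.g. the global dsp chain (d_ugen.c)  */
    int dsp_chainsize;
    /* ... and so on for every other static ... */
} t_pdinstance;

#define MAXPDINSTANCES 16
static t_pdinstance pd_instances[MAXPDINSTANCES];

/* every entry point takes the handle and indexes into the table */
static t_pdinstance *pd_getinstance(pd_context ctx)
{
    return &pd_instances[ctx];
}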
On Wed, Feb 15, 2012 at 5:09 AM, Mathieu Bouchard matju@artengine.ca wrote:
On 2012-02-14 at 11:14:00, Charles Henry wrote:
The point of the original remark is that you always lose some of your potential computing power when trying to use multiple resources.
Part of the computing power has to be put into communication and coordination, which are essential parts of collaboration.
I think it's something like a team developing on the same codebase; 2-3 people definitely get the job done faster than 1, partly because it is easy to discuss your work (or in computing, synchronize). But 4, 10, or 60 people working on the same project, well that doesn't get you anything more than the first 3. At that point, you need to redesign into components (whether it's your project or code).
Anywho, Peter B. mentioned that pd is being used in so many different situations these days that you can't really make any assumptions about synchronization at the interface level (largely due to libpd, but there are many other projects that use the pd code base). There are almost always two threads required--real-time audio and main--and this is dealt with in many different ways depending on the platform/situation.
Cheers, Rich
On 2012-02-16 at 01:06:00, Rich E wrote:
I think it's something like a team developing on the same codebase; 2-3 people definitely get the job done faster than 1, partly because it is easy to discuss your work (or in computing, synchronize).
There are few parallels to be drawn with the teamwork of developers, especially nowadays. The problems solved by computers are of a different nature than those solved by the people who program them. Problems tend to separate into two wildly different parts: the part that the computer does, and the part of figuring out what the computer should be doing.
Often, the interactions between 2 people (or a few people) give a boost that is hard to get when working alone, because of the mental patterns involved, which often lead 2 people to catch more mistakes, and to do it at more than twice the speed of 1 person working alone. There is no such effect for parallelism within programmes.
But 4, 10, or 60 people working on the same project, well that doesn't get you anything more than the first 3. At that point, you need to redesign into components (whether it's your project or code).
Wait a minute, do you mean that the first 3 developers didn't design their programme into components in the first place ? What's a « component » here ?
But also, projects are not all created equal, components aren't created equal, teams are not created equal, and developers are all different from each other. It's hard to find something general to say about projects in general (though it's easy to pretend to have found something).
So, wild claims about 10 people not being better than 3 are some kind of fiction. I mean that although there are surely many cases in which it would be true, it largely depends on whom you pick, for which kind of project, structured in which way.
On Jan 25, 2012, at 6:32 PM, Peter Brinkmann wrote:
The real question is whether there's anything to gain from this at all, or whether the overhead from parallelization will destroy any gains. I always remember the cautionary tale of a complex system that was carefully designed to work with an arbitrary number n of threads, until it was profiled and the designers found that it works best when n == 1.
What I see as the most difficult challenge in parallelizing Pd dsp processing is ensuring it remains deterministic. That seems like it would add a lot of overhead. The pd~ model seems like a good compromise: it lets you break a patch into sections that run on separate execution units, while making only basic attempts at guaranteeing deterministic execution. That means, for example, your audio can be on one CPU and your video on another, yet each logical chunk runs together as one process.
.hc
----------------------------------------------------------------------------
"We must become the change we want to see." - Mahatma Gandhi