I'm really happy to see this conversation.
On Fri, Jan 27, 2012 at 7:45 AM, Charles Henry <czhenry@gmail.com> wrote:
On Wed, Jan 25, 2012 at 5:32 PM, Peter Brinkmann <peter.brinkmann@googlemail.com> wrote:
I don't think users have anything to gain from fine-grained control of threads. That seems like an optimization hint that may or may not be helpful, depending on a lot of factors that are not obvious and will differ from machine to machine. In any case, I don't want to have to think about threads when patching any more than I want to think about, say, NEON optimizations.
I'm still making the case here: Suppose you're writing a patch and you run up against the limitations of a single-threaded process. Then you move some portion of it into a sub-patch and drop in a "thread~" object. You're able to selectively add the functionality where it matters to you *and* only when you actually need it.
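To be concrete about the mechanics, what I have in mind is the usual block-ahead handoff: a worker thread renders the sub-patch's block while the audio thread consumes the one computed previously, at the cost of one extra block of latency. Here's a bare-bones sketch in C (the names and structure are made up for illustration, not actual Pd code, and a real implementation would want a lock-free handoff instead of taking a mutex in the audio callback):

/* Sketch of the idea behind a hypothetical [thread~]: a worker thread
 * renders the sub-patch's block while the audio thread consumes the
 * previous one.  Everything here is illustrative only. */
#include <pthread.h>
#include <string.h>

#define BLOCKSIZE 64

typedef struct _threadtilde {
    float x_inbuf[BLOCKSIZE];    /* input handed to the worker        */
    float x_outbuf[BLOCKSIZE];   /* result picked up by audio thread  */
    int x_pending;               /* 1 while the worker has work to do */
    pthread_t x_worker;
    pthread_mutex_t x_lock;
    pthread_cond_t x_cond;
} t_threadtilde;

static void *threadtilde_worker(void *z)
{
    t_threadtilde *x = (t_threadtilde *)z;
    for (;;)
    {
        pthread_mutex_lock(&x->x_lock);
        while (!x->x_pending)
            pthread_cond_wait(&x->x_cond, &x->x_lock);
        /* ... run the sub-patch's DSP chain: x_inbuf -> x_outbuf ... */
        x->x_pending = 0;
        pthread_mutex_unlock(&x->x_lock);
    }
    return 0;
}

/* called once per block from the audio thread */
static void threadtilde_perform(t_threadtilde *x, float *in, float *out)
{
    pthread_mutex_lock(&x->x_lock);
    memcpy(out, x->x_outbuf, sizeof(x->x_outbuf));   /* previous block */
    memcpy(x->x_inbuf, in, sizeof(x->x_inbuf));      /* queue next one */
    x->x_pending = 1;
    pthread_cond_signal(&x->x_cond);
    pthread_mutex_unlock(&x->x_lock);
}

static void threadtilde_init(t_threadtilde *x)
{
    memset(x, 0, sizeof(*x));
    pthread_mutex_init(&x->x_lock, 0);
    pthread_cond_init(&x->x_cond, 0);
    pthread_create(&x->x_worker, 0, threadtilde_worker, x);
}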
Isn't this problem addressed with the [pd~] object? It runs its patches in its own process instead of a thread, and I'm not sure why, but it will do what you're describing, no?
The generalizable case is certainly preferable, I agree, but as you say further on, it can incur significant overhead and may not be appropriate for all applications.
I see the next important step as making the general cases easier to handle. A per-thread context such as IOhannes and Peter describe above seems like the best approach to letting a program run multiple instances of pd in a much more predictable manner, while still allowing for backwards compatibility (via a default 'legacy' context). I see parallel processing as a different topic, although it will be easier to implement once the static variables are taken care of.
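Roughly, what I picture is pulling the statics into a context struct that each thread selects before calling into pd, with a built-in default context so existing single-instance code behaves exactly as before. A sketch (all names hypothetical, just to show the shape of it):

#include <stdlib.h>

/* Hypothetical per-instance context: each field stands in for one of
 * the static/global variables currently scattered through the sources. */
typedef struct _pdcontext {
    int c_dspstate;                 /* was: a static "DSP on/off" flag */
    float c_samplerate;             /* was: a static sample rate       */
    struct _canvaslist *c_canvases; /* patches owned by this instance  */
} t_pdcontext;

/* the default "legacy" context keeps old single-instance code working */
static t_pdcontext pd_defaultcontext;

/* which context the current thread operates on; thread-local (gcc's
 * __thread for brevity), so each thread can drive its own instance
 * without touching the others */
static __thread t_pdcontext *pd_this = &pd_defaultcontext;

t_pdcontext *pdcontext_new(void)
{
    return (t_pdcontext *)calloc(1, sizeof(t_pdcontext));
}

void pdcontext_set(t_pdcontext *c)  /* select an instance for this thread */
{
    pd_this = c ? c : &pd_defaultcontext;
}

Internal code would then read pd_this->c_samplerate and so on instead of the globals, which is a mechanical (if large) change once the statics are enumerated.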