On 22/02/14 06:28, Jonathan Wilkes wrote:
On 02/21/2014 06:41 AM, Simon Wise wrote:
Something to really make pd parallel would involve treating fan-outs as opportunities for the interpreter to launch each branch in a new thread, implementing the parallelism that is inherent in the dataflow paradigm (and already implied by pd defining fan-outs as executing in undefined order). The trigger object would still be used to force sequential execution where required, just as it is now.
Practically speaking, it's completely different for the control domain than for the signal domain. For signal-domain fanouts there's an understanding that Pd gets stuff done when it needs to get done. In the control domain, there's even a philosophy of _never_ having fanouts at all. I don't know what the effect would be of trying to auto-parallelize a signal diagram, but I'm pretty sure trying to auto-parallelize a control diagram wouldn't make much of a dent.
I was referring to parallelising control fanouts only, but I didn't make that clear. 'No fanouts, always use triggers' is a very sensible policy for avoiding easily overlooked bugs when, as in pd, a fanout is just an implied trigger with undefined order.
Certainly in many audio patches the messaging load is small compared to the dsp, except when you add lots of gui elements to the patch. For those patches, parallelising the messaging like this would indeed be pointless: a two-thread solution, with all the control interface in one instance of pd and all the dsp in another, both launched together from a script as the 'app' (or using [shell] to launch the dsp instance), makes a lot of sense ... there is an obvious split for a separate thread there. Since many modern computers are multicore, and the dsp only runs on as many threads as you can devise with [pd~], there is still plenty of idle cpu. I believe other languages have addressed parallelising the dsp graph in a more automated way, but in pd this is done explicitly. On a single-core Raspberry Pi or similar, the dsp thread can at least be given a higher priority so the audio isn't interrupted by too much interaction with the interface.
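To make that concrete, here is a rough C sketch of launching the two instances together as one 'app' (a shell script would do the same job). The patch names gui-main.pd and dsp-main.pd are only placeholders, and the two patches would talk to each other over [netsend]/[netreceive] on localhost:

/* Launch a headless DSP instance and a GUI/control instance of Pd as
 * one pair.  "dsp-main.pd" and "gui-main.pd" are placeholder patch
 * names; -nogui and -rt are standard pd flags. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t launch(char *const argv[])
{
    pid_t pid = fork();
    if (pid == 0) {                  /* child: replace with pd */
        execvp(argv[0], argv);
        perror("execvp");
        _exit(1);
    }
    return pid;
}

int main(void)
{
    /* headless dsp instance, realtime scheduling where available */
    char *dsp_argv[] = {"pd", "-nogui", "-rt", "dsp-main.pd", NULL};
    /* interface instance: gui objects only, no ~ objects */
    char *gui_argv[] = {"pd", "gui-main.pd", NULL};

    pid_t dsp = launch(dsp_argv);
    pid_t gui = launch(gui_argv);
    if (dsp < 0 || gui < 0) {
        perror("fork");
        return 1;
    }
    /* wait on both so the pair behaves like a single app */
    waitpid(gui, NULL, 0);
    waitpid(dsp, NULL, 0);
    return 0;
}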
However pd is used for much more than audio. Dataflow programming is inherently parallel, but pd's implementation comes from a single-core history (well, a single messaging core controlling a separate dsp if you go back far enough) and is sequential. Hence the whole trigger <-> fanout discussion: in pd, fanouts are not really dataflow fanouts at all, just ill-defined triggers. The implementation is a sequential depth-first tree traversal, and triggers make that order explicit.
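As a toy illustration (nothing like pd's actual source), this is roughly what that traversal means: a message sent from an outlet just visits each connection in whatever order the connection list happens to hold, finishing one branch completely before starting the next; [trigger] is simply an object whose outlets fire in a documented right-to-left order.

/* Toy model of depth-first message propagation, not Pd's real code. */
#include <stdio.h>

#define MAX_CONNS 8

typedef struct node {
    const char *name;
    struct node *conns[MAX_CONNS];   /* downstream objects on one outlet */
    int nconns;
} node;

/* depth-first, single-threaded propagation: each branch runs to
 * completion before the next connection is visited */
static void send_bang(node *n)
{
    printf("bang -> %s\n", n->name);
    for (int i = 0; i < n->nconns; i++)
        send_bang(n->conns[i]);
}

int main(void)
{
    node left  = {"left branch",  {0}, 0};
    node right = {"right branch", {0}, 0};
    /* a plain fan-out: the order of conns[] is an accident of patching,
     * which is exactly why it counts as an ill-defined trigger */
    node fan = {"fan-out", {&left, &right}, 2};
    send_bang(&fan);
    return 0;
}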
Even the dsp<->gui problem would be addressed by a proper dataflow implementation, if it were done well. Keeping all the gui stuff in branches that don't contain ~ objects should result in those branches running as separate threads, and, well implemented, those threads would not be allowed to block the ~ branches. Splitting the dsp graph itself where ~ objects form a distinct dataflow branch would also make sense (there would need to be some decisions about exactly what 'distinct' means in this context). A good implementation would follow the lead of other languages and not just create zillions of system threads to throw at the OS, but instead group them into a smaller number of ongoing system-level processes. Writing and optimising this would be a huge project, and a patch run in a dataflow implementation could not behave in exactly the same way as it would in a sequential one.
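As a very rough sketch of that grouping idea (purely illustrative, pthreads, nothing taken from pd itself): a fixed pool of worker threads pulls branch tasks from a queue, so however many control branches a patch fans out into, the OS only ever sees a handful of threads. Build with -lpthread.

/* Illustrative worker pool: many dataflow branches, few OS threads. */
#include <pthread.h>
#include <stdio.h>

#define NWORKERS 4
#define NTASKS   16

typedef void (*branch_fn)(int);

static branch_fn queue[NTASKS];
static int queue_arg[NTASKS];
static int head, tail, done;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

static void control_branch(int id)   /* stands in for one gui/control branch */
{
    printf("branch %d handled by a pool thread\n", id);
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        while (head == tail && !done)
            pthread_cond_wait(&cond, &lock);
        if (head == tail && done) {
            pthread_mutex_unlock(&lock);
            return NULL;
        }
        branch_fn fn = queue[head];
        int id = queue_arg[head];
        head++;
        pthread_mutex_unlock(&lock);
        fn(id);                       /* run one branch to completion */
    }
}

int main(void)
{
    pthread_t workers[NWORKERS];
    for (int i = 0; i < NWORKERS; i++)
        pthread_create(&workers[i], NULL, worker, NULL);

    pthread_mutex_lock(&lock);
    for (int i = 0; i < NTASKS; i++) {    /* each fan-out branch becomes a task */
        queue[i] = control_branch;
        queue_arg[i] = i;
        tail++;
    }
    done = 1;                             /* no further tasks after these */
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);

    for (int i = 0; i < NWORKERS; i++)
        pthread_join(workers[i], NULL);
    return 0;
}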
But it is still an interesting thought experiment when thinking about the future of pd in a world where a single-threaded, sequential implementation is becoming increasingly problematic ... computers have been getting faster by adding cores rather than by increasing clock speeds for some time now, and that is not likely to change any time soon (quantum computing would be a whole new game, and none of this would be relevant).
Simon