I once wrote such a toolset that automatically scales up with multiple threads across the whole network. it worked by detecting cycles and signal splits in the graph, segmenting it into autonomous sequential parts, and adding some smart, lightweight locks everywhere signals split or merged. it even reassigned threads at the lock level to "balance" the workload across the graph and prevent deadlocks. the code is/was around 2.5k lines of c++ and a bloody mess :) so, i don't know much about the internals of pd, but it'd probably be possible.
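to give an idea, one of those lightweight merge locks could have looked roughly like this (a simplified sketch from memory, names made up, not the actual code): the merging node may only run once all incoming branches have finished their block.

    #include <condition_variable>
    #include <mutex>

    struct MergeGate {
        int pending;              // branches still running for this block
        const int num_inputs;
        std::mutex m;
        std::condition_variable cv;

        explicit MergeGate(int n) : pending(n), num_inputs(n) {}

        // called by a worker thread when one upstream branch is done
        void branch_done() {
            std::lock_guard<std::mutex> lock(m);
            if (--pending == 0)
                cv.notify_one();   // wake the thread that owns the merge node
        }

        // called by the thread that will compute the merging node
        void wait_all() {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return pending == 0; });
            pending = num_inputs;  // re-arm for the next dsp block
        }
    };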
Could I see your code? I am not so literate with threading or scheduling, so I would like to see if I can read it and follow along with you.
detaching ffts (i.e. canvases with blocksizes larger than 64) should be rather trivial ...
distributing a synchronous dsp graph to several threads is not trivial, especially when it comes to a huge number of nodes. for small numbers of nodes the approach of jackdmp, using dynamic dataflow scheduling, is probably usable, but for huge dsp graphs the synchronization overhead is probably too big, so the graph would have to be split into parallel chunks which are then scheduled ...
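roughly, the dynamic dataflow idea looks like this (a hedged sketch, not jackdmp's actual code): every node keeps an atomic count of unresolved inputs, and whoever finishes a node decrements its successors and pushes any node that reaches zero onto a shared ready queue.

    #include <atomic>
    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <vector>

    struct Node {
        std::atomic<int> unresolved{0};   // inputs not yet computed this block
        std::vector<Node*> successors;
        void perform() { /* the node's dsp perform routine would run here */ }
    };

    struct ReadyQueue {                   // the shared synchronization point
        std::queue<Node*> q;
        std::mutex m;
        std::condition_variable cv;

        void push(Node* n) {
            { std::lock_guard<std::mutex> lock(m); q.push(n); }
            cv.notify_one();
        }
        Node* pop() {
            std::unique_lock<std::mutex> lock(m);
            cv.wait(lock, [this] { return !q.empty(); });
            Node* n = q.front(); q.pop();
            return n;
        }
    };

    // each worker thread loops over this
    // (re-arming the counters at the start of each dsp block is omitted)
    void worker(ReadyQueue& ready) {
        for (;;) {
            Node* n = ready.pop();
            n->perform();
            for (Node* s : n->successors)
                if (s->unresolved.fetch_sub(1) == 1)   // last input just arrived
                    ready.push(s);
        }
    }

the ready queue is exactly where the synchronization overhead piles up once the graph gets huge, which is why splitting the graph into larger parallel chunks starts to pay off.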
This approach makes a lot of sense. A lot of parts of the dsp graph are in effect "written" as parallel subroutines, as you describe.
true, i didn't try big graphs, so i can't really say how it would behave. it was more a fun project to see if it was doable. at the time i had the impression that the locking and the re-assignment of threads were quite efficient and done only on demand, when the graph had more sequential parts than the number of created threads; i'm curious how it could be achieved in a lock-free way.
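one way i could imagine doing it lock-free (just a sketch, untested): keep the dependency counters as plain atomics and make the ready list a lock-free stack that workers push to and pop from with compare_exchange.

    #include <atomic>

    struct Node {
        std::atomic<int> unresolved{0};
        Node* next_ready = nullptr;   // intrusive link for the ready stack
    };

    struct ReadyStack {
        std::atomic<Node*> head{nullptr};

        void push(Node* n) {
            Node* old = head.load(std::memory_order_relaxed);
            do {
                n->next_ready = old;
            } while (!head.compare_exchange_weak(old, n,
                         std::memory_order_release, std::memory_order_relaxed));
        }

        // returns nullptr if nothing is ready yet
        // (ignores the ABA problem for brevity)
        Node* pop() {
            Node* old = head.load(std::memory_order_acquire);
            while (old &&
                   !head.compare_exchange_weak(old, old->next_ready,
                         std::memory_order_acquire, std::memory_order_relaxed))
                ;
            return old;
        }
    };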
Well, some kinds of serial processing could be made parallel.... What comes to mind is a topic in cognitive psychology. Early models assumed that processing was sequential, discrete, and serial. A hypothetical model of word recognition might include stages such as perception, encoding, and identification. But in fact, the processes proceed continuously and in parallel, using partial information from preceding and following stages. Or another analogy: when playing arpeggios on guitar, you don't have to put all of your left-hand fingers in place before playing the notes with the right hand. You only have to put one finger down at a time, before playing the corresponding string.
Timing without locks would be very tricky, and would be analogous to continuous processes. You could run into problems where not enough information is present for the next stage to run. Plus, some types of processing (like ffts) rely on having the whole block in order to run.
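For instance, an fft stage can only fire once a whole block has been accumulated. A toy sketch (the block size and names are hypothetical):

    #include <cstddef>
    #include <vector>

    struct BlockAccumulator {
        static const std::size_t kBlockSize = 1024;   // fft size, hypothetical
        std::vector<float> buf;

        // returns true only when a complete block is available to transform
        bool push_samples(const float* in, std::size_t n) {
            buf.insert(buf.end(), in, in + n);
            return buf.size() >= kBlockSize;
        }

        // caller runs the fft on block() once push_samples() returned true
        const float* block() const { return buf.data(); }
        void consume_block() { buf.erase(buf.begin(), buf.begin() + kBlockSize); }
    };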
about the issue of explicitly threading parts of the graph (which came up later on in the discussion), i must say i don't get why you would want to do it. seeing how the number of cores is about to increase, i'd say it is counterproductive with respect to the technological development of the hardware, the software running on top of it lagging behind, as well as the steady implicit maintenance of the software involved. from my point of view a graphical dataflow language has the perfect semantics to express the parallelism of a program in an intuitive way. therefore, rather than adding constructs for explicit parallelism to a language that can express it anyhow, i'd say that adding constructs for explicit serialization of a process makes more sense. maybe i'm talking nonsense here, please correct me.
I thought that pdsend and pdreceive could be used to run pd in a separate thread (really a sub-process) and send data between them. What Mathieu suggested is a bit simpler, but is functionally the same.
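As I understand it, pdsend just opens a TCP connection to a [netreceive] in the other pd instance and writes semicolon-terminated FUDI messages. Something like this (the port and message are made up; it assumes a [netreceive 3000] in the sub-process's patch):

    #include <cstring>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main() {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(3000);                  // [netreceive 3000] assumed
        inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
        if (connect(fd, (sockaddr*)&addr, sizeof(addr)) == 0) {
            const char* msg = "freq 440;\n";          // any FUDI message
            send(fd, msg, std::strlen(msg), 0);
        }
        close(fd);
        return 0;
    }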
Later, Chuck
so long... Niklas