With the small exception that, as Hans mentioned, two cores will be of benefit because the graphics process can run on its own core.
the benefit is so minimal that it's hardly worth mentioning ... just run your favorite patch and look at the cpu time used ... (for the patches that i tested, the cpu time used by the gui process is less than 0.1% of the time used by the kernel)
the only way to make use of all cores of a multicore machine is to run several instances of pd, connected via jackdmp.
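something like the following could get a two-instance setup running; this is only a rough python sketch, assuming a pd built with jack support (the -jack, -nogui and -jackname flags) and the jack_connect utility that ships with jack/jackdmp ... the patch names, client names and port names are made up for illustration:

    # launch two pd instances as separate jack clients, then wire them together.
    # assumes pd was built with jack support; patch/client/port names are made up.
    import subprocess, time

    instances = [
        ("pd-synth", "synth.pd"),    # heavy dsp patch, gets its own core
        ("pd-fx",    "effects.pd"),  # second instance, second core
    ]

    procs = [
        subprocess.Popen(["pd", "-jack", "-nogui", "-jackname", name, patch])
        for name, patch in instances
    ]

    time.sleep(2)  # crude: wait until both clients have registered with jackdmp

    # route the synth instance into the effects instance (port names illustrative)
    subprocess.call(["jack_connect", "pd-synth:output0", "pd-fx:input0"])
    subprocess.call(["jack_connect", "pd-synth:output1", "pd-fx:input1"])

the audio routing between the instances then goes through jackdmp, which can schedule each pd client in parallel on its own core.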
Now *there's* an idea. Would that really work? What would be the downside -- aside from the memory needed to run multiple copies of PD?
the problems are:
- it is always a question whether you can manually split your dsp graph in a reasonable way ...
- each instance runs its own scheduler, pd's (which is less efficient than nova's :) ... so using _many_ pd instances is probably a bad idea
- communication between the instances: simple for controls (OSC or netsend/receive), difficult for shared resources (buffers, busses) ... a small sketch of the control side follows below
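for the control side, here is a minimal sketch of what "simple" means, assuming a [netreceive 3000] object in the receiving pd instance ... the port number and the message selector are made up:

    import socket

    # send a FUDI-style control message to a [netreceive 3000] object
    # in another pd instance (port and selector are made up)
    s = socket.create_connection(("127.0.0.1", 3000))
    s.sendall(b"cutoff 1200;\n")  # pd messages end with a semicolon
    s.close()

shared resources like buffers or signal busses have no equally cheap equivalent, which is why they remain the hard part.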
I can imagine a very powerful modular system built on this model.
i somehow doubt that it would make sense to use a jackdmp-style multicore scheduling algorithm for a max/pd/nova dsp graph, which can easily contain thousands of nodes (jack graphs are usually rather small), because of the scheduling overhead ...
however, i was thinking about ways to implement a hybrid system with automatic segmentation of the dsp graph into parallel dsp chains that can be scheduled with a dataflow algorithm ... but it would require lots of performance tests to tweak the heuristics of the graph segmentation ... so far i have had neither time nor funding ... (but maybe it is an interesting topic for my master's thesis?)
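to make the idea a bit more concrete, here is a toy python sketch (not pd's or nova's code, just an illustration with a made-up graph and heuristic): the dsp graph is greedily segmented into linear chains, and whole chains, rather than single nodes, are then dispatched to a small thread pool level by level, so the scheduling overhead is paid per chain:

    from collections import defaultdict
    from concurrent.futures import ThreadPoolExecutor

    # toy dsp graph: node -> downstream nodes (a DAG); names are made up
    graph = {
        "osc1": ["filt1"], "osc2": ["filt2"],
        "filt1": ["mix"],  "filt2": ["mix"],
        "mix":   ["out"],  "out":   [],
    }

    def predecessors(g):
        preds = defaultdict(list)
        for n, outs in g.items():
            for o in outs:
                preds[o].append(n)
        return preds

    def segment_into_chains(g):
        """greedy heuristic: merge runs of nodes with exactly one predecessor
        and one successor, so the scheduler sees far fewer tasks than nodes."""
        preds = predecessors(g)
        absorbed = {n for n in g
                    if len(preds[n]) == 1 and len(g[preds[n][0]]) == 1}
        chains = []
        for n in g:
            if n in absorbed:
                continue
            chain, cur = [n], n
            while len(g[cur]) == 1 and len(preds[g[cur][0]]) == 1:
                cur = g[cur][0]
                chain.append(cur)
            chains.append(chain)
        return chains, preds

    def run_block(chains, preds, workers=2):
        """one audio block: run chains level by level; chains in the same level
        have no dependencies on each other, so they can go to different cores."""
        chain_of = {n: i for i, c in enumerate(chains) for n in c}
        deps = [{chain_of[p] for n in c for p in preds[n]} - {i}
                for i, c in enumerate(chains)]

        def perform(chain):
            for node in chain:
                pass  # stand-in for the real dsp perform routine of each node

        done = set()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            while len(done) < len(chains):
                ready = [i for i in range(len(chains))
                         if i not in done and deps[i] <= done]
                list(pool.map(perform, (chains[i] for i in ready)))
                done |= set(ready)

    chains, preds = segment_into_chains(graph)
    run_block(chains, preds)

the heuristics for where to cut the graph are exactly the part that would need the performance tests mentioned above.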
tim
-- tim@klingt.org ICQ: 96771783 http://tim.klingt.org
Nothing exists until or unless it is observed. An artist is making something exist by observing it. And his hope for other people is that they will also make it exist by observing it. I call it 'creative observation.' Creative viewing. William S. Burroughs