Hi!
Anyone know if PD is aided by SMP? I'm throwing together a new box
in which I will install UbuntuStudio with the intention of using it
as a live processing rig for live performance using PD. I'm
interested in discovering whether it makes more sense to get a Core 2
Quad processor that runs with a 1066 MHz bus or a Core 2 Duo
processor that runs with a 1333 MHz bus. The idea is to reduce
latency as much as possible. I'll get I/O via an RME 9652 and a
couple of Behringer ADA8000 units. Also, any thoughts on how RAM speed
plays into this equation? I'm looking at the Intel DG33FB board.
Matthew Polashek matt@tinysongs.com www.tinysongs.com www.JandMJazz.com www.BLDBand.com
The DSP and the GUI run in separate processes, so there is some
benefit. But the DSP is all in one process, so that will always run
on one CPU. AFAIK, Gem processing will also be included in that one
process, while PDP has the ability to split the processing out into a
separate thread.
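(On a typical Linux box you can see this directly: the audio engine and the GUI show up as two separate processes. Something like 'ps -C pd -o pid,psr,%cpu,comm' lists the pd process and which core it is currently running on, with the GUI appearing separately as a wish/pd-gui process depending on the Pd version, and you can pin the audio process to one core with e.g. 'taskset -c 0 pd -rt yourpatch.pd'. Those are stock Linux tools; the patch name is just a placeholder.)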
.hc
You can't steal a gift. Bird gave the world his music, and if you can
hear it, you can have it. - Dizzy Gillespie
Ok, that's great information! Thanks! Looks like 2 cores are more
useful than one so far.
Matt
Matthew Polashek matt@tinysongs.com www.tinysongs.com www.JandMJazz.com www.bldband.com
Anyone know if PD is aided by SMP?
basically no ...
I'm throwing together a new box in which I will install UbuntuStudio with the intention of using it as a live processing rig for live performance using PD. I'm interested in discovering whether it makes more sense to get a Core 2 Quad processor that runs with a bus speed of 1066 or a Core 2 Duo processor that runs with a bus speed of 1333.
to make use of a multicore machine, the only way to utilize all cores is to run several instances of pd connected via jackdmp.
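e.g. start each instance with something like 'pd -jack -nogui mypatch.pd' and wire them up with jack_connect ... check the actual client and port names with jack_lsp first, since they depend on how each pd instance registers with jack ... (the patch name here is just a placeholder)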
Also, any thoughts on how ram speed plays into this equation?
that depends on your synthesis algorithms ... if you do lots of table-based synthesis (sampling, delays), ram speed does matter ... for pure synthesis applications, the code doesn't require so many memory transactions, and the used memory chunks are likely to fit into the cpu caches ...
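rough numbers to illustrate (sizes picked just for the example): a one-second delay line at 44.1 kHz in single-precision floats is 44100 * 4 bytes, roughly 172 KB, which is bigger than a typical 32 KB L1 data cache, so it keeps going out to L2 or RAM ... a 512-point oscillator wavetable is 512 * 4 bytes = 2 KB and stays in cache ...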
hth, tim
-- tim@klingt.org ICQ: 96771783 http://tim.klingt.org
Nothing exists until or unless it is observed. An artist is making something exist by observing it. And his hope for other people is that they will also make it exist by observing it. I call it 'creative observation.' Creative viewing. William S. Burroughs
Tim Blechmann wrote:
Anyone know if PD is aided by SMP?
basically no ...
With the small exception that, as Hans mentioned, two cores will be of benefit because the graphics process can run on its own core.
I'm throwing together a new box in which I will install UbuntuStudio with the intention of using it as a live processing rig for live performance using PD. I'm interested in discovering whether it makes more sense to get a Core 2 Quad processor that runs with a bus speed of 1066 or a Core 2 Duo processor that runs with a bus speed of 1333.
to make use of a multicore machine the only way to utilize all cores is to run several instances of pd, that are connected via jackdmp.
Now *there's* an idea. Would that really work? What would be the downside -- aside from the memory needed to run multiple copies of PD? I can imagine a very powerful modular system built on this model.
Phil Stone http://pkstonemusic.com/pubmusic.html
Tim Blechmann wrote:
Anyone know if PD is aided by SMP?
basically no ...
With the small exception that, as Hans mentioned, two cores will be
of benefit because the graphics process can run on its own core.
This is not insignificant.
I'm throwing together a new box in which I will install UbuntuStudio with the intention of using it as a live processing rig for live performance using PD. I'm interested in discovering whether it
makes more sense to get a Core 2 Quad processor that runs with a bus speed of 1066 or a Core 2 Duo processor that runs with a bus speed of
1333.
to make use of a multicore machine, the only way to utilize all cores is to run several instances of pd connected via jackdmp.
Now *there's* an idea. Would that really work? What would be the
downside -- aside from the memory needed to run multiple copies of
PD? I can imagine a very powerful modular system built on this model.
More importantly, can multiple instances of PD run under one user or
would it need to be under different users? Also, would they be able
to communicate with each other, or would Jackdmp handle the
intercommunication?
Matt
On Tue, 21 Aug 2007, Matthew Polashek wrote:
Tim Blechmann wrote:
Anyone know if PD is aided by SMP?
basically no ...
With the small exception that, as Hans mentioned, two cores will be of benefit because the graphics process can run on its own core.
This is not insignificant.
This affects the GUI and the display server. Anything [pix_...] runs in the same process as the rest.
I've started to accelerate TkCanvas so that it takes less CPU and/or GPU. I often get as much as a fourfold reduction in CPU usage, but there are downsides that need to be corrected as well. TkCanvas needs acceleration in general for a wide range of situations. I have more ideas to try.
More importantly, can multiple instance of PD run under one user or would it need to be under different users? Also, would they be able to communicate with each other, or would Jackdmp handle the intercommunication?
You can always run Pd several times as the same user at the same time, although on OSX you have to do something slightly special, as was mentioned here earlier this month.
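For example, under one user you can simply do 'pd -jack patchA.pd' in one shell and 'pd -jack patchB.pd' in another (the patch names are placeholders); each instance registers as its own jack client, jack routes the audio between them, and control data goes over sockets ([netsend]/[netreceive] or OSC).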
Mathieu Bouchard - tél:+1.514.383.3801, Montréal QC Canada
With the small exception that, as Hans mentioned, two cores will be of benefit because the graphics process can run on its own core.
the benefit is so minimal that it's hardly worth mentioning ... just run your favorite patch and look at the used cpu time ... (for the patches that i tested, the cpu time used by the gui process is less than 0.1% of the time used by the kernel)
to make use of a multicore machine, the only way to utilize all cores is to run several instances of pd connected via jackdmp.
Now *there's* an idea. Would that really work? What would be the downside -- aside from the memory needed to run multiple copies of PD?
the problems are:
- scalability: you need (at least) as many pd instances as cpu cores ... it is always the question, if you can manually split your dsp graph in a reasonable way ...
- performance: jackdmp's dsp graph scheduling is less efficient than pd's (which is less efficient than nova's :) ... so using _many_ pd instances is probably a bad idea
- communication overhead: you need to synchronize the instances ... easy for simple controls (OSC or netsend/receive), difficult for shared resources (buffers, busses)
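for the simple-control case the wire format is trivial ... a bare-bones c sketch of pushing one message into a pd instance that contains [netreceive 3000] (the port number and the "tempo 120" message are just placeholders, and error handling is minimal):

/* minimal sketch, not a drop-in tool: pd's netsend/netreceive speak FUDI,
 * i.e. whitespace-separated atoms terminated by a semicolon, sent over a
 * plain tcp socket by default */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);     /* tcp, like a plain [netreceive] */
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(3000);                  /* must match [netreceive 3000] */
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");

    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }
    const char *msg = "tempo 120;\n";             /* FUDI: atoms plus ';' */
    if (write(fd, msg, strlen(msg)) < 0)
        perror("write");
    close(fd);
    return 0;
}

on the receiving side, [netreceive 3000] outputs the message and something like [route tempo] picks it apart ...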
I can imagine a very powerful modular system built on this model.
i somehow doubt that it would make sense to use a jackdmp-style multicore scheduling algorithm for a max/pd/nova dsp graph, which can easily contain thousands of nodes (jack graphs are usually rather small), because of the scheduling overhead ...
however, i was thinking about ways to implement a hybrid system with automatic segmentation of the dsp graph into parallel dsp chains that can be scheduled with a dataflow algorithm ... but it would require lots of performance tests to tweak the heuristics of the graph segmentation ... for now, i had neither time nor funding ... (but maybe it is an interesting topic for my master thesis?)
tim
Tim Blechmann wrote:
With the small exception that, as Hans mentioned, two cores will be of benefit because the graphics process can run on its own core.
the benefit is so minimal that it's hardly worth mentioning ... just run your favorite patch and look at the used cpu time ... (for the patches that i tested, the cpu time used by the gui process is less than 0.1% of the time used by the kernel)
It's nice to use vu-meters without affecting the cpu available to audio patches on a core-duo. The UI process gets up to about 5% (of one processor) on my most complex patch; it's nice to keep that separate from the audio, of which I tend to need as much as I can get.
I'm not disagreeing, really, it's not that significant, but it's better than nothing.
to make use of a multicore machine, the only way to utilize all cores is to run several instances of pd connected via jackdmp.
Now *there's* an idea. Would that really work? What would be the downside -- aside from the memory needed to run multiple copies of PD?
the problems are:
- scalability: you need (at least) as many pd instances as cpu cores...
it is always the question, if you can manually split your dsp graph in a reasonable way ...
That's what the modular design would accomplish. Each module would have, at a minimum, audio outputs and optional audio inputs.
Come to think of it, this probably wouldn't work very well unless simple control messages of some kind (OSC, netpd, actual PD messages) could pass between the instances, too -- otherwise, each module would have to be set up and initialized separately, which would be time-consuming in a large system.
- performance: jackdmp's dsp graph scheduling is less efficient than
pd's (which is less efficient than nova's :) ... so using _many_ pd instances is probably a bad idea
- communication overhead: you need to synchronize the instances ... easy
for simple controls (OSC or netsend/receive) difficult for shared resources (buffers, busses)
So jackdmp wouldn't be good at patching, say, 32 different generation modules (constituting entire "synthesizers") into a nice long, patchable filter chain to final audio output? Rats. That's critical to this being a viable fantasy.
I can imagine a very powerful modular system built on this model.
i somehow doubt that it would make sense to use a jackdmp-style multicore scheduling algorithm for a max/pd/nova dsp graph, which can easily contain thousands of nodes (jack graphs are usually rather small), because of the scheduling overhead ...
That's too bad. I'll take your word for it that jackdmp wouldn't be able to manage the inter-process connection in a scalable way -- I'm not familiar with how it works. I'm disappointed, because it sounded like a cheap (and yes, slightly inconvenient -- but better than nothing) way to scale up PD with SMP.
however, i was thinking about ways to implement a hybrid system with automatic segmentation of the dsp graph into parallel dsp chains that can be scheduled with a dataflow algorithm ... but it would require lots of performance tests to tweak the heuristics of the graph segmentation ... for now, i had neither time nor funding ... (but maybe it is an interesting topic for my master thesis?)
I can tell you're talking about the *right* way to do all this. I'm just hoping there's some interim possibility, because even by this time next year, we'll be seeing a lot more n-cores where n > 2.
Best of luck to you in your endeavors, Tim (especially Nova).
Phil
That's too bad. I'll take your word for it that jackdmp wouldn't be able to manage the inter-process connection in a scalable way -- I'm not familiar with how it works. I'm disappointed, because it sounded like a cheap (and yes, slightly inconvenient -- but better than nothing) way to scale up PD with SMP.
if you can split your pd patches to a small number of instances (say 10), you'd probably have some benefit ... running 50 instances of pd is probably not a good idea ...
cheers, tim
-- tim@klingt.org ICQ: 96771783 http://tim.klingt.org
The price an artist pays for doing what he wants is that he has to do it. William S. Burroughs
On Tuesday 21 August 2007 22:25, Tim Blechmann wrote:
i somehow doubt that it would make sense to use a jackdmp-style multicore scheduling algorithm for a max/pd/nova dsp graph, which can easily contain thousands of nodes (jack graphs are usually rather small), because of the scheduling overhead ...
however, i was thinking about ways to implement a hybrid system with automatic segmentation of the dsp graph into parallel dsp chains that can be scheduled with a dataflow algorithm ... but it would require lots of performance tests to tweak the heuristics of the graph segmentation ... for now, i had neither time nor funding ... (but maybe it is an interesting topic for my master thesis?)
Have you considered delegating these worries to something along the lines of threadweaver, which is designed for just this?
http://developer.kde.org/documentation/library/cvs-api/kdelibs-apidocs/threa...
robert.
On 8/21/07, Tim Blechmann tim@klingt.org wrote:
to make use of a multicore machine, the only way to utilize all cores is to run several instances of pd connected via jackdmp.
Now *there's* an idea. Would that really work? What would be the downside -- aside from the memory needed to run multiple copies of PD?
the problems are:
- scalability: you need (at least) as many pd instances as cpu cores...
it is always the question, if you can manually split your dsp graph in a reasonable way ...
- performance: jackdmp's dsp graph scheduling is less efficient than
pd's (which is less efficient than nova's :) ... so using _many_ pd instances is probably a bad idea
- communication overhead: you need to synchronize the instances ... easy
for simple controls (OSC or netsend/receive) difficult for shared resources (buffers, busses)
one additional problem: some algorithms are exclusively serial.... This is a problem some scientists face when they bring their Matlab code to run on a cluster. They write the algorithms serially, then expect them to perform faster. The programmer has to know serial vs. parallel programming techniques, and when parallelism is possible and when it is not.
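A tiny illustration in plain C (nothing Pd-specific, function names made up): the gain loop below has fully independent iterations and could be split across cores, while the one-pole filter depends on its own previous output sample, so its iterations can't simply be handed to different cores.

#include <stddef.h>

/* embarrassingly parallel: out[i] depends only on in[i] */
void apply_gain(const float *in, float *out, size_t n, float g)
{
    for (size_t i = 0; i < n; i++)
        out[i] = g * in[i];
}

/* inherently serial: y[n] = x[n] + a * y[n-1] */
void one_pole(const float *in, float *out, size_t n, float a, float y1)
{
    for (size_t i = 0; i < n; i++) {
        y1 = in[i] + a * y1;
        out[i] = y1;
    }
}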
I can imagine a very powerful modular system built on this model.
however, i was thinking about ways to implement a hybrid system with automatic segmentation of the dsp graph into parallel dsp chains that can be scheduled with a dataflow algorithm ... but it would require lots of performance tests to tweak the heuristics of the graph segmentation ... for now, i had neither time nor funding ... (but maybe it is an interesting topic for my master thesis?)
Agreed, it is an interesting topic. But maybe a generic solution (one that applies to all multi-processor systems) is not the best way to go. How about just concerning yourself with one instance, one specific set of hardware (for example, the Storm-1 DSP from Stream Processors, or (cheaper) a quad-core Intel)? That would be significant by itself.
One of the limitations of the Pd DSP chain *is* its style of modularity. The stream is broken down into indivisible blocks. The tree is parallel at the top, but as you go down the tree, it becomes more and more serial. There would be a bottleneck where the parallel processes aren't used. In order to get a generic speedup, those "indivisible blocks" have to be divisible. And this is not always possible--
note: not complainin'--hhh--just like to be aware that trying to "make parallel" software built for serial calculations is a lot more work than it's worth. You'd have to start almost from scratch to design an ideal parallelized Pd.
Chuck
One of the limitations of the Pd DSP chain *is* its style of modularity. The stream is broken down into indivisible blocks. The tree is parallel at the top, but as you go down the tree, it becomes more and more serial. There would be a bottleneck where the parallel processes aren't used. In order to get a generic speedup, those "indivisible blocks" have to be divisible. And this is not always possible--
afaict, it is just a problem of scheduling ... scheduling a serialized (topologically sorted) dsp graph is very easy and very efficient ... just iterate over an array (nova) or a memory region (pd) ... it is actually scheduled in advance ...
a parallel dsp graph scheduler would introduce some dispatching code between the nodes ... probably we need to maintain ready/waiting queues that we have to access in order to make our scheduling decisions ... which is way more expensive than just going to the next node in a dsp chain ...
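roughly, the serial case boils down to something like this (struct and function names invented for the illustration, this is neither pd's nor nova's actual code) ...

#include <stddef.h>

typedef struct dsp_node {
    void (*perform)(struct dsp_node *self);  /* computes one block of samples */
    void *state;                             /* object-specific data */
} dsp_node;

/* serialized execution of a topologically sorted graph:
 * no run-time dispatching decisions at all, just walk the array */
static void run_chain(dsp_node *chain, size_t n_nodes)
{
    for (size_t i = 0; i < n_nodes; i++)
        chain[i].perform(&chain[i]);
}

a parallel scheduler would replace that loop with ready/waiting queues, per-node dependency counters and worker threads ... that bookkeeping is exactly the dispatching overhead mentioned above ...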
so for small "parallel" graphs like:

  osc~  line~
    |   /
    |  /
    *~
    |
   dac~
the parallel execution of osc~ and line~ is probably more expensive than running osc~->line~->*~->dac~
so i don't think it is a problem of splitting up indivisible blocks, but rather of combining these indivisible blocks into reasonably large serial chunks ...
You'd have to start almost from scratch to design an ideal parallelized Pd.
yes, probably :)
tim