Frank Barknecht wrote:
Hello, cyrille henry wrote:
Frank Barknecht wrote:
Hello, (cc'ing pd-list)
...
That's actually exactly the setup I used when I was hit with problems in netsend/netreceive: I'm running a physics engine and the graphical visualisation in one Pd, and connecting it to another Pd which functions as the synth engine. The synth "patch" is very simple and small, while most of the work is done in the other Pd. Physics is more demanding than sound synthesis in this case.
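For the archive: the link between the two Pds is just Pd's plain FUDI protocol, so any process could feed the synth instance the same way. A minimal Python sketch, assuming the synth Pd has a [netreceive 3000] listening on TCP with a [route note] behind it (the port number and the "note" selector are made up here):

    import socket

    # Connect to the synth Pd's [netreceive 3000] (TCP by default).
    sock = socket.create_connection(("127.0.0.1", 3000))

    # FUDI messages are space-separated atoms terminated by a semicolon.
    # Here: a hypothetical "note <pitch> <velocity>" message for [route note].
    sock.sendall(b"note 60 100;\n")

    sock.close()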
Do you have an example of this? With pmpd or msd, I was able most of the time to build physics patches using very little CPU.
Actually I was more referring to patch size here, sorry for being unclear. The CPU load of msd itself is quite okay.
ok
(It's very high with my experimental ODE-pyexternal, but that is to be expected from rigid body physics. ;)
Well, one problem I imagine with ODE is that, given the algorithm used, the time needed for each iteration may not be constant, i.e. the processing power required depends on the current state of the simulation. Do you have such a problem or not?
cyrille
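To make the question above concrete: one way to check is to time each solver step and look at the spread. A rough Python sketch, assuming some world.step(dt) binding along the lines of PyODE (world/body/collision setup is left out and the call is only illustrative):

    import time

    def measure_step_times(world, dt=0.01, n=1000):
        """Time each physics iteration; the cost may vary with the number
        of contacts and constraints in the current simulation state."""
        times = []
        for _ in range(n):
            t0 = time.perf_counter()
            world.step(dt)          # hypothetical rigid-body step call
            times.append(time.perf_counter() - t0)
        return min(times), sum(times) / len(times), max(times)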
I'm mostly decoupling the two Pds because I don't want Gem to interfere with the audio side. I'm still looking for a good way to get all this into a satisfying package, because with netsend or OSC, timing information also becomes a problem, especially non-block-aligned timing with vline~.
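One workaround for the vline~ side is to send the intended onset along with the message, so the receiving patch can hand vline~ a delay argument instead of starting ramps whenever the packet happens to arrive. A hedged sketch using python-osc (the address, port and argument layout are made up; the receiving patch would unpack the list and pass target, ramp time and delay to vline~):

    from pythonosc.udp_client import SimpleUDPClient

    # Assumed OSC port of the synth Pd.
    client = SimpleUDPClient("127.0.0.1", 9001)

    # Hypothetical message: target value, ramp time (ms) and a delay (ms)
    # relative to a shared logical clock, so vline~ can start the ramp at a
    # sub-block-accurate point rather than at packet-arrival time.
    client.send_message("/synth/freq", [440.0, 5.0, 12.5])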
I personally chose to avoid netsend/netreceive and compute everything on the same CPU/GPU. This is more optimised than having lots of netsend/netreceive pairs, so I get better results when I have lots of data for audio/video synthesis...
Hm, maybe I should try this again. I also did some optimisations on the audio side recently, so running only one Pd might work again.
The good thing is that this will just involve replacing netsend/OSC or whatever with send/receive. ;)
Ciao