Hallo, (cc'ing pd-list)
padawan12 hat gesagt: // padawan12 wrote:
Yes, I strongly agree with this, with a couple of caveats. It's something I'm struggling to formulate on other subject threads and is tied up with the process of "parameterisation".
In the old days you basically had 3 layers:
Performance data sitting on top of the patch, sitting on top of the hardware. The hardware was an immutable given, and the patch could be seen as a "firmware" on which you could base a performance.
The line between pre-configuration and real-time control has always been blurry and shifting. In some ways it is an artifice of historical design practices, but it has utility in reducing performance data.
You can see the "patch" as being the data that is factored out of a performance, stuff that will remain the same for every note/event.
On the other hand, a properly flexible synthesiser can address every parameter on every note, or even at every computation tick, so the synth is constantly updated. In this model we stop viewing the synth as a fixed instrument. This fits well with the ground between explicit synthesis (human parameterised) and resynthesis (where the parameters are analytic signals).
If you take this model to its logical conclusion, it's an argument against "patches" of any kind whatsoever. Patches don't exist anymore. Instead you have only a rich set of performance/control data.
Problem is, it's unmanageable in most practical cases.
So, the optimal digital synthesiser is a layered model with the "patch" data as an interpreter of performance data, which passes it down to the synthesis subsystem. That synthesis engine/gubbins/guts should be as you describe, unsullied by any design decisions hardcoded into it.
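As a rough illustration of that layering, here is a minimal Python sketch (made-up names and values, not anyone's actual patch): the "patch" is just a table of constants that an interpreter merges into every incoming performance event before passing the fully specified parameters down to a design-free synthesis core.

    # Minimal sketch (hypothetical names): the "patch" as an interpreter of
    # performance data sitting on top of a synthesis core that holds no
    # design decisions of its own.

    class SynthCore:
        """Knows nothing about patches; it only renders fully specified events."""
        def render(self, params):
            print("render:", params)

    class PatchInterpreter:
        """Holds the values factored out of the performance (the 'patch')
        and fills them in for every incoming note/event."""
        def __init__(self, core, patch):
            self.core = core
            self.patch = patch

        def event(self, performance_data):
            params = dict(self.patch)        # per-patch constants
            params.update(performance_data)  # per-note or per-tick overrides
            self.core.render(params)

    synth = PatchInterpreter(SynthCore(), {"waveform": "saw", "cutoff": 800.0})
    synth.event({"pitch": 57, "velocity": 0.8})                    # patch defaults apply
    synth.event({"pitch": 60, "velocity": 0.6, "cutoff": 2500.0})  # override per note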
This is essential in my model for effective real-time client-side synthesis of sounds in a replicated multiplayer network game. The actual DSP parts that do the hard work are very small, and must be split from the data that tells them what to do - that comes over the network or from the physics engine.
That's actually exactly the setup I used when I was hit with problems in netsend/netreceive: I'm using a physics engine and graphical visualisation in one Pd, then connect this to another Pd which functions as the synth engine. The synth-"patch" is very simple and small, while most of the work is done in the other Pd. Physics is more demanding than sound synthesis in this case.
I am hoping to expand this analysis into a whole chapter on the issues of parameterisation. Many more questions are raised, such as the need for a fourth layer that sits between the synth and the patch interpreter to manage scaling and limits, a kind of real-time type and bounds checking and coercion process. Clearly this doesn't belong in the synthesis core either.
Cyrille Henry and Ali Momeni wrote about this in e.g. their paper "Dynamic Autonomous Mapping Layers for Concurrent Control of Audio and Video Synthesis", where they view this "fourth layer" or mapping layer as a dynamic system itself, that is, something that follows its own rules and can change as well. This is just one example where a naive approach to state saving runs into its limits: what if you want to programmatically change your "state" through interpolation or other means? If this state is distributed over many patch files or hardcoded into message boxes, processing the state itself becomes hard or impossible. I admit that Memento doesn't have a solution for this problem yet, and I'm still not sure how it can get one.
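To make the "fourth layer" idea a bit more concrete, here is a minimal Python sketch (hypothetical names, not taken from Memento or from the paper) of a mapping layer that scales and bounds raw performance values before they reach the synthesis core, plus an interpolation between two saved states:

    # Sketch (hypothetical names): range checking/coercion between performance
    # data and the synthesis core, plus interpolation between two saved states.

    def clamp(x, lo, hi):
        return max(lo, min(hi, x))

    class MappingLayer:
        """Scales normalised controller values (0..1) into bounded synth parameter ranges."""
        def __init__(self, ranges):
            self.ranges = ranges  # e.g. {"cutoff": (20.0, 20000.0)}

        def coerce(self, name, raw):
            lo, hi = self.ranges[name]
            return clamp(lo + float(raw) * (hi - lo), lo, hi)

    def interpolate(state_a, state_b, t):
        """Blend two saved parameter sets; t=0 gives A, t=1 gives B."""
        return {k: (1 - t) * state_a[k] + t * state_b[k] for k in state_a}

    mapping = MappingLayer({"cutoff": (20.0, 20000.0), "gain": (0.0, 1.0)})
    print(mapping.coerce("cutoff", 0.5))                              # -> 10010.0
    print(interpolate({"cutoff": 200.0}, {"cutoff": 2000.0}, 0.25))   # -> {'cutoff': 650.0}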
Until this is all explained clearly, the semantic problem is what we mean by a "patch". Most people's synthesisers contain all five layers, which they call a patch, so naturally the fixed "patch" values go inside that. It isn't until you have to decouple the interface from the DSP gubbins that you realise the bad software engineering decisions in there.
Now I'll first have to look up what "gubbins" are, but I have a guess ... ;)
Frank Barknecht _ ______footils.org_ __goto10.org__
Frank Barknecht wrote:
Hallo, (cc'ing pd-list)
...
That's actually exactly the setup I used when I was hit with problems in netsend/netreceive: I'm using a physics engine and graphical visualisation in one Pd, then connect this to another Pd which functions as the synth engine. The synth-"patch" is very simple and small, while most of the work is done in the other Pd. Physics is more demanding than sound synthesis in this case.
Do you have an example of this? With pmpd or msd, I was able most of the time to make patches with physics using very little CPU. I'm interested to know what limitations you found in pmpd/msd.
I personally choose to avoid netsend / netreceive and compute everything on the same CPU/GPU. This is more optimised than having lots of netsend/receive, so I've got better results when I have lots of data for audio / video synthesis...
cyrille
On 05.07.2006 at 13:09, cyrille henry wrote:
I personally choose to avoid netsend / netreceive and compute everything on the same CPU/GPU. This is more optimised than having lots of netsend/receive, so I've got better results when I have lots of data for audio / video synthesis...
This is interesting. I have had the opposite experience. To avoid lockups of Pd and audio glitches I have to use more than one instance of Pd, preferably one with audio computation on and the other one off. Also, MIDI interfaces often become completely useless when trying to control Gem stuff. So I use another instance to translate the MIDI into netsend and receive the data to control the things in the other instance which does the rendering. It is an awkward work-around, but according to my experience a necessary one.
Could this be a bit platform specific too? I have this trouble on OS X and have seen the same behavior on M$ Win too. Is it possible that this problem doesn't occur on Linux?
m.
Max Neupert wrote:
On 05.07.2006 at 13:09, cyrille henry wrote:
I personally choose to avoid netsend / netreceive and compute everything on the same CPU/GPU. This is more optimised than having lots of netsend/receive, so I've got better results when I have lots of data for audio / video synthesis...
This is interesting. I have had the opposite experience. To avoid lockups of Pd and audio glitches I have to use more than one instance of Pd, preferably one with audio computation on and the other one off. Also, MIDI interfaces often become completely useless when trying to control Gem stuff. So I use another instance to translate the MIDI into netsend and receive the data to control the things in the other instance which does the rendering. It is an awkward work-around, but according to my experience a necessary one. Could this be a bit platform specific too? I have this trouble on OS X and have seen the same behavior on M$ Win too. Is it possible that this problem doesn't occur on Linux?
Well, I use a big and fat Linux distribution with no specific real-time kernel, so I don't think Linux will be much better than OS X or Windows in this case. I think the difference comes from the patch: I have a very big audio buffer size (~100 ms) to avoid clicks. The sound and video processing share lots of data, and would create a huge netsend / netreceive dataflow.
cyrille
m.
Hallo, cyrille henry hat gesagt: // cyrille henry wrote:
Well, I use a big and fat Linux distribution with no specific real-time kernel, so I don't think Linux will be much better than OS X or Windows in this case.
A stock Linux kernel in version 2.6.14 or higher has much better latency behaviour than Windows.
I think the difference comes from the patch: I have a very big audio buffer size (~100 ms) to avoid clicks. The sound and video processing share lots of data, and would create a huge netsend / netreceive dataflow.
Same here (we do similar things here anyways, I suppose). Maybe I should switch back to a single Pd as well.
Frank Barknecht _ ______footils.org_ __goto10.org__
Frank Barknecht wrote:
Hallo, cyrille henry hat gesagt: // cyrille henry wrote:
Well, I use a big and fat Linux distribution with no specific real-time kernel, so I don't think Linux will be much better than OS X or Windows in this case.
A stock Linux kernel in version 2.6.14 or higher has much better latency behaviour than Windows.
2.6.14 is not so old. I should run some new tests.
I think the difference comes from the patch: I have a very big audio buffer size (~100 ms) to avoid clicks. The sound and video processing share lots of data, and would create a huge netsend / netreceive dataflow.
Same here (we do similar things here anyways, I suppose). Maybe I should switch back to a single Pd as well.
Or maybe I should switch to 2 Pds :-)
cyrille
Ciao
Hallo, cyrille henry hat gesagt: // cyrille henry wrote:
Frank Barknecht wrote:
Hallo, (cc'ing pd-list)
...
That's actually exactly the setup I used when I was hit with problems in netsend/netreceive: I'm using a physics engine and graphical visualisation in one Pd, then connect this to another Pd which functions as the synth engine. The synth-"patch" is very simple and small, while most of the work is done in the other Pd. Physics is more demanding than sound synthesis in this case.
Do you have an example of this? With pmpd or msd, I was able most of the time to make patches with physics using very little CPU.
Actually I was more referring to patch size here, sorry for being unclear. The cpuload of msd itself is very okay. (It's very high with my experimental ODE-pyexternal, but that is to be expected from rigid body physics. ;)
I'm mostly decoupling the two Pds because I don't want Gem to interfere with the audio side. I'm still looking for a good way to get all this into a satisfying package, because of course when using netsend or OSC, timing information also becomes a problem, especially non-block-aligned timing with vline~.
I personally choose to avoid netsend / netreceive and compute everything on the same CPU/GPU. This is more optimised than having lots of netsend/receive, so I've got better results when I have lots of data for audio / video synthesis...
Hm, maybe I should try this again. I also did some optimisations on the audio side recently, so running only one Pd might work again.
The good thing is that this will just involve replacing netsend/OSC or whatever with send/receive. ;)
Frank Barknecht _ ______footils.org_ __goto10.org__
Frank Barknecht wrote:
Hallo, cyrille henry hat gesagt: // cyrille henry wrote:
Frank Barknecht wrote:
Hallo, (cc'ing pd-list)
...
That's actually exactly the setup I used when I was hit with problems in netsend/netreceive: I'm using a physics engine and graphical visualisation in one Pd, then connect this to another Pd which functions as the synth engine. The synth-"patch" is very simple and small, while most of the work is done in the other Pd. Physics is more demanding than sound synthesis in this case.
Do you have an example of this? With pmpd or msd, I was able most of the time to make patches with physics using very little CPU.
Actually I was more referring to patch size here, sorry for being unclear. The cpuload of msd itself is very okay.
ok
(It's very high with my experimental ODE-pyexternal, but that is to be expected from rigid body physics. ;)
Well, one problem I imagine with ODE is that, given the algorithm used, the time needed for each iteration may not be constant, i.e. the processing power depends on the current state of the simulation. Do you have such a problem or not?
cyrille
I'm mostly decoupling the two Pds because I don't want Gem to interfere with the audio side. I'm still looking for a good way to get all this into a satisfying package, because of course when using netsend or OSC, timing information also becomes a problem, especially non-block-aligned timing with vline~.
I personally choose to avoid netsend / netreceive and compute everything on the same CPU/GPU. This is more optimised than having lots of netsend/receive, so I've got better results when I have lots of data for audio / video synthesis...
Hm, maybe I should try this again. I also did some optimisations on the audio side recently, so running only one Pd might work again.
The good thing is that this will just involve replacing netsend/OSC or whatever with send/receive. ;)
Ciao
Hallo, cyrille henry hat gesagt: // cyrille henry wrote:
(It's very high with my experimental ODE-pyexternal, but that is to be expected from rigid body physics. ;)
Well, one problem I imagine with ODE is that, given the algorithm used, the time needed for each iteration may not be constant, i.e. the processing power depends on the current state of the simulation. Do you have such a problem or not?
It is a problem in theory, yes, because contact joints get created depending on the current position of objects etc. But in practice I didn't hit it yet, which might be related to the fact that using much more than, say, 75-100 geometries is too much for my system anyway. ;(
But working with rigid bodies is very interesting, because for example you can make bouncing balls that not only have position, speed and acceleration (3x3=9) parameters, but also current orientation, rotation speed and rotation forces. So even with just one mass point, or rather body, one gets a lot more data to use for control or audio processes (like using acceleration for volume and rotation speed for the frequency of an oscillator).
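For illustration, a small PyODE-style sketch (hypothetical scaling values, not the actual external) of pulling exactly that extra data off a single rigid body and turning it into control parameters:

    # Sketch: map rigid-body state from PyODE to audio control parameters
    # (speed -> volume, spin -> oscillator frequency); scaling values are made up.
    import math
    import ode  # PyODE

    world = ode.World()
    world.setGravity((0.0, -9.81, 0.0))

    ball = ode.Body(world)
    m = ode.Mass()
    m.setSphere(2500.0, 0.05)          # density, radius
    ball.setMass(m)
    ball.setPosition((0.0, 1.0, 0.0))
    ball.setAngularVel((0.0, 5.0, 0.0))

    dt = 1.0 / 60.0
    for _ in range(120):
        world.step(dt)
        vx, vy, vz = ball.getLinearVel()
        wx, wy, wz = ball.getAngularVel()
        speed = math.sqrt(vx*vx + vy*vy + vz*vz)
        spin = math.sqrt(wx*wx + wy*wy + wz*wz)
        volume = min(1.0, speed / 10.0)    # e.g. send to a gain
        freq = 220.0 + 50.0 * spin         # e.g. send to an oscillator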
Frank Barknecht _ ______footils.org_ __goto10.org__
Frank Barknecht wrote:
Hallo, cyrille henry hat gesagt: // cyrille henry wrote:
(It's very high with my experimental ODE-pyexternal, but that is to be expected from rigid body physics. ;)
Well, one problem I imagine with ODE is that, given the algorithm used, the time needed for each iteration may not be constant, i.e. the processing power depends on the current state of the simulation. Do you have such a problem or not?
It is a problem in theory, yes, because contact joints get created depending on the current position of objects etc. But in practice I didn't hit it yet, which might be related to the fact that using much more than, say, 75-100 geometries is too much for my system anyway. ;(
that's a big problem.
But working with rigid bodies is very interesting, because for example you can make bouncing balls that not only have position, speed and acceleration (3x3=9) parameters, but also current orientation, rotation speed and rotation forces. So even with just one mass point, or rather body, one gets a lot more data to use for control or audio processes (like using acceleration for volume and rotation speed for the frequency of an oscillator).
You can create a cube with 8 point-like spheres. It's not really the same as rigid body simulation, but it's also a solution to create rotation etc. The problem comes from the interaction between these structures...
I think the solution will come with physics stuff hardcoded in the graphics card.
cyrille
Ciao
Hallo, cyrille henry hat gesagt: // cyrille henry wrote:
Frank Barknecht wrote:
It is a problem in theory, yes, because contact joints get created depending on the current position of objects etc. But in practice I didn't hit it yet, which might be related to the fact that using much more than, say, 75-100 geometries is too much for my system anyway. ;(
that's a big problem.
As soon as I've decided on the final interface of the ODE-Pd-object, I guess I will try whether moving from Python to C++ will give a substantial performance boost. Though I doubt it will.
Frank Barknecht _ ______footils.org_ __goto10.org__
As soon as I've decided on the final interface of the ODE-Pd-object, I guess I will try whether moving from Python to C++ will give a substantial performance boost. Though I doubt it will.
I doubt it too. Most of the time in PyODE is spent in collision detection inside the ODE library, very little in the Python interface.
greetings, Thomas
Hallo, Thomas Grill hat gesagt: // Thomas Grill wrote:
As soon as I've decided on the final interface of the ODE-Pd-object, I guess I will try whether moving from Python to C++ will give a substantial performance boost. Though I doubt it will.
I doubt it too. Most of the time in PyODE is spent in collision detection inside the ODE library, very little in the Python interface.
Yes, that's how I think it is, too. Python has the advantage of being much easier to program, especially for a huge library like ODE, if you use some introspection trickery to provide a Pd interface to all the methods ODE provides with just a handful of code lines.
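The trick, very roughly (a sketch with a hypothetical dispatch function, not the actual pyext code), is to let Python's getattr() route an incoming Pd message selector to the PyODE method of the same name instead of writing one wrapper per method:

    # Sketch of the "introspection trickery": dispatch a Pd-style message
    # (selector + arguments) to the matching method of a PyODE object.
    import ode

    def dispatch(target, selector, *args):
        """Call target.<selector>(*args) if such a method exists."""
        method = getattr(target, selector, None)
        if not callable(method):
            raise AttributeError("no such ODE method: %s" % selector)
        return method(*args)

    world = ode.World()
    body = ode.Body(world)

    # A Pd message like [setPosition 0 1 0( would arrive as a selector plus
    # floats; a real binding would still have to pack them into a tuple.
    dispatch(body, "setPosition", (0.0, 1.0, 0.0))
    print(dispatch(body, "getPosition"))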
Frank Barknecht _ ______footils.org_ __goto10.org__
On Tue, 18 Jul 2006, Frank Barknecht wrote:
Yes, that's how I think it is, too. Python has the advantage of being much easier to program, especially for a huge library like ODE, if you use some introspection trickery to provide a Pd interface to all the methods ODE provides with just a handful of code lines.
At University of Ottawa we use introspection to provide an interface between LTIlib (http://ltilib.sourceforge.net/) and PureData.
One issue we have is that LTIlib is really made by C++ people to be used by C++ people, so the Python bindings are lagging behind (being made by a separate person for his own needs), and the Ruby bindings that we had to make (or rather, that Heriniaina Andrianirina made) are not quite complete. All bindings are made using SWIG, which is a tool that helps pretend that C++ supports introspection (ha!ha!ha!).
Once the Ruby bindings are made, the other problem is that Ruby, like Python, is oblivious to typing and overloading, which requires strange hacks in order to keep everything mostly automatically introspective. That's about 200 lines. The remaining 500 lines do the bindings with PureData and the integration with existing GridFlow structures.
A third problem is that LTIlib doesn't prevent people from making it segfault in any way, and that doesn't make it so suitable for embedding in interactive coding systems (e.g. PureData,Ruby,Python), so additional checks will have to be added in order to prevent the most common mistakes.
Of course, this is all being built using the research funds of (you guessed it) Alexandre Castonguay.
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
On Tue, 18 Jul 2006, Thomas Grill wrote:
As soon as I've decided on the final interface of the ODE-Pd-object I guess I will try, if moving from Python to C++ will give a substantial performance boost. Though I doubt it will.
I doubt too. Most of the time in PyODE is spent in collision detection inside the ODE library, very little in the Python interface.
Maybe it's a good idea to check whether collision detection can be turned off for situations that don't need it.
Also, make sure its collision detection algorithms are O(n log n). I can't imagine them being O(n*n), but you never know: if the lib is really new and/or always used with low object counts, it's possible that they didn't bother with O(n log n). (I didn't look at their code.)
_ _ __ ___ _____ ________ _____________ _____________________ ... | Mathieu Bouchard - tél:+1.514.383.3801 - http://artengine.ca/matju | Freelance Digital Arts Engineer, Montréal QC Canada
On Tue, Jul 18, 2006 at 09:09:12AM -0400, Mathieu Bouchard wrote:
On Tue, 18 Jul 2006, Thomas Grill wrote:
Maybe it's a good idea to check whether collision detection can be turned off for situations that don't need it.
You can ask ODE to detect and do this for you automatically.
Also, make sure its collision detection algorithms are O(n log n). I can't imagine them being O(n*n), but you never know: if the lib is really new and/or always used with low object counts, it's possible that they didn't bother with O(n log n). (I didn't look at their code.)
ODE has two high-level collision/culling algorithms to choose from. One is the most obvious one and is O(n^2). The other uses a nifty method of storing which objects are near each other (and culling out those that couldn't possibly collide) and runs at O(n), which is better than O(n log n) for larger numbers of bodies.
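In PyODE, roughly speaking, the choice between those two broad-phase strategies is just a matter of which space class you instantiate; a sketch (object counts and radii are made up):

    # Sketch: ode.SimpleSpace() tests every pair of geoms (the obvious O(n^2)
    # approach); ode.HashSpace() culls pairs that cannot possibly collide.
    import ode

    world = ode.World()
    space = ode.HashSpace()        # instead of ode.SimpleSpace()

    for i in range(100):
        body = ode.Body(world)
        body.setPosition((i * 0.2, 0.0, 0.0))
        geom = ode.GeomSphere(space, radius=0.15)
        geom.setBody(body)

    pairs = []
    def near_callback(collector, g1, g2):
        # called only for pairs that survived the broad-phase culling
        if ode.collide(g1, g2):
            collector.append((g1, g2))

    space.collide(pairs, near_callback)
    print(len(pairs), "potentially colliding pairs")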
There is another excellent clustering collision detection algorithm which speeds up collision detection drastically in a similar way, described in the book Game Programming Gems II, called recursive dimensional clustering. It can be used on spaces of arbitrary dimension and allows you to use fast sorting algorithms on cached lists to do the grunt work. I am hoping one day someone (maybe me) will make a new ODE space which uses the RDC method and do some profiling against the existing methods. My intuition says it will be quicker in many cases.
There are also many other ways to increase the speed of ODE at the expense of integrator stability and accuracy. Check the documentation for more info.
Best,
Chris.
chris@mccormick.cx http://mccormick.cx