Hello, (cc'ing pd-list)
padawan12 hat gesagt: // padawan12 wrote:
Yes, I strongly agree with this, with a couple of caveats. It's something I'm struggling to formulate in other threads, and it is tied up with the process of "parameterisation".
In the old days you basically had three layers: performance data sitting on top of the patch, sitting on top of the hardware. The hardware was an immutable given, and the patch could be seen as a kind of "firmware" on which you could base a performance.
The line between pre-configuration and real-time control has always been blurry and shifting. In some ways it is an artifice of historical design practices, but it is useful for reducing the amount of performance data.
You can see the "patch" as being the data that is factored out of a performance, stuff that will remain the same for every note/event.
On the other hand, a properly flexible synthesiser can address every parameter on every note, or even at every computation tick; the synth is constantly updated. In this model we stop viewing the synth as a fixed instrument. This fits well with the ground between explicit synthesis (human parameterised) and resynthesis (where the parameters are analytic signals).
If you take this model to its logical conclusion, it's an argument against "patches" of any kind whatsoever. Patches don't exist anymore. Instead you have only a rich set of performance/control data.
The problem is, it's unmanageable in most practical cases.
So, the optimal digital synthesiser is a layered model with the "patch" data as an interpreter of performance data, which passes it down to the synthesis subsystem. That synthesis engine/gubbins/guts should be as you describe, unsullied by any design decisions hardcoded into it.
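To make the layering concrete, here is a rough sketch in Python (all names and parameters are invented, it is only meant to show the shape of the idea):

class SynthCore:
    """The DSP guts: nothing musical hardcoded, it only accepts raw parameters."""
    def set_params(self, params):
        print("core receives:", params)

class Patch:
    """Interprets sparse performance data into a full parameter set for the core."""
    def __init__(self, core, defaults):
        self.core = core
        self.defaults = defaults

    def perform(self, event):
        # whatever the performance data leaves out is filled in from the patch
        self.core.set_params({**self.defaults, **event})

core = SynthCore()
patch = Patch(core, {"waveform": "saw", "cutoff": 1200.0, "attack": 0.01})
patch.perform({"pitch": 57, "amp": 0.8})                    # sparse performance data
patch.perform({"pitch": 57, "amp": 0.8, "cutoff": 300.0})   # or override anything per event

In the extreme "no patch" case the defaults are empty and every event carries the complete parameter set.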
This is essential in my model for effective real-time client-side synthesis of sounds in a replicated multiplayer network game. The actual DSP parts that do the hard work are very small, and must be split from the data that tells them what to do; that data comes over the network or from the physics engine.
That's actually exactly the setup I used when I was hit with problems in netsend/netreceive: I'm using a physics engine and graphical visualisation in one Pd and connect this to another Pd, which functions as the synth engine. The synth "patch" is very simple and small, while most of the work is done in the other Pd. Physics is more demanding than sound synthesis in this case.
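In case it is useful to anyone reading along: the second Pd does not even have to be a Pd. [netsend]/[netreceive] speak FUDI, that is, whitespace-separated atoms terminated by a semicolon, over TCP by default, so any program can drive the synth patch. A small Python sketch, assuming the synth patch listens with [netreceive 3000] and dispatches the messages with something like [route pitch cutoff] (port and parameter names are only examples):

import socket

def send_fudi(sock, *atoms):
    # e.g. send_fudi(s, "cutoff", 1200) sends the bytes "cutoff 1200;\n"
    sock.sendall((" ".join(str(a) for a in atoms) + ";\n").encode("ascii"))

s = socket.create_connection(("localhost", 3000))
send_fudi(s, "pitch", 57)
send_fudi(s, "cutoff", 1200)
s.close()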
I am hoping to expand this analysis into a whole chapter on the issues of parameterisation. Many more questions are raised, such as the need for a fourth layer that sits between the synth and the patch interpreter to manage scaling and limits, a kind of real-time type- and bounds-checking and coercion process. Clearly this doesn't belong in the synthesis core either.
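As a sketch of what I imagine that layer doing, in Python again (the parameter ranges are made up for illustration):

PARAM_SPECS = {
    # name: (low, high, type)
    "cutoff": (20.0, 20000.0, float),
    "pitch":  (0, 127, int),
    "amp":    (0.0, 1.0, float),
}

def coerce(name, value):
    """Clamp and type-coerce one incoming value before it reaches the synth core."""
    low, high, typ = PARAM_SPECS[name]
    return min(max(typ(value), low), high)

print(coerce("cutoff", "25000"))  # -> 20000.0
print(coerce("pitch", 60.7))      # -> 60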
Cyrille Henry and Ali Momeni wrote about this, e.g. in their paper "Dynamic Autonomous Mapping Layers for Concurrent Control of Audio and Video Synthesis", where they view this "fourth layer" or mapping layer as a dynamic system itself, that is, something that follows its own rules and can change as well. This is just one example where a naive way of saving state runs into its limits: what if you want to programmatically change your "state" through interpolation or other means? If this state is distributed over many patch files or hardcoded into message boxes, processing the state itself becomes hard or impossible. I admit that Memento doesn't have a solution for this problem yet, and I'm still not sure how it can get one.
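To make the problem a bit more concrete: if a state were available as one plain mapping of parameter names to numbers, the interpolation itself would be trivial, something like this Python sketch (invented names, and again this is not something Memento offers yet):

state_a = {"cutoff": 300.0, "amp": 0.2, "pitch": 48}
state_b = {"cutoff": 4000.0, "amp": 0.9, "pitch": 60}

def interpolate(a, b, t):
    """Linear interpolation between two states, t between 0 and 1."""
    return {k: (1 - t) * a[k] + t * b[k] for k in a}

print(interpolate(state_a, state_b, 0.5))

The hard part is not the arithmetic, it is getting the state out of dozens of message boxes and abstractions and into such a structure in the first place.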
Until this is all explained clearly, the semantic problem is what we mean by a "patch". Most people's synthesisers contain all of the five layers, which they call a patch, so naturally the fixed "patch" values go inside that. It isn't until you have to decouple the interface from the DSP gubbins that you realise the bad software engineering decisions in there.
Now I'll first have to look up what "gubbins" are, but I have a guess ... ;)
Frank Barknecht _ ______footils.org_ __goto10.org__