On Fri, Jun 8, 2012 at 9:18 AM, Roman Haefeli reduzent@gmail.com wrote:
On Wed, 2012-06-06 at 11:07 -0400, Matt Barber wrote:
On Wed, 2012-06-06 at 09:53 +0200, Jeppi Jeppi wrote:
Hey, I wonder whether there is something similar to Max's ipoke~ (an interpolating buffer~ writer) for Pd. I need it for some physical modelling and resampling stuff; otherwise, I could implement it myself. It seems only interpolated reading is available (tabread4~ and similar), not writing.
This somehow reminds me of the thread about a settable [receive]. Is there really a need for the ability to do interpolated writing? Conceptually, is anything lost if it is lacking? Can't everything that employs interpolated writing be achieved with interpolated reading as well?
Maybe I'm not thinking hard enough...
Roman
If you're using it to model an acoustic space, for instance, an interpolating write is the proper tool for a stationary microphone and moving sources. An interpolated read is more like a moving microphone.
Imagine, for instance, how you would calculate the delay time of a sound source moving toward a microphone, from the microphone's point of view. Say it's 50ms away right now, and in a second it will be 40ms away. The correct model is NOT to set an interpolated variable-delay read to 50ms right now and ramp it to 40ms over one second - that would be looking 50ms into the source's past right now and 40ms into its past one second from now. What you actually need is to look 50ms into the source's past 50ms from now, and 40ms into its past 1040ms from now. To do that with a read, you'd have to delay the signal controlling the variable-delay read itself by the appropriate amount, and that control signal would have to contain the same information you're trying to generate with the read in the first place.
This problem is easily solvable with a variable write (assuming for the moment a stationary read): a variable write is as though you're projecting samples into the read's future, which is exactly what you want.
The same logic applies when you need a variable write into a table rather than a delay line.
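To make that concrete, here's a minimal Python sketch of what I mean by an interpolating write. It's a hypothetical helper, not ipoke~'s actual algorithm (it skips the gap-filling ipoke~ does when the write position jumps by more than one sample): each emitted sample is deposited at write position = current sample index + current distance in samples, split linearly between the two neighbouring buffer slots.

import numpy as np

SR = 44100.0   # sample rate in Hz (assumed)
C = 343.0      # speed of sound in m/s (assumed)

def interp_write(src, dist_m, buflen):
    """Accumulate each source sample into a buffer at the fractional
    position (sample index + distance in samples), splitting it
    linearly between the two neighbouring slots."""
    buf = np.zeros(buflen)
    for n in range(len(src)):
        pos = n + dist_m[n] * SR / C      # arrival time, in samples
        i = int(pos)
        frac = pos - i
        if i + 1 < buflen:
            buf[i]     += src[n] * (1.0 - frac)
            buf[i + 1] += src[n] * frac
    return buf

The read side can then stay fixed (or just scan linearly through the buffer), and the Doppler shift falls out of the varying write position by itself.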
I see how it seems more logical/simple to model a moving source with an interpolating buffer writer. However, it's possible to model both cases [moving source / stationary mic] and [stationary source / moving mic] with a linear buffer writer and an interpolating buffer reader [1].
[1] http://en.wikipedia.org/wiki/Doppler_effect
Roman
Right - you can model the Doppler effect with either. It's the precise timing of a moving sound source that I don't think you can get with just an interpolated read.
Look at the example I gave - say "S" is a source (make it a periodic signal at 100Hz) and "M" is the mic. Let's say S starts 50ms (in terms of speed of sound) from M at T=0s and stays there for 10 seconds. Then at T=10s, S moves linearly toward M such that at T=11s it is 40ms away, at which point it stops and remains stationary. For the second of movement there will be an associated Doppler shift -- but from the point of view of M, the Doppler shift will NOT start at T=10s, but rather at T=10050ms. So if you're trying to do this with an interpolated read, you have to delay the signal controlling the delay by 50ms.
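To make the arithmetic explicit (a quick check of the same numbers, in Python):

# Delay shrinks from 50ms to 40ms while the source moves,
# between T=10s and T=11s at the source.
t_emit_start, t_emit_end = 10.0, 11.0    # seconds, at the source
d_start, d_end = 0.050, 0.040            # delay to the mic, in seconds

t_hear_start = t_emit_start + d_start    # 10.05s: shift starts at M
t_hear_end   = t_emit_end   + d_end      # 11.04s: shift ends at M

# One second of source material is heard in 990ms:
factor = (t_emit_end - t_emit_start) / (t_hear_end - t_hear_start)
print(t_hear_start, t_hear_end, 100.0 * factor)   # -> 10.05  11.04  ~101.01

So the mic hears one second of source material compressed into 990ms: the 100Hz tone comes out at roughly 101Hz, starting 50ms after the source starts to move.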
That's easy enough to do if all motion is linear, but the problem explodes if you want, say, circular motion with a field of stationary mics (e.g. a virtual stereo pair or a B-format ambisonic setup) and room reflections, where direction, polar patterns, and inter-channel timing are all important. How would you model, for instance, a source moving in a spiral around and away from a cardioid microphone? You'd have to delay not just the sound coming from the source but also its polar coordinates, so that you can model the mic response in real time. It comes down to needing the delay you're trying to generate with the variable read to control the variable read itself.
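With a variable write none of that delaying is needed, because distance and angle are both read off at the moment of emission. A rough sketch, extending the one above (the 0.5*(1+cos theta) cardioid response and the mic-at-the-origin geometry are just assumptions for illustration):

import numpy as np

SR, C = 44100.0, 343.0     # sample rate (Hz), speed of sound (m/s), assumed

def write_moving_source(src, xy, buf):
    """xy is an (N, 2) array of source positions in metres; the mic
    sits at the origin facing along +x.  Distance and angle are taken
    at the moment of emission, so nothing has to be delayed."""
    for n in range(len(src)):
        x, y = xy[n]
        dist = np.hypot(x, y)
        theta = np.arctan2(y, x)              # angle off the mic axis
        gain = 0.5 * (1.0 + np.cos(theta))    # assumed cardioid pattern
        pos = n + dist * SR / C               # fractional arrival index
        i = int(pos)
        frac = pos - i
        if i + 1 < len(buf):
            buf[i]     += gain * src[n] * (1.0 - frac)
            buf[i + 1] += gain * src[n] * frac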
You'd want the same ability with a table as you would with a delay line if you wanted to record and process something ahead of time rather than in real time. Also, there aren't many good ways to model a sound source moving faster than the speed of sound aside from an interpolating write.
Room simulation is probably the clearest and most intuitive case for wanting an interpolating write, but one could imagine other physical-modeling scenarios that would require it.
Matt