On Thu, Jun 14, 2012 at 2:41 PM, Miller Puckette msp@ucsd.edu wrote:
I've been thinking about this for some days. I agree there are two fundamentally different approaches (A: dealing with each incoming sample independently, adding some sort of filter kernel into the table for each one; or B: advancing systematically through the table, filling each point by interpolating from the input signal).
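For concreteness, here's a rough C sketch of the two approaches as I understand them; the table size, the 4-point Hann-windowed sinc, the function names, and the plain linear interpolation in (B) are all just placeholder choices, not a proposal:

#include <math.h>

#define TABLE_SIZE 1024
#define KERNEL_HALF 2          /* kernel spans +/- KERNEL_HALF table points */

static const float PI = 3.14159265f;

/* stand-in kernel: a 4-point Hann-windowed sinc (any sinc-like kernel would do) */
static float kernel(float x)
{
    float ax = fabsf(x);
    if (ax >= KERNEL_HALF) return 0;
    if (ax < 1e-6f) return 1;
    return sinf(PI * x) / (PI * x) * 0.5f * (1 + cosf(PI * x / KERNEL_HALF));
}

/* approach A: for each incoming sample, add a kernel centered on the
   (fractional) write position into the table */
void write_a(float *table, double pos, float in)
{
    int center = (int)floor(pos);
    for (int k = center - KERNEL_HALF + 1; k <= center + KERNEL_HALF; k++)
    {
        int idx = ((k % TABLE_SIZE) + TABLE_SIZE) % TABLE_SIZE;
        table[idx] += in * kernel((float)(k - pos));
    }
}

/* approach B: advance through the table, filling each table point the
   write position passed over by interpolating from the input signal
   (plain linear interpolation between the previous and current input) */
void write_b(float *table, double prevpos, double pos, float previn, float in)
{
    for (int idx = (int)ceil(prevpos); idx <= (int)floor(pos); idx++)
    {
        double frac = pos > prevpos ? (idx - prevpos) / (pos - prevpos) : 0;
        int wrapped = ((idx % TABLE_SIZE) + TABLE_SIZE) % TABLE_SIZE;
        table[wrapped] = previn + (float)frac * (in - previn);
    }
}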
I think in approach A it's better not to attempt to normalize for speed, since there would always be a step where you have to differentiate the location pointer to get a speed value, and that's fraught with numerical peril. Plus, we don't know how much the 'user' will know about write pointer speed - perhaps there's a natural way to determine that, in which case the numerically robust way to deal with it is to have the user take care of it as appropriate for the situation.
Anyway, if we're simulating a real moving source (my favorite example being lightning) it's physically correct for the amplitude to go up if the source moves toward the listener, even to the point of generating a sonic boom.
In the other scenario it seems that the result is naturally normalized, in the sense that a signal of value '1' should put all ones in the table (because how else should you interpolate a function whose value is 1 everywhere?)
Scenario (B) would be naturally normalized, but there are a few difficulties with it. First, what would happen if you didn't move the write pointer? In scenario (A) you get the "sonic boom" (depending on the signal and the filter kernel this could fluctuate, and you'd get less of an effect further from "ground zero"). With scenario (B) you never write into the table at all, because without an increment you'll never pass over a sample to write (though note that you will write a sample if the index lands exactly on an integer).
Now, to my mind there are two other things to think about. If someone were to drive the index with white noise, with (A) you're mixing the kernel into the table at random and the result is the emergent effect. It's unclear what (B) should do, though -- first, does a leap backwards from 1024 to 2 interpolate all 1021 intervening samples? If so, then second, does it overwrite those, or mix the result into what's already there?
It seems you would not want it to interpolate over those samples if the table were 1024 samples long and the leap represented a wrap back to the beginning, and I suppose "mixing" vs. "overwriting" could be settable by the user.
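Here's a sketch of one possible policy for (B), building on the sketch above; the "treat a jump of more than half the table as a wrap" rule and the mix flag are only assumptions of mine, just to make the two choices concrete:

/* one possible policy for (B)'s jumps, reusing TABLE_SIZE and the linear
   interpolation from the earlier sketch */
void write_b_jump(float *table, double prevpos, double pos,
    float previn, float in, int mix)
{
    double jump = pos - prevpos;
    if (jump < 0 && jump > -TABLE_SIZE / 2.)
        return;                      /* genuine backward leap: write nothing */
    if (jump <= -TABLE_SIZE / 2.)    /* looks like a wrap back to the start */
    {
        prevpos -= TABLE_SIZE;       /* unwrap so we only cross the seam */
        jump = pos - prevpos;
    }
    for (int idx = (int)ceil(prevpos); idx <= (int)floor(pos); idx++)
    {
        double frac = jump > 0 ? (idx - prevpos) / jump : 0;
        float value = previn + (float)frac * (in - previn);
        int wrapped = ((idx % TABLE_SIZE) + TABLE_SIZE) % TABLE_SIZE;
        if (mix)
            table[wrapped] += value; /* mix into what's already there */
        else
            table[wrapped] = value;  /* overwrite */
    }
}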
Choosing (A) for the moment, for me the design goal would be: if someone wrote a signal equal to 1 and sent it to points 0, a, 2a, 3a, ... within some reasonable range of stretch values _a_, would I end up with a constant (which I would suggest should be 1/a) everywhere in the table? If not, you'd hear some sort of _a_-dependent modulation.
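As a quick check of that criterion, something like this (reusing write_a from the sketch above) would deposit a constant 1 at stride _a_ and print how flat the table ends up; the stretch value 0.7 and the decision to skip the first and last 100 points are arbitrary:

#include <stdio.h>

int main(void)
{
    static float table[TABLE_SIZE];
    double a = 0.7;
    for (double pos = 0; pos < TABLE_SIZE; pos += a)
        write_a(table, pos, 1);      /* constant input signal of 1 */
    float lo = table[100], hi = table[100];
    for (int i = 100; i < TABLE_SIZE - 100; i++)
    {
        if (table[i] < lo) lo = table[i];
        if (table[i] > hi) hi = table[i];
    }
    printf("expected about %f, got min %f max %f\n", 1 / a, lo, hi);
    return 0;
}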
I think you have to put a bound on _a_ - if it's allowed to be unbounded there's no fixed-size kernel that will work, and varying the size of the kernel again involves judging the "velocity" _a_ from the incoming data, which I argued against already.
I think this is right, but this brings up another design problem -- most sinc-based filter kernels have a value of 1 at 0 and 0 at all other integers, which usually means that if you were to write directly to integer indices you'd be writing in single samples rather than a kernel (since the value of the kernel would be 0 at the surrounding places in the table).
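That's easy to see with the stand-in kernel from the first sketch: kernel(0) is 1 and kernel(k) vanishes at every other integer k, so a deposit centered exactly on an integer index only touches that one table point. For example:

/* reuses kernel() and KERNEL_HALF from the first sketch (plus <stdio.h>) */
void show_integer_kernel(void)
{
    for (int k = -KERNEL_HALF; k <= KERNEL_HALF; k++)
        printf("kernel(%d) = %g\n", k, kernel((float)k));
    /* prints 1 at k == 0 and 0 (up to float roundoff) at the other integers */
}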
Matt