On Sat, Jul 19, 2008 at 3:39 PM, Matt Barber brbrofsvl@gmail.com wrote:
Chuck,
Thanks again for this. Quick question: out of curiosity, how much would this differ from the one that uses the standard derivative approximations?
Hey, Matt
It's a trade-off. In exchange for more roll-off above the Nyquist frequency, we get poorer performance in the high end of the passband.
Also, if one wanted to put together the one with the standard approximations, would you use the best approximations available for each derivative, or the ones that come from the same "series" of approximations? I don't know what to call them, but one series needs 3 points for the 1st and 2nd derivatives and 5 points for the 3rd and 4th, while the next series up needs 5 points for the 1st and 2nd and 7 for the 3rd and 4th. Can you mix these freely in a 6-point interpolation, using the 5-point approximations for everything?
To use better approximations for the derivative in the 6-point setting, we need to go back to using a 3rd-degree polynomial. That frees up constraints that can be used to set the 1st derivative with better high-frequency response; we lose the continuity of the 2nd and 3rd derivatives in the process.
We can mix our approximations somewhat freely. I haven't come up with a good rationale for choosing the derivative coefficients, yet....
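As a rough sketch of the "standard approximations" route (this is just an illustration, not code from any existing external; the names are mine), the two usual central-difference stencils and a cubic Hermite fit between x[n] and x[n+1] would look something like this. Using the 5-point stencil at both endpoints touches exactly the 6-point neighborhood x[n-2]..x[n+3], and the 3-point stencil can be swapped in at either endpoint, which is the kind of mixing in question:

/* 3-point central difference: f'(n) ~ (x[n+1] - x[n-1]) / 2 */
static double deriv3(const double *x, int n)
{
    return 0.5 * (x[n + 1] - x[n - 1]);
}

/* 5-point central difference:
   f'(n) ~ (-x[n+2] + 8*x[n+1] - 8*x[n-1] + x[n-2]) / 12 */
static double deriv5(const double *x, int n)
{
    return (-x[n + 2] + 8.0 * x[n + 1] - 8.0 * x[n - 1] + x[n - 2]) / 12.0;
}

/* Cubic Hermite segment between x[n] and x[n+1]: y0, y1 are the endpoint
   values, d0, d1 the endpoint derivative estimates (from deriv3 or deriv5),
   and f is the fractional position in [0, 1). */
static double hermite(double y0, double y1, double d0, double d1, double f)
{
    double a = 2.0 * (y0 - y1) + d0 + d1;
    double b = 3.0 * (y1 - y0) - 2.0 * d0 - d1;
    return ((a * f + b) * f + d0) * f + y0;
}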
I guess one important next direction is to work on the anti-aliasing problem -- you mentioned modulating the interpolation coefficients depending on the speed through the table -- would this be a continuous thing, or would there be a pre-defined set of ideal functions among which to choose? Or would this be a matter of figuring out the linear combination of the appropriate anti-aliasing filter (which might need to change with each sample?) and a standard interpolation function? (or am I totally misunderstanding?)
Thanks again,
Matt
The interpolation function is a filter. There would be no need to have an anti-aliasing filter and an interpolation function--there's just the one function. We use the fast interpolating function at speeds <= 1, but we need a general interpolation function, as a function of speed, that converges to the original function as the speed decreases to 1. This would provide the needed generality and flexibility while keeping the same general characteristics of the fast interpolating function on which it is based. I'm open to any ideas on this thing... I think I need to take my eyes off of interpolation for a while, and stop beating up the pd list with tables :)
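One way to make that concrete, as a sketch only (the linear kernel below is just a placeholder standing in for whatever interpolation function is actually used): stretch the kernel by the speed and scale it by 1/speed, so that at speed 1 it reduces to the original function and at higher speeds its cutoff drops proportionally. The catch is that the number of taps grows with the speed, i.e. O(speed) work per output sample:

#include <math.h>

/* Placeholder kernel: plain linear interpolation, support [-1, 1].
   In practice this would be the fast interpolating function above. */
static double h(double t)
{
    double a = fabs(t);
    return (a < 1.0) ? 1.0 - a : 0.0;
}

/* Read table x[0..len-1] at fractional position pos while moving at
   "speed" samples per output sample.  For speed > 1 the kernel is
   stretched by speed and scaled by 1/speed, lowering its cutoff; at
   speed <= 1 this is exactly the original interpolator. */
static double read_at(const double *x, long len, double pos, double speed)
{
    double s = (speed > 1.0) ? speed : 1.0;
    long half = (long)ceil(s);              /* support grows with speed */
    long center = (long)floor(pos);
    double sum = 0.0;
    long n;

    for (n = center - half; n <= center + half + 1; n++) {
        if (n < 0 || n >= len)
            continue;
        sum += x[n] * h((pos - n) / s) / s;
    }
    return sum;
}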
I've got two basic ideas that I'm playing with. The first is to modify the interpolation function continuously, adding a series of "bumps" spaced exponentially outward from the original function. If the bumps have good spectral properties, there could be a way to make a smooth transition and hold the number of calculations to O(log(speed)) instead of O(speed).
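Just to show the bookkeeping (nothing here about the bump shapes or weights, which are the open question): if the bumps sit at offsets that double as they move outward from the read point, covering a footprint of roughly "speed" samples takes only about log2(speed) of them, which is where the O(log(speed)) count would come from:

#include <math.h>
#include <stdio.h>

/* Collect the offsets of exponentially spaced "bumps" around the read
   point.  Doubling the offset each step means ~log2(speed) bumps cover
   a footprint of ~speed samples. */
static int bump_offsets(double speed, long *offsets, int max)
{
    int count = 0;
    long off;
    for (off = 1; off < (long)ceil(speed) && count < max; off *= 2)
        offsets[count++] = off;   /* a bump at +/- off around the read point */
    return count;
}

int main(void)
{
    long offs[64];
    int i, n = bump_offsets(64.0, offs, 64);
    for (i = 0; i < n; i++)
        printf("bump at +/- %ld\n", offs[i]);   /* prints 1 2 4 8 16 32 */
    return 0;
}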
My second idea is to replace the points and their derivatives with filters (low-pass filters for the points and band-pass filters for the derivatives), then fit a polynomial as before and interpolate. Like the existing schemes, this could be turned into a continuous impulse response that varies as a function of speed.
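A sketch of how that might be wired up, with 3-tap placeholder kernels just to show the structure (in the actual idea their responses would vary with speed, and better kernels would be needed): filter the table to get the endpoint values and the endpoint derivatives, then hand those to the same cubic fit as before:

/* Placeholder kernels: a 3-tap low-pass for the "point" values and a
   3-tap differentiator (band-pass-like) for the "derivative" values. */
static const double lp[3] = { 0.25, 0.5, 0.25 };
static const double bp[3] = { -0.5, 0.0, 0.5 };

static double fir3(const double *x, int n, const double *k)
{
    return k[0] * x[n - 1] + k[1] * x[n] + k[2] * x[n + 1];
}

/* Then fit and evaluate the cubic exactly as in the earlier sketch:
   y = hermite(fir3(x, n, lp), fir3(x, n + 1, lp),
               fir3(x, n, bp), fir3(x, n + 1, bp), frac);            */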
Any ideas?
Chuck