On Wed, Apr 28, 2021 at 11:14 AM Miller Puckette msp@ucsd.edu wrote:
On Wed, Apr 28, 2021 at 10:56:58AM -0500, Charles Z Henry wrote: My 2 cents...
The 4-point interpolation scheme gets radically better if the signal it's used on is oversampled (error goes down asymptotically by 24 dB for each doubling of sample rate) - so my own strategy is simply to 4x upsample everything I send through tabread4~ or delread4~. This moves the "problem" to that of designing an upsampling filter, which is much easier than a general interpolator.
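Just to put a number on that point, here's a rough numpy sketch (not Pd code) that measures the worst-case error of a 4-point Lagrange cubic - written in the same shape as the tabread4~ interpolation formula, if I've transcribed it right - on a sinewave tabulated at a few oversampling factors. Each doubling should shave roughly 24 dB off the peak error:

import numpy as np

def interp4(tab, idx):
    # 4-point (cubic Lagrange) interpolation at fractional index idx,
    # in the same form as the tabread4~ inner loop
    n = int(np.floor(idx))
    f = idx - n
    a, b, c, d = tab[n-1], tab[n], tab[n+1], tab[n+2]
    cmb = c - b
    return b + f * (cmb - (1.0/6.0) * (1.0 - f) *
                    ((d - a - 3.0*cmb) * f + (d + 2.0*a - 3.0*b)))

def peak_error_db(oversample, freq=0.1, npts=1 << 14, ntrials=20000):
    # sinewave at 'freq' cycles/sample of the base rate, tabulated at
    # 'oversample' times that rate, then read back at random fractional points
    w = 2 * np.pi * freq / oversample
    tab = np.sin(w * np.arange(npts))
    pos = np.random.uniform(2, npts - 3, ntrials)
    err = np.array([interp4(tab, p) for p in pos]) - np.sin(w * pos)
    return 20 * np.log10(np.max(np.abs(err)))

for os in (1, 2, 4, 8):
    print("%dx oversampling: peak error %6.1f dB" % (os, peak_error_db(os)))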
I found a related problem this fall, teaching a University Physics I class that got thrown my way at the last minute... I decided to make a couple of lessons about numerically integrating systems of differential equations.
The first one (Euler's method) worked just fine, but later on I wanted to show that we could move past the first-order derivative approximation and get better results than just upsampling. Then my simulations started blowing up during class, and I realized I don't understand implicit methods as well as I thought I did. Whoops.
So, the related problem is: what is the best truncated differentiation kernel on [-a, 1]? Once sampled, it gives you the coefficients of a numerical derivative scheme - and since the support reaches one sample into the future, that scheme can be re-arranged into an implicit method.
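Here's a rough numpy sketch of what I mean by getting coefficients from the sampled kernel - just Taylor/Vandermonde matching on integer nodes, not tied to any particular choice of kernel:

import numpy as np

def diff_weights(nodes):
    # weights c_k such that sum_k c_k * f(nodes[k]) approximates f'(0),
    # exact for polynomials up to degree len(nodes)-1
    # (Taylor matching: sum_k c_k * nodes[k]**j = 1 if j == 1 else 0)
    nodes = np.asarray(nodes, dtype=float)
    m = len(nodes)
    V = np.vander(nodes, m, increasing=True).T
    rhs = np.zeros(m)
    rhs[1] = 1.0
    return np.linalg.solve(V, rhs)

# e.g. a = 2: nodes at -2, -1, 0 plus the "future" point +1
print(diff_weights([-2, -1, 0, 1]))   # [ 1/6, -1, 1/2, 1/3 ]
# The nonzero weight on the +1 node is what drags the unknown y[n+1]
# into the update equation, so you end up solving for it - that's the
# implicit part.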
In either case, it goes back to the spectrum of the kernel. The derivative approximation used in Euler's method is only good over a small range of frequencies near zero, so the signal has to be upsampled in order to produce good results. By making the kernel longer, you can reach higher frequencies with a spectrum that more closely matches 2*pi*i*f (although I haven't found a best scheme for that either). I think this question is different from the one Runge-Kutta methods answer, and I think there's something relevant here to find.
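As a quick check on that, here's a rough numpy sketch comparing the spectrum of Euler's forward difference against a longer one-sided kernel (the standard 4-point weights on [-2, 1] from above). The ideal differentiator has response i*w, i.e. 2*pi*i*f:

import numpy as np

# H(w) = sum_k c_k * exp(i*w*k) is the frequency response of a derivative
# kernel with weight c_k at integer offset k; ideally it would equal i*w.
def H(nodes, coeffs, w):
    return np.exp(1j * np.outer(w, nodes)) @ np.asarray(coeffs, dtype=float)

w = np.array([0.05, 0.2, 0.5, 1.0, 2.0])    # radians/sample (pi = Nyquist)
schemes = {
    "forward difference (Euler), nodes [0, 1]": ([0, 1], [-1, 1]),
    "4-point kernel, nodes [-2, -1, 0, 1]":     ([-2, -1, 0, 1], [1/6, -1, 1/2, 1/3]),
}
for name, (nodes, coeffs) in schemes.items():
    rel_err = np.abs(H(nodes, coeffs, w) - 1j * w) / w
    print(name)
    print("  relative error vs i*w:", np.round(rel_err, 4))

The longer kernel tracks i*w a good deal further up the spectrum before the error takes off, which is the effect I'm getting at - it just doesn't tell me which truncated kernel is best.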