I checked it out (though I haven't read the _whole_ thread to the end), but in what way can the current tabread4~ interpolation, which is discontinuous even in its 1st derivative, be superior to true cubic interpolation? Even at transpositions near zero, I can't see what the advantage is, nor what it is supposed to minimize.
both are truly cubic interpolations.
IIRC, one kind of cubic interpolation is designed to go through all four points, and the other kind is designed to be pieced together with other cubic interpolations, and Miller confused the two and left the bug there.
Miller's is a true implementation of the former -- his is a Lagrange interpolator which goes through all four points -- it's algebraically identical to the cubic interpolator in csound, and so it should have a similar "sound" to any of the table-reading opcodes in csound that also employ cubic interpolation.
The latter is a Hermite interpolator, which uses the two outside points to approximate the first derivatives at the middle two points -- the resulting curve passes through the middle two points only, not the outside two; this ensures that as segments are pieced together, the first derivative will be continuous at the junctions. It's algebraically identical to the cubic interpolator in SuperCollider.
They're two different approaches -- each has its own frequency response, but both are true cubics. If you want to match all four points AND the first derivatives, you have to use a 5th-order polynomial. The formulas are easily derivable by Gaussian elimination, and it's easy to implement all of these as a library of functions that can be accessed by the relevant objects, so the user can choose which type of interpolation he/she wants to use.
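In case it helps to see them side by side, here's a rough sketch of the two 4-point kernels as I'd write them down from the descriptions above (Python just for readability; these are my own derivations, not the literal Pd, csound, or SuperCollider code, so check them against the sources before trusting them):

def lagrange4(a, b, c, d, f):
    # 4-point Lagrange cubic: the single cubic through all four points
    # a, b, c, d (at positions -1, 0, 1, 2); f in [0, 1] is the fractional
    # position between b and c.
    return (-a * f * (f - 1) * (f - 2) / 6
            + b * (f + 1) * (f - 1) * (f - 2) / 2
            - c * (f + 1) * f * (f - 2) / 2
            + d * (f + 1) * f * (f - 1) / 6)

def hermite4(a, b, c, d, f):
    # 4-point Hermite (Catmull-Rom) cubic: passes through b and c only;
    # the outer points just set the slopes at b and c, so consecutive
    # segments meet with a continuous first derivative.
    m1 = (c - a) / 2.0   # slope at b
    m2 = (d - b) / 2.0   # slope at c
    return ((2*f**3 - 3*f**2 + 1) * b + (f**3 - 2*f**2 + f) * m1
            + (-2*f**3 + 3*f**2) * c + (f**3 - f**2) * m2)

Both return b at f=0 and c at f=1; the difference is only what happens in between, and whether the slope matches up where one segment hands off to the next.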
Matt
Hi Matt
Thanks for the detailed explanation. I still have trouble getting the idea of the Lagrange interpolator in the context of [tabread4~]. You say that it finds the cubic polynomial which hits all four points. But what is the advantage of that? As I understand [tabread4~], if the index is between 5 and 6, it will use the cubic hitting the points at indices 4, 5, 6 and 7. If the index is between 6 and 7, it will use the cubic going through the points at 5, 6, 7 and 8. So for the former, the fact that the curve also hits the points 4 and 7 seems irrelevant, and likewise for the latter with the points 5 and 8, since only the segment between floor(n) and ceiling(n) ever appears in the result. Or is this a misunderstanding on my part and completely wrong? It seems logical to me that discontinuities in the first derivative are avoided in order not to add any partials to the signal. What I don't get is why it is good to hit all four points if the segments "outside" the middle segment are ignored/not used in the result.
I haven't studied those things in school, so please forgive me if I am asking questions based on completely wrong assumptions. I am just trying to understand why [tabread4~] is good for what it is.
Roman
LONG, sorry.
On Mon, Mar 29, 2010 at 7:03 PM, Roman Haefeli reduzierer@yahoo.de wrote:
Hi Matt
Thanks for the detailed explanation. I still have trouble getting the idea of the Lagrange interpolator in the context of [tabread4~]. You say that it finds the cubic polynomial which hits all four points. But what is the advantage of that? As I understand [tabread4~], if the index is between 5 and 6, it will use the cubic hitting the points at indices 4, 5, 6 and 7. If the index is between 6 and 7, it will use the cubic going through the points at 5, 6, 7 and 8. So for the former, the fact that the curve also hits the points 4 and 7 seems irrelevant, and likewise for the latter with the points 5 and 8, since only the segment between floor(n) and ceiling(n) ever appears in the result. Or is this a misunderstanding on my part and completely wrong?
That's right -- they're "used" in the sense that a cubic interpolator makes use of more of the information in the surrounding samples than, say, a linear interpolator does -- but the segments themselves aren't used literally in the output. The same would be true, though, of a 6-point Lagrange polynomial interpolator or a 32-point windowed-sinc interpolator -- you don't use the other segments, just the segment between the two points in question, generated from the information the other samples give you.
It seems logical to me that discontinuities in the first derivative are avoided in order not to add any partials to the signal. What I don't get is why it is good to hit all four points if the segments "outside" the middle segment are ignored/not used in the result.
Here's how I understand it -- the following is as much for me as it is for anyone else, and please, please correct me if I'm wrong:
Both kinds of interpolation add partials via aliasing -- you can think of it as a kind of resampling rather than interpolation per se:
By the Nyquist-Shannon sampling theorem, you can recover a continuous signal from a sampled signal EXACTLY by convolving the sampled signal with a normalized sinc function:
http://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem#Recons...
provided that the original continuous signal was bandlimited and that the sample rate was higher than twice the highest frequency in the original (and to within distortions introduced by quantization). This is to say that if you centered a sinc function over every sample and scaled it to that sample (i.e. multiplied the entire function by the sample value), and added all of those functions up from negative infinity in time to positive infinity in time, you'd have the original continuous signal that was sampled. Another way of thinking of this is that you've perfectly filtered out all of the aliased images above and below positive and negative Nyquist -- convolving with such a sinc function is the same as implementing a perfect brickwall filter whose cutoffs are right at the Nyquist frequency. You're then free to sample this continuous signal at a higher sample rate -- this is exactly the same as implementing a perfect interpolator for upsampling; interpolation does all of this in one step (for downsampling you have to do some extra filtering to get rid of partials above the new target Nyquist).
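If it helps to see that as code, here's a very rough numpy sketch of the "scaled sinc on every sample" picture -- the sinc is truncated (the real one is infinite), and the names and the factor of 4 are arbitrary choices of mine, so treat it as an illustration, not a reference resampler:

import numpy as np

def sinc_upsample(x, ratio=4, half_width=32):
    # Approximate Shannon reconstruction: center a (truncated) normalized
    # sinc on every input sample, scale it by that sample's value, sum them
    # all up, and then read the sum on a grid 'ratio' times denser.
    n_out = len(x) * ratio
    t = np.arange(n_out) / float(ratio)   # output times, in input-sample units
    y = np.zeros(n_out)
    for k, sample in enumerate(x):
        mask = np.abs(t - k) <= half_width    # truncation of the ideal sinc
        y[mask] += sample * np.sinc(t[mask] - k)
    return y

With an untruncated sinc (and an infinitely long input) this would be exact; every shortcut away from that is where interpolators start to differ.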
Unfortunately, the sinc function extends forever in both directions, so you have to approximate it, and libraries like libsamplerate do. However, you can think of each of these interpolators as impulse responses with convolution as well. Imagine that you had a bunch of samples that you were going to interpolate over, with the following condition -- one of the samples was at maximum amplitude (an impulse), and all the others were zero. The impulse response of an interpolator is the continuous-domain result of running that interpolator on the impulse in the discrete domain -- you can easily visualize it in Pd by running the attached patch.
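If you don't want to open Pd, the same experiment is easy to fake offline -- something like the sketch below, which reads a 4-point interpolator (e.g. the lagrange4 or hermite4 from my earlier message) across a lone impulse at finely spaced fractional positions. Again, this is just my own mock-up, not the code behind the attached patch:

import numpy as np

def impulse_response(interp, oversample=64):
    # Run a 4-point interpolator over a discrete unit impulse at many
    # fractional read positions; the output approximates the interpolator's
    # continuous impulse response (it is exactly 4 samples wide).
    x = np.zeros(7)
    x[3] = 1.0                                  # the impulse
    pos = np.arange(1, 5, 1.0 / oversample)     # read positions around it
    out = np.empty(len(pos))
    for i, p in enumerate(pos):
        n = int(np.floor(p))
        f = p - n
        out[i] = interp(x[n - 1], x[n], x[n + 1], x[n + 2], f)
    return pos, out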
The impulse response of an interpolator usually approximates something that looks like part of a sinc function. But since none of them are sinc functions, they all filter out the aliased images less than perfectly. This means that if you made a continuous signal using these interpolators, it would retain a little of the aliased copies of the digital signal's spectrum -- these would generally be pretty high in frequency, but still present. Now, if you resample this (up or down), those high frequencies will necessarily be higher than the new Nyquist (since they run to infinity), so they can fold over and cause audible "wrong" partials. Each interpolator also has a non-flat frequency response in the band below the original Nyquist. If you're reading over the table periodically, the error will also be periodic, and so will show up as harmonics of the fundamental.
One really good way to think about it, then, is in terms of the continuous frequency response of the interpolator. In that long, long discussion a couple of years ago, Chuck Henry made the following post where he showed the frequency response of [tabread4~] vs. [tabread4c~]:
http://lists.puredata.info/pipermail/pd-list/2008-06/063408.html
(look at the graph)
We came up with lots of 4-point schemes -- you can come up with a 7th-degree polynomial that matches all four points, both first derivatives and both second derivatives -- each has a different frequency response. I think we even tried out a cosine-segment interpolator that was derived from zero-padding the frequency domain of the fft of the four points.
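And for anyone who wants to reproduce that kind of graph without digging up the old patches, the crude recipe is just an FFT of the oversampled impulse response. Here's a sketch that builds on the impulse_response mock-up above (the FFT size and normalization are arbitrary choices of mine, and I haven't checked that tabread4c~ is exactly the hermite4 I sketched, so don't expect a pixel-for-pixel match with Chuck's plot):

import numpy as np

def freq_response(interp, oversample=64, nfft=8192):
    # Magnitude response of a 4-point interpolator, taken as the FFT of its
    # oversampled impulse response.  Frequencies come out in cycles per
    # original sample, so the original Nyquist is at 0.5; everything above
    # that is imaging the interpolator fails to filter out.
    _, h = impulse_response(interp, oversample)
    H = np.abs(np.fft.rfft(h, nfft)) / oversample   # normalize DC to ~0 dB
    freqs = np.fft.rfftfreq(nfft, d=1.0 / oversample)
    return freqs, 20 * np.log10(np.maximum(H, 1e-12))

Plotting freq_response(lagrange4) against freq_response(hermite4) should show the same kind of tradeoff Chuck's graph does: differences both in how flat the response is below 0.5 and in how much is left above it.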
I haven't studied those things in school, so please forgive me if I am asking questions based on completely wrong assumptions. I am just trying to understand why [tabread4~] is good for what it is.
I haven't studied them in school either, which is why I worry about the above explanation. I think [tabread4~] is good for what it is for a couple of reasons, neither of them particularly compelling:
- It's better than linear interpolation, and has wide use in other computer-music applications like csound -- people are very used to how it behaves.
- It's similar to the alternative being discussed, but with a different sound; not necessarily "worse" for all sounds.
I have read that the Lagrange interpolators have better stopband attenuation and Hermites have flatter passband response, but I'm not sure this is true -- anyway, google "hermite vs. lagrange".
Matt
On Mon, 2010-03-29 at 21:06 -0400, Matt Barber wrote:
LONG, sorry.
Thanks again for your time and patience.
One really good way to think about it, then, is in terms of the continuous frequency response of the interpolator. In that long, long discussion a couple of years ago, Chuck Henry made the following post where he showed the frequency response of [tabread4~] vs. [tabread4c~]:
http://lists.puredata.info/pipermail/pd-list/2008-06/063408.html
(look at the graph)
Assuming that the goal is to avoid anything above Nyquist, it seems that [tabread4c~] does its job better (blue graph). The area below the curve and to the right of the Nyquist vertical line seems smaller for the blue graph than for the red graph. Or am I being tricked by the logarithmic scale of those graphs?
I haven't studied them in school either, which is why I worry about the above explanation. I think [tabread4~] is good for what it is for a couple of reasons, neither of them particularly compelling:
- It's better than linear interpolation, and has wide use in other
computer-music applications like csound -- people are very used to how it behaves.
This would apply to both [tabread4~] and [tabread4c~], wouldn't it?
- It's similar to the alternative being discussed, but with a
different sound; not necessarily "worse" for all sounds.
It seems intuitively logical to me that discontinuities in the first derivative (which are basically edges in the curve) create high-frequency noise. When listening to both (back then), I had the feeling that more noise was audible with [tabread4~] than with [tabread4c~]. So it still remains unclear to me in what respect [tabread4~] is superior to [tabread4c~], from both a theoretical and an empirical perspective.
Roman