It occurs to me that there is one very obvious function that minimizes the squared error for a 4-point interpolator. A 4-point interpolator's impulse response has to be 0 outside the interval [-2,2].
So, E = |f(x) - sinc(x)|^2 is minimized when
f(x) = { sinc(x), -2 < x < 2;  0, elsewhere }
I may be missing something but I'm afraid the E in your formula is not the error that is supposed to be minimized.
The ideally interpolated signal (which is the one in reference to which the error has to be minimized) is not just a sinc: it is the sum of an infinite series of sinc's centered at the sampled points and scaled with the sampled values.
(I won't try to write it in a latex-like fashion; I would certainly get it wrong - not because of latex syntax, I mean I would get it wrong even if I tried to write it down manually)
Please correct me if I am wrong
On Wed, Mar 31, 2010 at 5:12 PM, Matteo Sisti Sette matteosistisette@gmail.com wrote:
It occurs to me that there is one very obvious function that minimizes the squared error for a 4-point interpolator. A 4-point interpolator's impulse response has to be 0 outside the interval [-2,2].
So, E = |f(x) - sinc(x)|^2 is minimized when
f(x) = { sinc(x), -2 < x < 2;  0, elsewhere }
I may be missing something but I'm afraid the E in your formula is not the error that is supposed to be minimized.
Sorry, I often go kind of fast-and-loose with the math, but I think you'll see it's true within a certain context, which you may or may not accept.
The ideally interpolated signal (which is the one in reference to which the error has to be minimized) is not just a sinc: it is the sum of an infinite series of sinc's centered at the sampled points and scaled with the sampled values.
Let x be the series of samples, each multiplied by Dirac-delta functions at the sample times. Let S be the convolution operator which convolves a function by sinc(t) and let F be our arbitrary convolution operator which convolves by an interpolation function f(t).
Then, the quantities we need to compare are Sx and Fx where we want to minimize the L2 norm, the integral of the squared error (Sx - Fx)^2
|Sx-Fx|^2 = |(S-F)x|^2
The error depends on x the signal. Here, I want to make the *convenient* assumption that the spectrum of x is flat, since we want some kind of generality and we want to minimize average error across frequencies. This would make the problem equivalent to using just *one* dirac-delta in place of x and we would get the problem to reduce back to just the difference of the impulse responses
|sinc(t)-f(t)|^2
For a little while, I was going in circles on how to minimize operator norms, but it's not quite the right problem for that and I'd probably spend all day on it, that way :)
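To make the reduced problem concrete, here is a rough numpy sketch of the integral of |sinc(t) - f(t)|^2 for two example kernels supported on [-2,2] (the kernel choices, grid, and limits are my own arbitrary picks, just for illustration):

    import numpy as np

    # crude numerical integration grid, wide enough that the sinc tails count
    t = np.linspace(-50, 50, 1000001)
    dt = t[1] - t[0]
    ideal = np.sinc(t)  # numpy's sinc(t) = sin(pi t)/(pi t)

    def l2_error(f):
        return np.sum((ideal - f) ** 2) * dt

    truncated = np.where(np.abs(t) < 2, np.sinc(t), 0.0)  # truncated sinc
    linear = np.maximum(1 - np.abs(t), 0.0)               # linear-interpolation kernel

    print("truncated sinc:", l2_error(truncated))
    print("linear:        ", l2_error(linear))

Among all kernels vanishing outside [-2,2], the truncated sinc makes the integrand zero on [-2,2], so nothing can beat it on this particular measure.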
(I won't try to write it in a latex-like fashion; I would certainly get it wrong - not because of latex syntax, I mean I would get it wrong even if I tried to write it down manually)
I'm reluctant to try latex because it looks like too much work, but I think lyx (a wysiwyg latex editor) is more my speed.
Charles Henry wrote:
The error depends on x the signal. Here, I want to make the *convenient* assumption that the spectrum of x is flat, since we want some kind of generality and we want to minimize average error across frequencies. This would make the problem equivalent to using just *one* dirac-delta in place of x and we would get the problem to reduce back to just the difference of the impulse responses
|sinc(t)-f(t)|^2
Ah ok.
This *convenient* assumption is equivalent to (or at least implies) assuming that the only sample that matters for interpolating the signal between -2 and 2 is the one sample at 0. This seems to me too strong an assumption.
I'm not saying that your conclusion is wrong (though I suspect it is).
Let's take a step back:
Here, I want to make the *convenient* assumption that the spectrum of x is flat
Stated this way, it sounds reasonable, doesn't it? If it does, then it means that by "flat spectrum" you mean the _power spectrum_ of x considered as a _stochastic process_ rather than a deterministic signal.
Brought to the time domain, assuming x has a flat power spectrum means assuming x is white noise (btw a closer-to-reality assumption would be pink noise - but that's not the point here). Not a Dirac delta.
So minimizing the error would mean minimizing the power, or probably the energy, of the error regarded as a stochastic process.
Though I should have the notions to go a bit further in at least _formulating_ (not solving) the problem, those notions are a bit rusty, if not completely gone from my head :(
But I'm sure it is not equivalent to minimizing the integral of the difference between the operators applied to a delta function.
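For what it's worth, that expectation is easy to estimate numerically even without formulating anything. A rough Monte Carlo sketch (mine; the truncated sinc is just an example kernel, and edge effects are ignored):

    import numpy as np

    rng = np.random.default_rng(0)
    n_samples = 64   # samples per realization
    oversample = 16  # evaluation points per sample period
    trials = 200

    n = np.arange(n_samples)
    t = np.arange(n_samples * oversample) / oversample

    def reconstruct(x, kernel):
        # sum_k x[k] * kernel(t - k)
        return (x[None, :] * kernel(t[:, None] - n[None, :])).sum(axis=1)

    def f(u):  # example kernel: the truncated sinc
        return np.where(np.abs(u) < 2, np.sinc(u), 0.0)

    total = 0.0
    for _ in range(trials):
        x = rng.standard_normal(n_samples)  # white-noise samples
        e = reconstruct(x, np.sinc) - reconstruct(x, f)
        total += np.mean(e ** 2)
    print("estimated expected squared error:", total / trials)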
I get what you're saying too, and I'm at least a little skeptical myself. But as I think about it generally, my entire approach to looking at these problems has been very similar.
I basically thought that when comparing interpolators, I could disregard the signals involved and just look at the properties of the impulse responses (or convolution kernels or spectra, etc...). So, if I can't do that, I really have to rethink what I know.
On Thu, Apr 1, 2010 at 10:44 AM, Matteo Sisti Sette matteosistisette@gmail.com wrote:
Here, I want to make the *convenient* assumption that the spectrum of x is flat
Stated this way, it sounds reasonable, doesn't it? If it does, then it means that by "flat spectrum" you mean the _power spectrum_ of x considered as a _stochastic process_ rather than a deterministic signal.
When it comes to the general class of functions with flat spectra, the only difference is in phase, right? But the error is the same in time domain as in frequency domain thanks to the isometric property of the Fourier transform. Our interpolation is the same as a convolution, so we're still just multiplying our spectra and the phase comes out differently in each frequency.
So, when we integrate the error^2 in the frequency domain, the phase makes no contribution, and then, it's really just the same thing as the error in the time domain. Then, all flat spectra are equivalent for this problem. I really am enjoying this math discussion, and I do want to be corrected or shown something I don't see yet. Please let me know if there's something wrong with what I'm saying.
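Here is the numerical sanity check I would do (toy filter responses, chosen arbitrarily): the squared error comes out the same in both domains, and the same for every random phase.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 1024
    freqs = np.fft.fftfreq(N)
    H_ideal = (np.abs(freqs) < 0.25).astype(float)  # toy "ideal" response
    H_interp = 0.9 * H_ideal                        # toy approximate response

    for trial in range(3):
        X = np.exp(2j * np.pi * rng.random(N))  # flat magnitude, random phase
        E = (H_ideal - H_interp) * X            # error spectrum
        err_freq = np.sum(np.abs(E) ** 2) / N   # Parseval normalization for np.fft
        err_time = np.sum(np.abs(np.fft.ifft(E)) ** 2)
        print(err_freq, err_time)               # equal, trial after trial

(The time signal comes out complex because X isn't made conjugate-symmetric, but the norm argument is the same.)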
Brought to the time domain, assuming x has a flat power spectrum means assuming x is white noise (btw a closer-to-reality assumption would be pink noise - but that's not the point here). Not a Dirac delta.
So minimizing the error would mean minimizing the power, or probably the energy, of the error regarded as a stochastic process.
Though I should have the notions to go a bit further in at least _formulating_ (not solving) the problem, those notions are a bit rusty, if not completely gone from my head :(
But I'm sure it is not equivalent to minimizing the integral of the difference between the operators applied to a delta function.
-- Matteo Sisti Sette matteosistisette@gmail.com http://www.matteosistisette.com
Charles Henry wrote:
When it comes to the general class of functions with flat spectra, the only difference is in phase, right? But the error is the same in time domain as in frequency domain thanks to the isometric property of the Fourier transform. Our interpolation is the same as a convolution, so we're still just multiplying our spectra and the phase comes out differently in each frequency.
I'm not sure I understand what you're saying here about the phase, but I think the misleading part of your reasoning is that you take a concept that makes sense in the context of stochastic processes, namely assuming a "flat spectrum", and uncritically apply it in the context of deterministic signals, where it has a completely different meaning.
You're trying to restrict the analysis to a convenient (but reasonable) class of signals, and to assume that the signal to be interpolated, x, belongs to that class. Right?
It doesn't make any sense, as far as I can see, to assume that the signal being interpolated belongs to the class of functions whose spectrum has a flat modulus (and any phase). Why not assume then, for example, that x(t) is a constant? (please don't take my tone as sarcastic)
What does make some sense (it is a strong hypothesis but discussing its plausibility would bring the discussion to a much higher level) is to treat the signal x as a stochastic process with a given power spectrum - such as flat, or pink.
But that means that the quantity you're minimizing is no longer an integral of the signal minus some other signal all squared: it is the expectation of something.
The power spectrum of a stochastic process x(t) is not the Fourier transform of x(t), it is the Fourier transform of the autocorrelation function of x (or something like that).
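It is indeed the Fourier transform of the autocorrelation (the Wiener-Khinchin theorem). As a numerical sanity check, a numpy sketch using the circular sample autocorrelation, for which the identity is exact:

    import numpy as np

    rng = np.random.default_rng(2)
    N = 4096
    x = rng.standard_normal(N)

    power = np.abs(np.fft.fft(x)) ** 2 / N
    # circular sample autocorrelation r[k] = (1/N) sum_n x[n] x[(n+k) mod N]
    acorr = np.array([np.dot(x, np.roll(x, -k)) for k in range(N)]) / N
    print(np.allclose(power, np.fft.fft(acorr).real))  # True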
You're trying to restrict the analysis to a convenient (but reasonable) class of signals, and to assume that the signal to be interpolated, x, belongs to that class. Right?
Well, sort of. What works well as an interpolator for one signal may not work well for another. The point I started from was asking the question, what would make a good measure of the error when we use a given interpolator?
So, if I just wanted to average the squared error across all frequencies, I thought the problem would be equivalent to this one:
E = |f(x) - sinc(x)|^2 is minimized when
f(x) = { sinc(x), -2 < x < 2;  0, elsewhere }
And then it's the same as having an operator acting on a flat-spectrum signal.
It doesn't make any sense, as far as I can see, to assume that the signal being interpolated belongs to the class of functions whose spectrum has a flat modulus (and any phase). Why not assume then, for example, that x(t) is a constant? (please don't take my tone as sarcastic)
What does make some sense (it is a strong hypothesis but discussing its plausibility would bring the discussion to a much higher level) is to treat the signal x as a stochastic process with a given power spectrum - such as flat, or pink.
So, I assumed the signal spectrum flat so that I could average over all the frequencies. True, it doesn't fit the actual use cases or give us the error in a signal we'd actually like to see - it's just sort of a toy problem - but it goes back to the reason why we're looking at it in the first place: to consider what happens when we just choose one measure (L2-normed error in signal reconstruction, averaged across all frequencies) and then find the best result.
This class of functions to consider is useful if we're going for rigorous math here... but maybe we've strayed too far outside the topic and should just stick to calculus?
Suppose we choose our metric and work it out. If the correct result doesn't behave well or doesn't fit our criteria, then how should we create a better measure?
But that means that the quantity you're minimizing is no longer an integral of the signal minus some other signal all squared: it is the expectation of something.
The power spectrum of a stochastic process x(t) is not the Fourier transform of x(t), it is the Fourier transform of the autocorrelation function of x (or something like that).
The hardest class I ever had was stochastic analysis (as recent as 4 years ago), where we solved problems like this. Fundamentally, it's not too hard, but the details of the calculus are tricky. I'd prefer to stay away unless there's a real good reason to do so :)
Charles Henry wrote:
The hardest class I ever had was stochastic analysis (as recent as 4 years ago), where we solved problems like this. Fundamentally, it's not too hard, but the details of the calculus are tricky. I'd prefer to stay away unless there's a real good reason to do so :)
Well if you want to stay away from stochastic processes and consider the signal as a deterministic function, then you'll have to make assumptions that make sense for functions.
And a flat spectrum isn't. As you said at the very beginning (almost), assuming it has a flat spectrum implies it is a Dirac delta.
Finding the interpolator that best interpolates a Dirac delta is finding the interpolator that best matches the ideal interpolator (the sinc), hence your result.
Ok, we cannot find an interpolator that is optimum for all classes of functions, so we have to choose some class of functions. Even if your reasoning about phase were right (allowing the result to be extended not only to the delta function but to all functions with a flat spectrum, whatever the phase), I don't think the resulting class of functions is much more general or much more interesting.
I think people often use a sinusoid to measure the quality of an interpolator (e.g. in Miller's book you find tables with the signal-to-noise ratio of the interpolator measured on a sinusoid, if I remember correctly). Maybe you could solve the problem of finding the best interpolator for a sinusoid. That would make a lot more sense than the best interpolator for a Dirac delta. (note that I don't know if the result turns out to be the same)
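That measurement is simple to reproduce. A rough numpy sketch (mine, not Miller's exact procedure; the cubic Lagrange kernel below is in the same family as Pd's 4-point interpolation, though possibly not identical to it):

    import numpy as np

    def lagrange4(y0, y1, y2, y3, a):
        # 4-point, 3rd-order Lagrange interpolation at fraction a in [0,1)
        # between y1 and y2
        return (-a * (a - 1) * (a - 2) / 6 * y0
                + (a + 1) * (a - 1) * (a - 2) / 2 * y1
                - (a + 1) * a * (a - 2) / 2 * y2
                + (a + 1) * a * (a - 1) / 6 * y3)

    freq = 0.123  # test tone, in cycles per sample (arbitrary)
    n = np.arange(1000)
    x = np.sin(2 * np.pi * freq * n)

    a = 0.37      # an arbitrary fractional offset
    idx = np.arange(1, len(x) - 2)
    est = lagrange4(x[idx - 1], x[idx], x[idx + 1], x[idx + 2], a)
    exact = np.sin(2 * np.pi * freq * (idx + a))
    snr_db = 10 * np.log10(np.sum(exact ** 2) / np.sum((exact - est) ** 2))
    print("SNR on this sinusoid:", snr_db, "dB")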
If you try to solve the problem for a whole "class" of functions of some interest, I'm afraid you'll find out it is much (much much much) more complicated than dealing with stochastic processes.
Note that I'm not saying that
f(x) = { sinc(x), -2 < x < 2;  0, elsewhere }
is a bad choice. I'm just arguing that your reasoning doesn't prove it is the best choice.
Indeed I think some software uses
f(x) = { sinc(x), -N < x < N;  0, elsewhere }
for some value of N. For infinite N, this would be the perfect interpolator, so obviously for large N it is good enough.
The problem is that for N as small as 2, the truncation has non-negligible effects on the stopband, so the question arises whether another kernel can reduce the effects of the truncation on the stopband, at the cost of some added ripple within the passband.
Now that I think about it, your truncated sinc should have a perfectly flat passband response and big stopband "ripples" (what do you call the stopband ripples? I cannot remember the word), so any improvement in the stopband will have to be traded off against some increased passband ripple.
So here's what the truncated sinc is best at: it's the one with the best (meaning flattest) passband response. Is this correct???
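That should be checkable numerically. A sketch (mine; the Hann-windowed sinc is just one arbitrary alternative for contrast, and the band edges are arbitrary too):

    import numpy as np

    oversample = 64
    t = np.arange(-2, 2, 1.0 / oversample)
    truncated = np.sinc(t)
    windowed = np.sinc(t) * 0.5 * (1 + np.cos(np.pi * t / 2))  # Hann window on [-2,2]

    nfft = 1 << 16
    f = np.fft.rfftfreq(nfft) * oversample  # cycles per sample period; Nyquist = 0.5
    for name, h in (("truncated sinc", truncated), ("windowed sinc ", windowed)):
        H = np.abs(np.fft.rfft(h, nfft)) / oversample
        passband, stopband = f < 0.4, f > 0.6
        print(name,
              "| passband ripple:", H[passband].max() - H[passband].min(),
              "| worst stopband level:", H[stopband].max())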
Note however that the highest part of the high-frequency noise (due to the non-zero stopband part of the interpolator) will cause aliasing when resampling. The analysis of passband ripple and stopband "ripple", and whatever measure of their trade-off, doesn't take this into account: it just considers the passband ripple as passband distortion and the stopband "ripple" as high-frequency noise. But after resampling, part of that high-frequency noise will be brought back to low frequency in the form of aliasing.

Note that the new Nyquist frequency depends on the resampling frequency and is not the same as the original Nyquist frequency, so how relevant aliasing is depends on the resampling rate. If you resample at 1:1 then _all_ high-frequency noise will become aliasing. Aliasing is often considered somewhat worse than anything else (correct me if I am wrong), I guess because it is especially audible, being perceived as something completely unconnected to the original signal.
This last digression is nothing specific to the truncated sinc; it is just to say that while trading off between passband distortion and stopband noise, we have to consider that stopband noise may alias back to low frequency, and hence it is especially important to avoid it. Hence an interpolator with strong stopband ripple is likely to need to be used in conjunction with oversampling and filtering before resampling. By the way, when we resample at an unpredictable and varying rate (such as using a tabreadWhatever~ with an input signal that is not a ramp), oversampling and filtering is not feasible (or is it???)
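To see the folding concretely, one last sketch (mine; the tone, the modulation, and the linear-interpolation kernel are all arbitrary choices): read a sampled tone back at a wandering, non-ramp position, roughly what a table-reading object driven by a non-ramp signal does, and look at how far below the tone the spurious components sit.

    import numpy as np

    N = 1 << 14
    x = np.sin(2 * np.pi * 0.3 * np.arange(N + 2))  # test tone

    # a slowly wandering, non-ramp read position (arbitrary modulation)
    pos = np.arange(N) + 0.5 + 0.3 * np.sin(2 * np.pi * np.arange(N) / 2048)
    i = pos.astype(int)
    frac = pos - i
    y = (1 - frac) * x[i] + frac * x[i + 1]         # linear interpolation

    mag = np.abs(np.fft.rfft(y * np.hanning(N)))
    k0 = mag.argmax()
    others = np.ones(len(mag), bool)
    others[max(0, k0 - 50):k0 + 50] = False         # mask out the tone itself
    print("worst spurious component:",
          20 * np.log10(mag[others].max() / mag[k0]), "dB relative to the tone")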