Ha, finally a detailed discussion on this topic, I like it. My replies are inlined.
On Wed, Jun 13, 2012 at 10:27 PM, Matt Barber brbrofsvl@gmail.com wrote:
Hi, I've been going through the vdelayxw code myself. See comments:
On Wed, Jun 13, 2012 at 12:30 PM, katja katjavetter@gmail.com wrote:
On Sat, Jun 9, 2012 at 5:18 PM, Matt Barber brbrofsvl@gmail.com wrote:
Csound has a variable write delay opcode that would be worth looking at - the csound website has just been flagged by google for having malicious content so I can't link to the manual page, but the opcode is called "vdelayxw."
Unfortunately I cannot understand the C code of vdelayxw. There are comments for the obvious things but not for the magic numbers and other tricks. But it may be a method for sinc-interpolated resampling.
It almost certainly is some kind of windowed sinc, and you're right about the magic numbers. I don't think you need to know for sure what the exact interpolation scheme is to make sense of it, though; my understanding of it is as follows:
For both the variable read and variable write delay opcodes in csound, one chooses an interpolation window size - say 32 samples.
Now, let's say we're trying to READ from the delay line at sample index 116.33. So we need to interpolate between sample 116 and 117. Given our 32-point interpolation window, the earliest sample that will have an effect on the interpolation is sample 101, and the last one is sample 132, so to find the correct interpolation we need to sum together all the scaled windowed sincs (or whatever convolution kernel is in the interpolation window) for each of those 32 samples, at index 116.33, which gives us our read value.
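In rough C, the read side could look something like this (just a sketch of the scheme described above, not the actual vdelayxw code; sinc_win(), winsize and the unbounded delayline[] indexing are simplifications, and a Hann window is assumed):

#include <math.h>

/* windowed sinc, zero outside +/- halfwidth samples; Hann window assumed here,
   the actual opcode may use something else */
static double sinc_win(double x, int halfwidth)
{
    if (x == 0.0) return 1.0;
    if (fabs(x) >= halfwidth) return 0.0;
    double s = sin(M_PI * x) / (M_PI * x);
    double w = 0.5 + 0.5 * cos(M_PI * x / halfwidth);
    return s * w;
}

/* read the delay line at a fractional index, e.g. 116.33 */
double delay_read(const double *delayline, double index, int winsize)
{
    int center = (int)floor(index);        /* 116 */
    int half = winsize / 2;                /* 16 for a 32-point window */
    double sum = 0.0;
    /* samples 101..132 in the example above */
    for (int k = center - half + 1; k <= center + half; k++)
        sum += delayline[k] * sinc_win(index - k, half);
    return sum;
}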
The write works rather in reverse: if we want to write a sample at index 116.33, then we calculate the windowed sinc (or whatever) for the input sample centered on 116.33, and MIX (not overwrite) the resulting values into samples 101-132. What emerges is the cumulative effect of having interpolated: imagine the next sample written is at index 118.54 - you're going to mix its function into samples 103-134, and the overlap with the previous write is what makes the interpolation "work" once those samples reach the read head.
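And the write side, reusing sinc_win() from the sketch above (again just an illustration; boundary wrapping of the delay line is left out):

void delay_write_mix(double *delayline, double index, double input, int winsize)
{
    int center = (int)floor(index);
    int half = winsize / 2;
    /* samples 101..132 for index 116.33 */
    for (int k = center - half + 1; k <= center + half; k++)
        delayline[k] += input * sinc_win(index - k, half);   /* mix, don't overwrite */
}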
In that way, a variable write into a delay line is somewhat easier conceptually -- if it's done this way -- than a [tabwrite4~] would be, because the way the table is read is predetermined. Nothing is ever read until all the relevant input samples have had a chance to affect the output in the appropriate way.
On the other hand, think of [tabread4~]: its interpolation scheme is fixed, no matter what the resampling factor is. With extreme resampling, aliases may be noticeable. But what the hell, it doesn't sound like the original music anyway when sped up or down to extremes. That is the difference from an offline resampling job, where the original sound must be preserved insofar as the new frequency range allows. In that sense, an interpolation scheme like the one in [tabread4~] could be used for realtime variable-speed writing, leaving the consequences to the user. For example, if you make large jumps through the table, many old samples would simply not be rewritten.
But even with interpolation quality requirements so relaxed, it is not in itself clear how the samples should be written. Using sinc interpolation, each input sample could be written as many samples of a (possibly phase-shifted) sinc function, with amplitude compensation for the overlap. The interpolation scheme of [tabread4~], however, cannot calculate four output samples based on one input sample; it can only calculate one output sample based on four input samples.
Two points here. The last thing you said is not actually true -- each interpolation scheme has an associated convolution function, which can be calculated by imagining what the interpolation would look like for a single sample whose value was 1.0 surrounded by zeroes everywhere else. This 4-point piecewise function can be used to write four samples in its immediate vicinity the same way that the sinc does in the csound example.
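To make that concrete, here is a sketch using 4-point Lagrange interpolation as a stand-in for whatever cubic [tabread4~] actually uses: feeding a unit impulse through the interpolator at fraction f gives exactly the four kernel weights, and the same weights can then be used for mix-writing:

/* four kernel weights for offsets -1, 0, +1, +2 around the integer index,
   obtained by evaluating 4-point Lagrange interpolation of a unit impulse
   at fraction f (0 <= f < 1) */
static void kernel4(double f, double w[4])
{
    w[0] = -f * (f - 1.0) * (f - 2.0) / 6.0;
    w[1] =  (f + 1.0) * (f - 1.0) * (f - 2.0) / 2.0;
    w[2] = -(f + 1.0) * f * (f - 2.0) / 2.0;
    w[3] =  (f + 1.0) * f * (f - 1.0) / 6.0;
}

/* reading: one output sample from four table samples */
double table_read4(const double *tab, double index)
{
    int i = (int)floor(index);
    double w[4];
    kernel4(index - i, w);
    return w[0] * tab[i - 1] + w[1] * tab[i] + w[2] * tab[i + 1] + w[3] * tab[i + 2];
}

/* writing: one input sample mixed into four table samples with the same kernel */
void table_write4(double *tab, double index, double input)
{
    int i = (int)floor(index);
    double w[4];
    kernel4(index - i, w);
    tab[i - 1] += w[0] * input;
    tab[i]     += w[1] * input;
    tab[i + 1] += w[2] * input;
    tab[i + 2] += w[3] * input;
}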
Meaning, there is also a convolution kernel for linear interpolation? What would it look like? Ah, it would be a simple triangle (hat) function, but the point is, the kernel can be applied time-shifted with a fractional delay, matching the fraction in the index. By the way, this also holds for sinc-interpolated resampling as described by Julius O. Smith: a linear interpolation in the sinc table makes the result more precise. Interpolating the interpolation kernel...
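In rough C, that lookup with the extra linear interpolation could look like this (the resolution and the table layout here are assumptions on my part, not J.O.S.'s actual code):

#define SINC_RES 512    /* assumed: table entries per sample of kernel width */

/* look up a windowed sinc stored in sinctab[] at continuous position x,
   with linear interpolation between neighbouring table entries */
double sinc_table_lookup(const double *sinctab, int tablen, double x)
{
    double pos = fabs(x) * SINC_RES;      /* kernel assumed symmetric */
    int i = (int)pos;
    double f = pos - i;
    if (i + 1 >= tablen)
        return 0.0;                       /* outside the windowed kernel */
    return sinctab[i] + f * (sinctab[i + 1] - sinctab[i]);
}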
It seems the bigger question to me is: if you skip somewhere far in the table, you're going to write four samples, and then another four samples somewhere else. Maybe this is OK, but another way to think of what to do would be to imagine the incoming signal as something you're interpolating over, the way you would when reading from a table, in which case a very large index increment when writing could be just like a bunch of very small index increments when reading. So say you jump ahead 48 samples - one way to do it would be to write ALL 48 samples as an interpolation over the two input samples.
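A sketch of that idea, with linear interpolation over the two input samples just to keep it short (names are made up; jumping backwards is left unresolved, as discussed below):

/* fill every integer table index between the previous write position and the
   new one by interpolating over the two input samples, the way a reader with
   a small increment would */
void write_span(double *tab, double old_index, double new_index,
                double old_input, double new_input)
{
    double span = new_index - old_index;          /* e.g. 48 samples */
    if (span <= 0.0) return;                      /* jumping back: not handled */
    int first = (int)ceil(old_index);
    int last = (int)floor(new_index);
    for (int i = first; i <= last; i++)
    {
        double t = (i - old_index) / span;        /* 0..1 between the two inputs */
        tab[i] = old_input + t * (new_input - old_input);
    }
}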
That would open up some other problems, like how to interpret the difference between jumping back in a table vs "wrapping back around." Not sure how to deal with that at all (this problem doesn't arise in the delay line version of a variable write because what is represented is always a chunk of time rather than an abstract table of numbers to be used for whatever, so there's no real concept of "wraparound" in the delay-line version).
It would also lead to there not being a good way to "keep writing into index 1.5 of the table" -- the incoming input samples would be interpolated over zero samples of the table, and so nothing would get written.
Imagine how one would do this with a fixed resampling factor. For example with resampling factor 0.75 (downsampling) you would write 64 * 0.75 = 48 samples into the array for every block of 64 input samples, while incrementing the read index by 1 / 0.75 = 1.3333333. Another example: with resampling factor 1.5 (upsampling) you would write 64 * 1.5 = 96 samples into the array for each block of 64 input samples, while incrementing the read index by 1 / 1.5 = 0.6666666. The perform loop would not iterate over an integer n (= blocksize), but would just break when the float read index exceeds n. To accommodate interpolation, and index increments larger than one, a few samples of fixed-delay 'headroom' must be introduced.
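As a sketch (names and structure are assumptions, not Pd code), such a perform routine might be organized like this, using table_read4() from the earlier sketch and assuming the input buffer has a few guard samples before index 0 and after index n-1 for the 'headroom':

void resample_block(const double *in, int n, double factor,
                    double *out, int *nout, double *readindex)
{
    double idx = *readindex;          /* fractional remainder from the last block */
    int count = 0;
    while (idx < n)                   /* break on the float read index, not a fixed count */
    {
        out[count++] = table_read4(in, idx);
        idx += 1.0 / factor;          /* 1.3333333 for factor 0.75, 0.6666666 for 1.5 */
    }
    *readindex = idx - n;             /* carry the remainder to the next block */
    *nout = count;                    /* about 48 samples for factor 0.75, 96 for 1.5 */
}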
This is a good point -- but the problem wouldn't exist if you were writing four samples in the table for every incoming sample.
An interpolation kernel like the sinc function is zero-phase apart from the fractional time shift, so there is always an amount of delay implied, depending on kernel length. Would it be possible to create a minimum-phase kernel? Theoretically, yes.
I'm just not sure in that case if a 4-point cubic interpolation is nearly enough for the kind of upsampling that might need to occur.
In the case of J.O.S.'s sinc table method, the kernel length could be varied continuously, according to the instantaneous resampling factor. The window must be calculated separately.
In a [tabwrite4~], resampling factor would follow from index increments calculated from float index values received at the inlet. But what to do with large increments, exceeding the delay 'headroom' at the end of the input buffer? And another question: what to do with very small increments, leading to massive amounts of written samples and possibly to cpu overload?
I'm not sure I understand this - I assume you mean "very small increments in the written table." So let's say you're going to try to write a whole 64-sample input block to between indices 10 and 11 of the table. If you're writing 4 samples each time, what you end up with is not cpu overload, but just four samples with possibly a very high amplitude, depending upon the nature of the signal. And actually, if you think about this with regard to the delay line, this would be what would happen if the sound source were moving toward a microphone at or near the speed of sound, so the "very high amplitude" would in effect be a digital "sonic boom."
Matt
There should be an (optional) amplitude compensation for up- and downsampling, as an amplitude effect would be inconvenient in the case of a variable-speed sound-on-sound looper.
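One way that compensation might work (an assumption on my part, not an established scheme): when a kernel is mix-written per input sample, overlapping writes build up roughly in proportion to 1/increment, so scaling each contribution by the current index increment keeps a constant input at a constant level in the table:

/* scale each mixed-in kernel by the local index increment, so that densely
   overlapping writes (small increments) don't build up amplitude */
void table_write4_compensated(double *tab, double index, double increment,
                              double input)
{
    table_write4(tab, index, input * increment);
}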
Katja