Roman Haefeli wrote:
hi
i had a discussion with a teacher today. the topic was the smallest possible delay time. in his opinion one sample is the atom of a signal and cannot be divided any further. in my opinion it should be possible to get delays shorter than 1 sample with interpolation. my argument was: it should be possible to set the values of each sample so that the resulting signal is the same as a digitized analogue signal delayed by less than 1 sample.
Does anybody know a good explanation for this problem?
1. the granularity of a digital delay is *always* 1 sample.
2. the length of 1 sample depends on the sample-rate
3. there are techniques that allow changing the sample-rate in digital domain
4. to get a higher resolution you can apply such a technique: upsampling. this can be done with interpolation (which of course should not be linear interpolation but a convolution with a sinc())
5. the delay-resolution will still be 1 sample
6. while the delay-resolution in your original digital signal would be 1/44100 seconds (at 44.1kHz), in your upsampled signal you might get a resolution of 1/88200 seconds (upsampling by a factor of 2)
7. when downsampling the signal again to the original fs, you will keep the fractional delay (assuming that your sample-resolution is not limited (for instance to 16 bit))
8. conclusion: both of you are right! in the digital domain you can only delay by 1 sample; the trick is that you can change the sample-rate (!) without any data loss (assuming proper band-limited upsampling) and thus produce delays that are not integer multiples of the original sample-length; but the delay is still bound to 1 sample (of another fs, though). a small numeric sketch of this follows below.
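a minimal numeric sketch of steps 4-8 (plain python/numpy rather than a pd patch; scipy's resample_poly is used here as a stand-in for the band-limited sinc interpolation, and the test tone and lengths are arbitrary):

    import numpy as np
    from scipy.signal import resample_poly

    fs = 44100
    n = np.arange(1024)
    x = np.sin(2 * np.pi * 1000 * n / fs)      # a 1 kHz test tone at 44.1 kHz

    up = resample_poly(x, 2, 1)                # upsample to 88.2 kHz (band-limited)
    up = np.concatenate(([0.0], up[:-1]))      # delay by exactly 1 sample at 88.2 kHz
    y = resample_poly(up, 1, 2)                # back down to 44.1 kHz

    # y is now (up to the edge effects of the finite resampling filter)
    # the original tone delayed by half a sample of the original rate.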
i know, this hasn't got anything to do with pd, but i think you are the right people to ask. by the way: if it is possible, what would a realization in pd look like?
you can do upsampling in pd with the [block~] object (see doc/3.audio-examples/J08.up.downsampling.pd); however, the up-/downsampling algorithms are very bad: the best you get is linear interpolation, so you might want to do some filtering before and after the re-sampling (see the sketch below for what linear interpolation amounts to).
and you can do sample-wise delay with [z~] (as others have mentioned)
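for illustration, here is what a linear-interpolation fractional delay amounts to (a minimal sketch in plain numpy, not pd; the name frac_delay_linear is made up for this example): each output sample is a weighted mix of the two neighbouring input samples, which is also why it dulls high frequencies and why some filtering around the re-sampling helps.

    import numpy as np

    def frac_delay_linear(x, d):
        """delay x by d samples (d >= 0, possibly fractional) via linear interpolation."""
        n = int(np.floor(d))        # integer part of the delay
        frac = d - n                # fractional part, 0 <= frac < 1
        xp = np.concatenate((np.zeros(n + 1), x))
        # y[k] = (1 - frac) * x[k - n] + frac * x[k - n - 1]
        return (1.0 - frac) * xp[1:len(x) + 1] + frac * xp[:len(x)]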
mfg.asd.r IOhannes
hi,
IOhannes m zmoelnig wrote: ...
in the digital domain you can only delay by 1 sample; the trick is that you can change the sample-rate (!) without any data loss
because the information is lost already... btw. if the signal comes from stored tables, there is no need for upsampling -- a delay between two phasors may be as small as the resolution of float numbers permits
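a minimal sketch of that idea (plain numpy, not pd; read_interp and the table contents are made up for illustration, and linear interpolation stands in for the 4-point interpolation you would get from [tabread4~]): two read positions into the same stored table, offset by an arbitrary sub-sample amount.

    import numpy as np

    table = np.sin(2 * np.pi * np.arange(1024) / 1024)   # stored waveform

    def read_interp(tab, pos):
        """read the table at a fractional position (linear interpolation, wrapping)."""
        i = int(np.floor(pos)) % len(tab)
        frac = pos - np.floor(pos)
        return (1.0 - frac) * tab[i] + frac * tab[(i + 1) % len(tab)]

    phase = 123.0
    offset = 0.001                    # the second reader lags by 1/1000 of a sample
    a = read_interp(table, phase)
    b = read_interp(table, phase - offset)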
k