On Sun, Oct 23, 2011 at 04:04:58PM -0400, Mathieu Bouchard wrote:
> I understand all of that already, but my impression is that it's more like making a 24-bit gradient use dithering so that it looks more like a 48-bit gradient. Would it make a perceptual improvement if you did so?
No, of course not -- such a difference, though measurable, would fall below a human's perceptual threshold. But truncate over and over again, and the error accumulates until eventually it rises above that threshold.
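To make the accumulation concrete, here's a minimal C sketch (the function names, gain value, and pass count are all my own invention, not anything from a real workflow): each pass applies a small gain change, as a mixing stage might, then truncates back to the 16-bit grid. Any single pass's error is tiny, but the printed RMS error grows with the pass count.

    /* Sketch: error accumulation from repeated truncation.
     * Compile with: cc -O2 trunc.c -lm */
    #include <math.h>
    #include <stdio.h>

    #define N      48000
    #define PASSES 100
    #define PI     3.14159265358979323846

    /* Quantize a [-1,1) sample to the 16-bit grid by truncation (no dither). */
    static double truncate16(double x)
    {
        return trunc(x * 32768.0) / 32768.0;
    }

    int main(void)
    {
        static double truncated[N], reference[N];
        const double gain = 0.98;   /* small per-pass gain change */

        for (int i = 0; i < N; i++)
            truncated[i] = reference[i] =
                0.5 * sin(2.0 * PI * 440.0 * i / 48000.0);

        for (int pass = 1; pass <= PASSES; pass++) {
            double err = 0.0;
            for (int i = 0; i < N; i++) {
                reference[i] *= gain;               /* stays at full precision */
                truncated[i] = truncate16(truncated[i] * gain);
                double d = truncated[i] - reference[i];
                err += d * d;
            }
            if (pass % 20 == 0)
                printf("pass %3d: rms error %.3g\n", pass, sqrt(err / N));
        }
        return 0;
    }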
It's hard to hear the first pass of a perceptual codec. But run audio through a codec multiple times, and you get a "cliff edge" effect: nothing... nothing... nothing... oh wow now I hear it.
Truncation distortion, being inharmonic, is pretty nasty. It's not like analog tape overload. A little truncation distortion goes a long way, and unless you are going for glitch, best practice is to keep it at bay by managing gain structure wisely and dithering when appropriate.
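For "dithering when appropriate", the standard tool is TPDF dither: add triangular noise of +/- 1 LSB peak before rounding, which decorrelates the quantization error from the signal and leaves a benign, constant noise floor instead of correlated distortion. A minimal sketch in C (rand() is used only for brevity here; a real implementation would want a better PRNG):

    /* Sketch: reduce a [-1,1) sample to 16 bits with TPDF dither. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <math.h>

    static int16_t dither_to_16(double x)
    {
        double lsb  = 1.0 / 32768.0;
        /* Sum of two uniforms in [-lsb/2, lsb/2) gives a triangular
         * (TPDF) distribution spanning +/- 1 LSB. */
        double tpdf = ((double)rand() / RAND_MAX - 0.5) * lsb
                    + ((double)rand() / RAND_MAX - 0.5) * lsb;
        double y    = (x + tpdf) * 32768.0;
        if (y >  32767.0) y =  32767.0;      /* clamp to int16 range */
        if (y < -32768.0) y = -32768.0;
        return (int16_t)floor(y + 0.5);      /* round to nearest step */
    }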
E.g. if you have a fully digital 16-bit volume control on an amp, and the amp has a big volume range and you only use the quiet range, the effective number of bits can go down a lot.
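The rule of thumb: since 20*log10(2) is about 6.02 dB per bit, every ~6 dB of digital attenuation in a fixed 16-bit domain throws away roughly one bit of resolution. A toy calculation:

    /* Sketch: effective bits remaining after digital attenuation
     * in a fixed 16-bit domain. */
    #include <stdio.h>

    int main(void)
    {
        for (int att_db = 0; att_db <= 48; att_db += 12)
            printf("%2d dB attenuation: ~%.1f effective bits\n",
                   att_db, 16.0 - att_db / 6.02);
        return 0;
    }

So listening 48 dB down through such a control leaves only about 8 effective bits.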
It's also not uncommon to capture a killer take under less-than-ideal recording conditions -- including input gain structure.
It's worthwhile for developers of audio software to think about such things, so that downstream users benefit from the additional headroom.
Marvin Humphrey