How is more than 16 bits better for CD?


A response to Roadcykler's question made me wonder about a related topic. If the data on standard CDs is encoded as 16-bit, how can an 18-, 20-, or 24-bit DAC improve things? That is, if the waveform of 16-bit audio is made up of 65,536 levels, where do these extra bits come in? Does the DAC 'guess' the extra bits?
carl109
But to answer the original question: if it's just a 24-bit DAC and it does not upsample the data, then the last 8 bits will simply be filled in with zeroes, as Aball mentioned in the first reply.
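
A rough sketch of that zero-fill, with a made-up sample value just to show the bit layout:

```python
# Sketch: a 16-bit CD sample placed in a 24-bit DAC word by
# zero-filling the 8 least significant bits (no upsampling, no dither).
sample_16 = 0x7A3F            # hypothetical 16-bit sample value
sample_24 = sample_16 << 8    # shift left 8 bits; low byte is all zeroes

print(f"16-bit: {sample_16:016b}")
print(f"24-bit: {sample_24:024b}")   # same waveform, extra bits unused
```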

Exactly. Thanks to Jason for clarifying completely.

Dither is used when decreasing bit depth, so it isn't directly related to the original question (though it is important when making a 16-bit CD master from higher-bit studio recordings using Sony SBM or other dither techniques).
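
For anyone curious, here is a minimal sketch of plain TPDF dither during a 24-to-16-bit reduction; Sony SBM additionally noise-shapes the error, which is not modeled here:

```python
import random

def dither_to_16_bit(sample_24: int) -> int:
    """Reduce a 24-bit sample to 16 bits with plain TPDF dither."""
    # One 16-bit step equals 256 counts at 24-bit resolution. TPDF
    # noise spanning +/-1 of those steps is the sum of two uniforms.
    tpdf = (random.random() - random.random()) * 256
    # Add the noise, then round to the nearest 16-bit step.
    return round((sample_24 + tpdf) / 256)

print(dither_to_16_bit(0x123456))   # hypothetical 24-bit input
```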

So the extra bits of a 24-bit DAC playing 16-bit CD data only help improve performance when PROCESSING the signal, such as upsampling and EQ adjustments. They do not improve the dynamic range of the original 16-bit CD data. The higher precision (24-bit DAC versus 16-bit DAC) reduces rounding/truncation artifacts introduced by the processing (upsampling 44.1 kHz data being one such processing step).
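
A small numeric sketch of that rounding point, using a hypothetical -10 dB volume cut (gain of roughly 0.316):

```python
# Processing makes fractional values that must be re-quantized
# somewhere; extra output bits let the DAC keep more of them.
gain = 0.316                  # hypothetical ~-10 dB digital volume cut
sample_16 = 12345             # hypothetical 16-bit sample

exact = sample_16 * gain      # 3901.02 (ideal result)
kept_16 = round(exact)        # 3901: error folded back into 16 bits
kept_24 = round(exact * 256)  # 998661: error 256x smaller, carried
                              # in the 8 extra bits of a 24-bit word
print(exact, kept_16, kept_24 / 256)
```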

This is not to be confused with the inherent benefit of upsampling itself, which allows the out-of-band noise to be filtered with fewer artifacts on the in-band signal; that filtering is another processing step, often done both digitally and with an analog filter.
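
A sketch of that idea, assuming 4x oversampling done the textbook way (zero-stuffing followed by a digital low-pass filter); the exact filters in a real DAC differ:

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 44100                               # CD sample rate
t = np.arange(256) / fs
x = np.sin(2 * np.pi * 1000 * t)         # hypothetical 1 kHz test tone

# 4x oversampling: insert zeroes between samples, then low-pass
# filter out the spectral images. The remaining images sit far above
# the audio band, so the analog output filter can be much gentler.
up = 4
stuffed = np.zeros(len(x) * up)
stuffed[::up] = x
taps = firwin(numtaps=127, cutoff=fs / 2, fs=fs * up)   # 22.05 kHz cutoff
y = up * lfilter(taps, 1.0, stuffed)     # gain of `up` restores level

print(len(x), "samples at 44.1 kHz ->", len(y), "samples at 176.4 kHz")
```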

See Nika Aldrich, "Digital Audio Explained: For the Audio Engineer," for more details.
Shadorne wrote:

" So the extra bits of a 24 bit DAC playing 16 bit CD data only help improve performance when PROCESSING the signal, such as upsampling and EQ adjustments. It does not improve the dynamic range of the original 16 bit CD data. A higher accuracy (24 bit DAC versus 16 bit DAC) will reduce artifacts from "rounding/truncation" in the processing. (Upsampling 44.1 Khz data being a processing step)."

I never thought about the truncation that must occur when an original 24-bit/96 kHz recording is written down to 16-bit/44.1 kHz for CD. It seems the increased word length and upsampling are trying to 'simulate' the original signal that was truncated. Very interesting!

Am I interpreting correctly?
Do any of the chip manufacturers even currently make 16-bit chips for audio use?
My understanding is that the extra bits are to increase the accuracy of the most significant bit since a 16-bit converter cannot, in practice, do a 100% linear conversion.
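
A quick back-of-the-envelope sketch of that linearity point, with a hypothetical 2 V full-scale output:

```python
# In a ladder-style DAC the MSB contributes half of full scale, so a
# tiny relative error in its weight can exceed an entire 16-bit step.
full_scale = 2.0                        # hypothetical 2 V output range
lsb = full_scale / 2**16                # one 16-bit step: ~30.5 uV
msb_error = (full_scale / 2) * 0.0001   # a 0.01% mismatch in the MSB

print(f"1 LSB     = {lsb * 1e6:.1f} uV")
print(f"MSB error = {msb_error * 1e6:.1f} uV")   # ~100 uV, over 3 LSBs
```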