24 bit/44.1?


The digital out on my B&K Phono 10 is supposedly (according to the manual) a 24-bit/44.1 signal. Does this make sense? BTW, whatever it is, it sounds great through my Benchmark DAC-1.
stonedeaf
I personally prefer 20 bits maximum; to my ears 24 bits is not as musical as 20 bits. 24-bit is more detailed and slightly bassy but lacks midrange. My 20-bit Monarchy DAC has outperformed many 24/192 DACs musically; 20-bit is very musical.
Interesting that you mentioned the Monarchy DAC. Here is a review from Lynn Olson that explains some of the benefits of using 20 bit versus 16 bit, at least in this implementation:

http://www.positive-feedback.com/Issue25/monarchy_m24.htm

Some interesting facts about bit resolution.
Dseanm - It has more to do with the type of converter than the number of bits. The DAC in your Monarchy (the PCM63, now discontinued) is a traditional DAC with a laser-trimmed resistor divider, while most 24/192 DACs (if not all) are Delta-Sigma. There is nothing wrong with either approach - just a different sound. Some people believe that Delta-Sigma is bad for audio, and you can even find a statement that Burr-Brown placed in the PCM63 datasheet saying Delta-Sigma converters are so noisy that they cannot even resolve the lowest three bits. A short time later the same company made the PCM1794, which uses a traditional DAC for the 6 highest bits and Delta-Sigma for the 18 lowest bits. It is funny that they don't use the words Delta-Sigma but "Advanced Segmented" instead. There is nothing wrong with Delta-Sigma - even SACD is the byproduct of a Delta-Sigma modulator before filtering, and the same goes for DSD recording. Like everything else it is subjective, and in your case you're simply not a fan of Delta-Sigma technology (or high oversampling, or digital filtering).
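
For anyone curious about the distinction, here is a rough Python sketch of the two ideas - a binary-weighted (ladder-style) conversion and a toy first-order Delta-Sigma modulator. It is only an illustration of the concepts, not how the PCM63 or PCM1794 actually work internally:

def ladder_dac(code, n_bits=20, vref=2.0):
    """Binary-weighted conversion: bit k (counting from the MSB) contributes vref / 2^(k+1)."""
    return sum(((code >> (n_bits - 1 - k)) & 1) * vref / 2 ** (k + 1)
               for k in range(n_bits))

def delta_sigma_1bit(samples):
    """Toy first-order Delta-Sigma modulator: the density of 1s in the output
    bitstream tracks the input level (real converters add higher-order loops,
    oversampling, and filtering)."""
    integrator, feedback, bits = 0.0, 0.0, []
    for x in samples:                    # x assumed in the range -1.0 .. +1.0
        integrator += x - feedback
        bit = 1 if integrator >= 0 else 0
        feedback = 1.0 if bit else -1.0
        bits.append(bit)
    return bits

print(ladder_dac(0b1100_0000_0000_0000_0000))       # 1.5 V: 20-bit code with the two MSBs set
print(sum(delta_sigma_1bit([0.5] * 1000)) / 1000)   # ~0.75 ones, i.e. 2*0.75 - 1 = the 0.5 input level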
Post removed 
Why would you think the 8 least significant bits would be zeros rather than the most significant?

Hi Bob,

The "least significant bit," as you may realize, corresponds to the smallest resolution increment, while the "most significant bit" corresponds to the largest.

For example, if the maximum possible value ("full scale") at the analog output is 2 volts, on the digital (SPDIF) output a logic "1" on the msb would indicate that the corresponding analog output is greater than 1 volt. The next most significant bit would have a weight of 1/2 that amount, so a 1 on the two most significant bits would indicate a value greater than 1.5 volts. Etc. The least significant bit in a 24-bit word would have a weight of 2 volts / 2^24 (two volts divided by 2 to the 24th power), which is 0.000000119 volts.
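
To make those numbers concrete, here is a quick Python check (assuming 2 volts full scale and simple unsigned weighting, as in the example above):

FULL_SCALE = 2.0   # volts
N_BITS = 24

# Bit k (counting from the MSB) carries a weight of FULL_SCALE / 2^(k+1)
weights = [FULL_SCALE / 2 ** (k + 1) for k in range(N_BITS)]

print(weights[0])               # 1.0        -> msb set means the output exceeds 1 volt
print(weights[0] + weights[1])  # 1.5        -> two msb's set means greater than 1.5 volts
print(weights[-1])              # ~1.19e-07  -> the lsb is worth about 0.000000119 volts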

So setting the 8 least significant bits to 0 would introduce only a minuscule inaccuracy, while setting the msb's to 0 simply would not work.
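
A small numeric illustration of that point (the sample value is arbitrary, and unsigned math is used to keep it simple):

FULL_SCALE = 2.0
to_volts = lambda code, bits: code / 2 ** bits * FULL_SCALE

sample16 = 0xA5F3                       # an arbitrary 16-bit sample
padded24 = sample16 << 8                # carried in a 24-bit word with the 8 lsb's zeroed

print(to_volts(sample16, 16))           # 1.2964...
print(to_volts(padded24, 24))           # 1.2964...  same level - the zeroed lsb's change nothing
print(to_volts(padded24 & 0xFFFF, 24))  # 0.0074...  zeroing the 8 msb's instead destroys the value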

For further confirmation of this, see the section in the middle of this page defining the time slots in the AES/EBU and SPDIF subframes:

http://en.wikipedia.org/wiki/AES/EBU
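
Based on the subframe layout described on that page (worth double-checking the details there), here is a toy Python sketch of how one subframe's data slots get filled. It shows that a 16-bit source padded to 24 bits simply leaves zeros in the slots carrying the least significant bits:

def pack_subframe_data(sample24, validity=0, user=0, status=0):
    """Slots 4-31 of one AES/EBU-SPDIF subframe: 24 audio bits sent lsb first,
    then validity, user data, channel status, and an even-parity bit.
    The 4-slot sync preamble (slots 0-3) is omitted here."""
    slots = [(sample24 >> k) & 1 for k in range(24)]   # lsb first
    slots += [validity, user, status]
    slots.append(sum(slots) & 1)                       # makes the count of ones even
    return slots

# A 16-bit sample shifted into the top of the 24-bit field: slots 4-11 carry zeros.
print(pack_subframe_data(0xA5F3 << 8)[:8])             # [0, 0, 0, 0, 0, 0, 0, 0]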

Regards,
-- Al