24 bit/44.1?


The digital out on my B&K Phono 10 is supposedly (according to the manual) a 24-bit/44.1 signal? Does this make sense? BTW, whatever it is, it sounds great through my Benchmark DAC-1.
That's interesting!

Yeah, 24-bit word length (bit depth) can be used with any sampling rate; it's just an uncommon/interesting combination. Those extra 8 bits mean a much higher dynamic-range capability and a much lower noise floor.
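
To put rough numbers on that, here's a quick back-of-the-envelope sketch (my own illustration, not anything from B&K's or Benchmark's documentation): each bit of word length is worth roughly 6 dB of theoretical dynamic range.

    import math

    def dynamic_range_db(bits):
        # 20 * log10(2**bits) is roughly bits * 6.02 dB
        return 20 * math.log10(2 ** bits)

    for bits in (16, 20, 24):
        print(f"{bits}-bit: ~{dynamic_range_db(bits):.1f} dB")
    # 16-bit: ~96.3 dB, 20-bit: ~120.4 dB, 24-bit: ~144.5 dB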

Even their manual says "for CD quality recording." If it really is 24 bit, it's even better than CD quality. Puzzling indeed.
It is different, but Honeywell computers used to use what was called a bit and a half to a byte. The bytes were 12 bits so that they could have a higher information transfer rate, if I remember correctly. But it's been a long time since I played with 1's and 0's.
The SPDIF protocol provides locations in each subframe for 24 bits. They might simply (and misleadingly) be basing their statement on that, and setting the 8 least significant bits to 0, or they might really be using some or all of those bits. There's no way to tell without more information than is provided in the manual (which I took a look at, btw).
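
In case it helps to picture it, here's a tiny hypothetical sketch of that scenario (my own illustration, nothing to do with B&K's actual firmware): a 16-bit sample dropped into the 24-bit audio field of a subframe with the 8 least significant bits left at zero.

    def pack_16_into_24(sample_16bit):
        # shift left 8 places so the 16 bits occupy the 16 most significant
        # of the 24 available slots; the bottom 8 slots stay at 0
        return (sample_16bit & 0xFFFF) << 8

    print(hex(pack_16_into_24(0x1234)))   # 0x123400 - the 8 lsb's are zero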

Regards,
-- Al
I personally prefer up to 20 bits maximum; 24 bits is not as musical as 20 bits. 24 bits is more detailed and slightly bassy but lacks midrange. My 20-bit Monarchy DAC outperformed many 24/192 DACs musically; 20 bit is very musical.
Interesting that you mentioned the Monarchy DAC. Here is a review from Lynn Olson that explains some of the benefits of using 20 bit versus 16 bit, at least in this implementation:

http://www.positive-feedback.com/Issue25/monarchy_m24.htm

Some interesting facts about bit resolution.
Dseanm - It has more to do with the type of converter than the number of bits. The DAC in your Monarchy (the PCM63, now discontinued) is a traditional DAC with a laser-trimmed resistor divider, while most 24/192 DACs (if not all) are Delta-Sigma. There is nothing wrong with either approach - just different sound. Some people believe that Delta-Sigma is bad for audio, and you can even find a statement that Burr-Brown placed in the PCM63 datasheet saying that Delta-Sigma converters are so noisy that they cannot even resolve the lowest three bits. The same company, a short time later, made the PCM1794, which has its 6 highest bits as a traditional DAC and the 18 lowest bits as Delta-Sigma. It is funny that they don't use the words Delta-Sigma, but "Advanced Segmented" instead. There is nothing wrong with Delta-Sigma, and even SACD is a byproduct of the Delta-Sigma modulator before filtering. Same for DSD recording. Like everything else it is subjective, and in your case you're not a fan of Delta-Sigma technology (or high oversampling, or digital filtering).
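For anyone curious how the Delta-Sigma idea works at its core, here is a deliberately oversimplified first-order 1-bit modulator sketch (my own illustration; real converters are higher order and far more sophisticated):

    def delta_sigma_1bit(samples):
        # first-order loop: integrate the difference between the input and the
        # fed-back 1-bit output, then quantize the integrator to a single bit
        integrator = 0.0
        feedback = 0.0
        bits = []
        for x in samples:              # x assumed to be in the range -1.0 .. +1.0
            integrator += x - feedback
            bit = 1 if integrator >= 0 else 0
            feedback = 1.0 if bit else -1.0
            bits.append(bit)
        return bits

    bits = delta_sigma_1bit([0.5] * 1000)
    print(sum(bits) / len(bits))       # ~0.75: the density of 1's tracks the input

With a constant input of 0.5, about 75% of the output bits come out as 1's, so the running average of the bitstream tracks the input - which is essentially what a DSD/SACD bitstream is before filtering.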
Post removed 
Why would you think the 8 least significant bits would be zeros rather than the most significant?

Hi Bob,

The "least significant bit," as you may realize, corresponds to the smallest resolution increment, while the "most significant bit" corresponds to the largest.

For example, if the maximum possible value ("full scale") at the analog output is 2 volts, on the digital (SPDIF) output a logic "1" on the msb would indicate that the corresponding analog output is greater than 1 volt. The next most significant bit would have a weight of 1/2 that amount, so a 1 on the two most significant bits would indicate a value of greater than 1.5 volts. Etc. The least significant bit in a 24 bit word would have a weight of 2 volts/2^24 (two volts divided by 2 to the 24th power), which is 0.000000119 volts.

So setting the 8 least significant bits to 0 would introduce only a minuscule inaccuracy, while setting the msb's to 0 simply would not work.
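
Here is that arithmetic as a quick sketch (assuming the 2-volt full scale above and simple unsigned weighting; real PCM audio is two's-complement, but the magnitudes work out the same):

    FULL_SCALE = 2.0   # volts, the assumed full-scale analog output

    # weight of each bit position (1 = msb, 24 = lsb)
    for position in (1, 2, 24):
        print(f"bit {position}: {FULL_SCALE / 2**position:.9f} V")
    # bit 1: 1.000000000 V, bit 2: 0.500000000 V, bit 24: 0.000000119 V

    # worst-case error introduced by forcing the 8 least significant bits to 0
    max_error = FULL_SCALE * (2**8 - 1) / 2**24
    print(f"max error: {max_error * 1e6:.1f} microvolts")   # about 30.4 uV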

For further confirmation of this, see the section in the middle of this page defining the time slots in the AES/EBU and SPDIF subframes:

http://en.wikipedia.org/wiki/AES/EBU

Regards,
-- Al
Post removed 
Yes, that looks right, Bob. Basically the 16 bit number is being left-shifted 8 places, which is equivalent to multiplying by 2^8 as you indicated.

The conversion from a 16-bit to a 24-bit representation is exact, but of course the additional 8 bits of resolution that a true 24-bit system would provide are lost.
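
In code form, Bob's point looks like this (just a sketch of the idea):

    sample16 = 12345                      # an arbitrary 16-bit sample value

    sample24 = sample16 << 8              # left-shift 8 places = multiply by 2^8 = 256
    assert sample24 == sample16 * 2**8
    assert (sample24 >> 8) == sample16    # shifting back recovers the original exactly
    print(sample16, "->", sample24)       # 12345 -> 3160320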

Anything less than infinite resolution in a digital system can be thought of as a small noise component being added to the signal, and in fact is referred to as quantization noise, which obviously is greater in the case of the left-shifted 16 bits than for a true 24-bit A/D.
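
As a quick numerical check of that last point (again my own sketch, nothing from the manual), you can quantize a test tone to 16 and 24 bits and compare the RMS quantization error; the left-shifted 16-bit signal carries the 16-bit error even though it rides in a 24-bit word.

    import math

    def rms_quantization_error(bits, n=100000):
        step = 2.0 / (2 ** bits)          # quantization step for a +/-1.0 full-scale signal
        err_sq = 0.0
        for i in range(n):
            x = math.sin(2 * math.pi * 997 * i / n)    # arbitrary test tone
            q = round(x / step) * step                  # quantize to 'bits' of resolution
            err_sq += (x - q) ** 2
        return math.sqrt(err_sq / n)

    print(rms_quantization_error(16))     # on the order of 1e-5
    print(rms_quantization_error(24))     # about 256 times smaller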

Regards,
-- Al