The problem with "just 1s and 0s" is that it simply doesn't hold up in practice. To repeat a story I have alluded to before: years ago a large Japanese CD pressing firm sent [I think] Hi-Fi News several pressings of the same CD, some made with the standard material and some with a mix of materials that would cost slightly more, and which the firm was trying, unsuccessfully, to get the record companies to adopt. There was such a huge difference in sound between them that the reviewers had to load them into a computer to check whether the data was the same. It was exactly the same. If digital is so foolproof, what made the difference? The laser system is a mechanical one and constitutes the transition from the physical disc to the digital signal; the pits are not ones and zeros but REPRESENT ones and zeros, in the same way the grooves in an LP represent sound waves. Many mechanical factors can interfere with the ability to correctly read the pits and translate them into a digital signal.
Does the quality of a digital signal matter?
I recently heard a demonstration where a CD player was played with and without being supported with three Nordost Sort Kones. The difference was audible to me, but did not blow me away.
I was discussing the Sort Kones with a friend of mine who is an electrical engineer and also a musical audio guy. It was his opinion that these items could certainly make an improvement to an analogue signal, but shouldn't do anything for a digital signal. He said that as long as the component receiving the digital signal can recognize a 1 or a 0, the signal is successful. It's a pass/fail situation and doesn't rely on levels of quality.
An example he gave me was that we think nothing of using a cheap CD-RW drive to duplicate a CD, with no worry about the quality being reduced. If the data can't be read in full, an error is reported, so we know that the entire signal has been transferred.
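The bit-perfect claim above is easy to check for yourself: rip both the original disc and the copy to files and compare checksums. A minimal sketch in Python (the file names are hypothetical, and this assumes both rips start at the same sample offset):

```python
import hashlib

def file_digest(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical paths: rips of the original disc and of the CD-RW copy.
# If the copy is bit-perfect, the two digests match exactly; a single
# flipped bit anywhere in the file produces a completely different digest.
# identical = file_digest("original_rip.wav") == file_digest("copy_rip.wav")
```

If the digests match, the copy carries exactly the same data as the original, which is the sense in which the digital stage is pass/fail.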
I believe he said that it's possible to show that a more expensive digital cable measures better than another, but the end product doesn't change.
There was a test of HDMI cables at different prices. The only difference in picture quality was noted when a cable was defective and there was an obvious problem on the display.
I realize that most here use analogue signals, but for those of us who use a receiver for our D/A conversion, does the CD player's quality matter? Any thoughts?
- 67 posts total
Stanwal, the problem that you are describing is properly ascribed to the DAC or the output stage, just as Shadorne said above (twice). Trust me, if the 0s and 1s were getting messed up, it would not be subtle, but horrible, like the sound of a skipping CD. Your computer would tell you that it can't read that CD, and so on. Horrible, catastrophic errors probably come from a bad reading of the 0s and 1s. Subtler, "audiophilic" errors probably come from an inferior DAC or output stage.
To the OP: take it from another EE; your friend is correct. Everything that matters to signal quality occurs before and after the digitization. You cannot do anything about what happens before the signal is digitized, so if you want to improve quality, concentrate on what happens afterwards. Moving the digital signal around is not a place to spend any effort.
It's not just 0s and 1s.....page 54
Error checking and correction in CD players is relatively loose, since the disc must be read in real time (dust, scratches). Programs like MAX for the Mac instead read a music CD as data, returning to the same sector multiple times until the checksum comes out right. My CD player plays, and iTunes rips, CDs that MAX refuses to read or takes an extremely long time to read. The digital output from a CD player is also jittery (jitter being noise in the time domain). Jitter creates sidebands at very low levels (on the order of -60 dB or below), but they are audible because they are not harmonically related to the root frequency. With music, which contains many frequencies, this amounts to noise. The noise is difficult to detect because it is present only when the signal is present, so it manifests itself as a lack of clarity. Jitter can be suppressed by asynchronous upsampling DACs (like the Benchmark DAC1) or by reclocking devices, and it depends on the quality of the CD player's transport and power supply. A typical CD player's digital transitions are on the order of 25 ns, making the signal susceptible to noise (slow crossing of the threshold). High-quality transports can transition many times faster, reducing noise coupling but creating problems with reflections at characteristic-impedance boundaries in the cable (and therefore requiring a better digital cable). Jitter in D/A playback can be suppressed, but jitter recorded during the A/D process stays forever; for some early A/D conversions the only option is to convert again, if the analog tapes still exist.
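The sideband mechanism described above can be sketched numerically: sample a pure tone at instants perturbed by a small sinusoidal timing error, and the spectrum grows sidebands at the tone frequency plus and minus the jitter frequency. A rough illustration in Python with NumPy (all values are chosen for clarity, not measured from any real transport):

```python
import numpy as np

fs = 48_000   # sample rate, Hz (1 Hz FFT bins with n = fs)
f0 = 10_000   # test tone, Hz
fj = 200      # jitter modulation frequency, Hz
tj = 1e-8     # peak timing error, 10 ns (illustrative)
n = 48_000    # one second of samples

t = np.arange(n) / fs
# Ideal sampling instants perturbed by sinusoidal jitter:
x = np.sin(2 * np.pi * f0 * (t + tj * np.sin(2 * np.pi * fj * t)))

# Windowed spectrum, normalized so the carrier peak sits at 0 dB.
spectrum = np.abs(np.fft.rfft(x * np.hanning(n)))
spectrum /= spectrum.max()
freqs = np.fft.rfftfreq(n, 1 / fs)

def level_db(f):
    """Spectrum level (dB relative to carrier) at the bin nearest f."""
    return 20 * np.log10(spectrum[np.argmin(np.abs(freqs - f))])

print(f"carrier  at {f0} Hz: {level_db(f0):6.1f} dB")
# Sideband at f0 + fj, not harmonically related to the tone;
# for these values it lands around -70 dB below the carrier.
print(f"sideband at {f0 + fj} Hz: {level_db(f0 + fj):6.1f} dB")
```

Note that the sideband level scales with both the timing error and the signal frequency (the phase deviation is 2*pi*f0*tj), which is why jitter is more damaging on high-frequency content; halving tj or f0 in the sketch drops the sideband by about 6 dB.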