Why do digital cables sound different?


I have been talking to a few e-mail buddies and have a question that isn't being satisfactorily answered thus far. So... I'm asking the experts on the forum to pitch in. This has probably been asked before, but I can't find any references for it. Can someone explain why one DIGITAL cable (coaxial, BNC, etc.) can sound different from another? There are similar claims for Toslink as well. In my mind, we're just trying to move bits from one place to another. Doesn't the digital stream get reconstituted and re-clocked on the receiving end anyway? Please enlighten me, and maybe send along some URLs for my edification. Thanks, Dan
danielho
Gmkowal, I don't believe that you understand the signal transfer in CD playback. Most other digital transactions people have discussed are NOT real-time; CD playback is. The protocol is one-way, at 44,100 samples per second. Signal is lost. Bits are dropped. In other real-time digital applications, say telephony, there is both a retry mechanism (if the protocol allows) and data recovery. The CD protocol has no such provisions. Remember, this is technology that is almost 20 years old; the improvement in technology from the CD standard to the present is greater than the improvement from Mr. Bell and Edison to the start of CD technology. I suggest that your claimed knowledge of physics is flawed by your lack of understanding of this signal transfer. Before you go blasting a lot of other people, you'd better know what you're talking about. As for some background, I've been working with digital audio on and off for over 30 years, and I have spent over 8 years designing commercial digital audio, video, and data transfer interfaces. So I'm not someone who has just read a few articles in StereoShill.
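To make the one-way vs. acknowledged distinction concrete, here's a toy Python sketch. It's purely illustrative: nothing here models the real electrical layer, and the error rate is a made-up number.

    import random

    SAMPLE_RATE = 44_100  # CD audio: 44,100 samples per second per channel

    def transmit(sample: int, error_rate: float) -> int:
        """Flip one random bit of a 16-bit sample with probability error_rate."""
        if random.random() < error_rate:
            sample ^= 1 << random.randrange(16)
        return sample

    # One-way real-time link (CD playback): whatever arrives is what the DAC gets.
    sent = [random.randrange(1 << 16) for _ in range(SAMPLE_RATE)]  # one second of audio
    received = [transmit(s, error_rate=1e-4) for s in sent]         # error rate is made up
    print(sum(s != r for s, r in zip(sent, received)), "samples reached the DAC corrupted")

    # Acknowledged transfer (file copy, networking): bad data is detected and re-sent,
    # which a one-way real-time stream has no opportunity to do.
    def transfer_with_retry(sample: int, error_rate: float) -> int:
        while True:
            rx = transmit(sample, error_rate)
            if rx == sample:   # stand-in for a checksum comparison at the receiver
                return rx      # only correct data is ever accepted

    assert all(transfer_with_retry(s, 1e-4) == s for s in sent)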
Gmkowal, just a follow-up to my previous response. If you don't believe me, prove it to yourself. Attach a scope to the output of your DAC. Play some test tones on your transport (use frequencies above 8 kHz; that's where the effect starts to be more noticeable). Get some different cables and measure the signals that come out of the DAC. I have measured some pretty significant differences. I've always believed that if you can measure the difference, it exists.
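If you want to go beyond eyeballing the scope traces, something like the following would quantify the comparison. A sketch only: the file names are hypothetical, and it assumes both captures were triggered identically so the traces are already time-aligned.

    import numpy as np

    # Hypothetical exports: two scope captures of the same 10 kHz test tone at the
    # DAC output, one per cable, saved as one voltage reading per line.
    a = np.loadtxt("dac_out_cable_A.txt")
    b = np.loadtxt("dac_out_cable_B.txt")

    n = min(len(a), len(b))          # compare only the overlapping portion
    residual = a[:n] - b[:n]

    print("RMS difference:  %.6f V" % np.sqrt(np.mean(residual ** 2)))
    print("Peak difference: %.6f V" % np.abs(residual).max())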
Blues_man - Given the inadequacies of the CD protocol and your experience designing digital audio data transfer interfaces, what benefits does a high-end digital interconnect actually bring over a basic, well-built digital interconnect? That was really the question at the beginning of this post, and I think we're all still curious. -Kirk
The problem is really in the transmitter and receiver. Even though there is a "standard", there are always differences in the implementation of the hardware. I started out believing that there was no difference between digital cables; that was before I was familiar with the CD standard, which isn't very robust. After a while I started measuring lots of cables and interfaces just out of curiosity.

It's not easy to determine which cable sounds best with a particular transport/DAC combo, so you have to rely on what other people have tried. First, definitely go with the AES/EBU interface over an RCA cable; it's not that much more. The one I liked best was the MIT reference. It's very expensive ($800), and I don't believe it's worth the money, but consider it if cost is no object. I'll be putting the one I tested up for sale here soon (at least half off). As I posted in another thread, the best thing is to get a transport/DAC with a custom interface; I got the Spectral because I thought it sounded best. If you have access to a scope, use the method I suggested above with a high-frequency tone. Whichever cable gives the most accurate signal is probably the best.
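One concrete example of an implementation difference: S/PDIF coax is nominally a 75-ohm system, and if a receiver's input impedance is off, part of each edge reflects back up the cable. The standard transmission-line formula makes this easy to estimate; the 60-ohm load below is a made-up number for illustration.

    # Reflection at a mismatched termination: gamma = (ZL - Z0) / (ZL + Z0)
    Z0 = 75.0   # nominal characteristic impedance of S/PDIF coax, ohms
    ZL = 60.0   # receiver input impedance, ohms (hypothetical value)

    gamma = (ZL - Z0) / (ZL + Z0)
    print("fraction of each edge reflected back: %.3f" % gamma)   # -0.111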
I understand that the transmission is real-time. I just do not believe a cable that is in good working order will cause a bit or two to be dropped. I agree with Blues_Man that the problem is probably in the transmitter or receiver if bits are being lost.

I ran a quick test on my 3 digital cables at work today using a logic analyzer (a scope is not the right tool for a real-time test) with a pattern generator and deep memory (16 meg). I simply output a 16-bit burst every 0.5 seconds; the rep rate within the burst was set to 10 kHz. I tied the pattern generator's clock to the logic analyzer's clock so that every time the pattern generator's clock went high I captured the data, until I filled up the memory and saved it. I tried this with alternating 0's and 1's, all 0's, all 1's, a pattern of 0110111011001111, and its complement. Once I had captured the data I saved it as ASCII files, and I wrote a small Visual Basic program to look for missing bits in the patterns. It found none.

I also fed a repeating pattern of 0's and 1's into the cables and terminated each cable with what I approximated was the impedance of my D/A. I looked at the waveforms with a scope for any droop, overshoot, and undershoot. The risetime of the pulses appeared to be close on all 3 cables, but I did notice some other differences. One cable in particular caused more overshoot than the rest, but when I varied the impedance used to terminate the cable I could minimize the overshoot (probably more capacitance in this cable causing an impedance mismatch).

I marked this cable and gave all 3 cables to another engineer, who has a separate DAC and transport, to take home and see if the cables sound any different from one another. I am sorry, but I did not hear very much of a difference between the cables to begin with, so I thought this would be a more subjective test. As for real time and losing bits, the logic analyzer does not lie. I will let you know what he thinks. I cannot think of a better way to test the real-time data transmission characteristics of a cable. I burned up today's lunch time trying this; tomorrow I think I will have something to eat :-) Thanks for the post.
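For anyone who wants to repeat the check without Visual Basic, here is roughly what that comparison program does, sketched in Python. The export format and file name are assumptions: one captured bit per line of the ASCII file.

    # Rough sketch of the missing-bit check described above. Assumes the logic
    # analyzer export is an ASCII file with one captured bit ('0' or '1') per line.
    PATTERN = "0110111011001111"   # one of the burst patterns from the test

    def check_capture(path):
        with open(path) as f:
            bits = "".join(line.strip() for line in f if line.strip() in ("0", "1"))
        errors = 0
        for i, bit in enumerate(bits):
            expected = PATTERN[i % len(PATTERN)]
            if bit != expected:
                errors += 1
        # Note: a single dropped bit shifts everything after it, so it shows up
        # as a long run of mismatches -- which makes drops very easy to spot.
        print("%d bits checked, %d mismatches" % (len(bits), errors))

    check_capture("capture_cable1.txt")   # hypothetical file name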