Why do digital cables sound different?


I have been talking to a few e-mail buddies and have a question that isn't being satisfactorily answered thus far. So...I'm asking the experts on the forum to pitch in. This has probably been asked before, but I can't find any references for it. Can someone explain why one DIGITAL cable (coaxial, BNC, etc.) can sound different from another? There are also similar claims for Toslink. In my mind, we're just trying to move bits from one place to another. Doesn't the digital stream get reconstituted and re-clocked on the receiving end anyway? Please enlighten me and maybe send along some URLs for my edification. Thanks, Dan
danielho
Oops, almost forgot: in your test, how did you verify that all samples sent were received and stored? If you sent 500,000 samples, did 500,000 samples get stored?
All I did was set up a burst of each pattern plus a signature pattern (to be used in the little VBasic program I wrote) and had the pattern generator resend the sequence over and over until I filled up the acquisition memory of the analyzer (it took some time, so I did some real work also). However many samples it took to fill up the acquisition memory is how many samples I got. I did not count how many patterns I captured, but it was a lot! I used a Tektronix TLA704 analyzer, so I stored the results on the analyzer's hard disk. Tek's analyzer also runs Windows, so I wrote the VBasic program right on the analyzer.

The program did a simple compare of the 5 bursts I sent (16 bits of all 1's, all 0's, etc.) until it hit the signature burst, then started the compare over again, repeating the process until the software could not find the signature burst. If I hit a situation where the data did not compare, I incremented a counter in my program. The counter never incremented, so there were no errors. To test my software I buggered up my captured data file a bit, and the program caught it. It was really an easy experiment, but I probably spent more time on it than I should have at work; I trust nobody will tell my boss. As a designer of these types of systems I am sure you have run similar tests. While this is not my forte (I do DSP design), it sounded like a valid test and you challenged me to the task.
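For what it's worth, the compare loop described above can be sketched in a few lines of Python (the original was VBasic running on the analyzer; the burst patterns beyond all-1's/all-0's and the signature word are made-up placeholders here):

```python
# Sketch of the burst-compare check: the capture should be the five
# test bursts plus a signature word, repeated until memory was full.
# Patterns other than 0xFFFF/0x0000 and the signature are assumptions.
BURSTS = [0xFFFF, 0x0000, 0xAAAA, 0x5555, 0xF0F0]
SIGNATURE = 0xDEAD  # hypothetical marker separating repeats

def count_errors(captured):
    """Count words that differ from the expected repeating sequence."""
    expected = BURSTS + [SIGNATURE]
    errors = 0
    for i, word in enumerate(captured):
        if word != expected[i % len(expected)]:
            errors += 1
    return errors

# A clean capture shows zero errors; a deliberately corrupted word
# (like the "buggered up" data file) is caught.
clean = (BURSTS + [SIGNATURE]) * 1000
assert count_errors(clean) == 0
bad = list(clean)
bad[7] ^= 0x0010  # flip one bit in one word
assert count_errors(bad) == 1
```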
One more thing. The rep rate of the bits in each burst was 10 kHz. OK, I should have set the rate to 40 kHz, but I still doubt I would have lost any data, as this is a pretty low frequency for these cables, or at least for 2 of the 3 cables.
Multiply that by 16 bits per word, Gmkowal. I guess for the fourth time: the issue is not dropping bits, but that timing errors (not bit errors) cause harmonic distortion at the output of the DAC chip. Your investigations should focus on this phenomenon, i.e., varying the jitter at the input (while leaving the bits the same) and measuring the change in harmonic distortion at the output.
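The effect being described can be mocked up numerically. The sketch below (Python/NumPy; the tone frequency and the 2 ns jitter amplitude are arbitrary illustrative choices, not measured values) samples a sine once with an ideal clock and once with signal-correlated clock jitter, then looks at the largest spectral component other than the fundamental. The bits (sample values) are computed the same way in both cases; only the sampling instants move.

```python
import numpy as np

fs = 48000.0
N = 4096
k = 171              # integer FFT bin -> coherent sampling, no window needed
f = k * fs / N       # test tone, roughly 2 kHz

n = np.arange(N)
t = n / fs

clean = np.sin(2 * np.pi * f * t)

# Signal-correlated jitter: sampling instants displaced by a few ns in
# step with the tone itself (a crude model; real jitter spectra vary).
jitter = 2e-9 * np.sin(2 * np.pi * f * t)
jittered = np.sin(2 * np.pi * f * (t + jitter))

def spur_level_db(x, fundamental_bin):
    """Largest spectral component, in dB, excluding DC and the fundamental."""
    spec = np.abs(np.fft.rfft(x)) / (N / 2)
    spec[fundamental_bin] = 0.0
    spec[0] = 0.0
    return 20 * np.log10(spec.max() + 1e-300)

print(spur_level_db(clean, k))     # numerical floor, hundreds of dB down
print(spur_level_db(jittered, k))  # a distortion spur rises out of the floor
```

With these numbers the jittered case shows a spur near the second harmonic around -98 dB, while the clean case sits at the FFT's numerical floor: same bits, different timing, measurable distortion.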
I'm still missing something, but I've been known to be dense before. My take on timing errors would be that they cause bits to be misinterpreted: since sender and receiver have subtly different experiences with their independent clocking mechanisms, the receiver interprets things slightly askew and gets a different "answer" than the sender sent. The result would be different bits fed into the DAC than if the transfer were perfect. Is there something to "timing errors" beyond this that I'm missing?
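That intuition can be checked with a toy model: as long as edge displacement stays under half a bit period, a receiver sampling at the nominal bit centres still recovers every bit, so jitter can be present without a single bit being misread. A Python sketch (bit period, jitter magnitude, and stream length are all made-up numbers):

```python
import random

random.seed(1)
BIT_PERIOD = 1.0
JITTER = 0.2 * BIT_PERIOD   # well inside the half-period decision margin

bits = [random.randint(0, 1) for _ in range(1000)]

# Edge times of the transmitted waveform, each displaced by jitter
edges = [i * BIT_PERIOD + random.uniform(-JITTER, JITTER)
         for i in range(len(bits) + 1)]

def waveform(t):
    """Value of the jittered waveform at time t."""
    for i in range(len(bits)):
        if edges[i] <= t < edges[i + 1]:
            return bits[i]
    return bits[-1]

# Receiver samples at the nominal centre of each bit cell
recovered = [waveform((i + 0.5) * BIT_PERIOD) for i in range(len(bits))]
assert recovered == bits  # every bit survives despite the jitter
```

So the data can arrive bit-perfect and the jitter still matters downstream, because the DAC conversion instants inherit the timing error even when the values do not.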