How can different CAT5/6 cables affect sound?


While it is beyond doubt that analog cables affect sound quality, and S/PDIF, TOSlink and AES/EBU can affect SQ depending on the buffering and clocking of the DAC, I am at a loss to find an explanation for how different CAT5 cables can affect the sound.

The signal over Cat 5 is transmitted using the TCP protocol.  This protocol is error correcting: each packet carries a header with a checksum.  If the receiver computes the same checksum from the payload, it acknowledges the packet; if no acknowledgement arrives within the timeout interval, the sender resends the packet.  Packets may also arrive out of order, and the receiver must put them back in the correct sequence.
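To make that concrete, here is a toy Python sketch of the checksum-and-retransmit idea (not real TCP; the link model, payload and corruption rate are invented for illustration):

```python
import hashlib
import random

def checksum(payload: bytes) -> str:
    # Stand-in for TCP's 16-bit checksum; the principle is the same.
    return hashlib.md5(payload).hexdigest()

def unreliable_link(payload: bytes, chk: str):
    # Simulate a marginal cable: occasionally corrupt a byte in transit.
    if random.random() < 0.2:
        payload = bytes([payload[0] ^ 0xFF]) + payload[1:]
    return payload, chk

def send_reliably(payload: bytes) -> bytes:
    # Sender keeps retransmitting until the receiver's checksum matches (the ACK).
    while True:
        received, chk = unreliable_link(payload, checksum(payload))
        if checksum(received) == chk:
            return received  # acknowledged: receiver now holds a bit-exact copy

audio_chunk = bytes(range(256))
assert send_reliably(audio_chunk) == audio_chunk  # always bit-exact on arrival
```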

Thus, unless the cable is hopeless (in which case nothing works), the receiver ends up with an exact copy of the data the sender transmitted, AND there is NO timing information associated with TCP. The receiver must therefore depend on its internal clock for timing.
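In other words, the data path and the timing path are decoupled, roughly like this (a simplified sketch; the block counts and delays are made up for illustration):

```python
import queue
import random
import threading
import time

buffer = queue.Queue()

def network_receiver():
    # Data arrives in irregular bursts - TCP gives no timing guarantees...
    for block in range(50):
        time.sleep(random.uniform(0.0, 0.02))   # jittery, bursty arrival
        buffer.put(block)

def dac_playback(sample_period=0.01):
    # ...but playback is paced entirely by the receiver's own clock.
    for _ in range(50):
        block = buffer.get()                    # bit-exact data out of the buffer
        time.sleep(sample_period)               # local oscillator sets the timing

threading.Thread(target=network_receiver, daemon=True).start()
dac_playback()
```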

That is different with S/PDIF: clock information is embedded in the stream itself, which is why sources (e.g. high-end Aurenders) have very accurate, low-jitter OCXO clocks and can sound better than USB connections into DACs with less precise clocks.
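For anyone curious how the clock gets embedded: S/PDIF uses biphase-mark coding, where every bit cell begins with a transition, so the receiver can recover the sender's clock from the edges themselves. A stripped-down sketch (real S/PDIF frames also carry preambles and status bits):

```python
def biphase_mark_encode(bits, level=0):
    # Every bit cell starts with a transition (a guaranteed clock edge);
    # a logical 1 adds a second transition mid-cell.
    line = []
    for bit in bits:
        level ^= 1            # transition at the start of every cell
        line.append(level)
        if bit == 1:
            level ^= 1        # extra mid-cell transition encodes a 1
        line.append(level)
    return line

print(biphase_mark_encode([1, 0, 1, 1, 0]))
# [1, 0, 1, 1, 0, 1, 0, 1, 0, 0] - two half-cells per bit, edges carry the clock
```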

Am I missing something, since many people hear differences with different patch cords?

retiredaudioguy

In playback (i.e., a DAC converting to analog), jitter can smear the timing of the reconstructed waveform, producing subtle distortion or loss of clarity.

There is an important difference between digital music playback and other computer data transfers: the concern is time-domain errors rather than the absolute correctness of the data file. With music, there is less time to correct errors during the process of converting the digital stream into analog sound waves. As others have mentioned, there is nothing wrong with the music file data; the differences are in the timing of when a binary 1 becomes a binary 0 versus when that transition should actually happen based on the original recording. There is probably a valid debate as to whether those differences are audible, and another as to whether some people can hear those errors and other people cannot.
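To put rough numbers on how big a timing error has to be before it matters (my own back-of-envelope, not from the posts above): the worst-case amplitude error from a clock edge that lands Δt too early or late is roughly the signal's slew rate times Δt.

```python
import math

def jitter_error_db(f_hz, jitter_s):
    # Worst-case amplitude error when a full-scale sine of frequency f_hz is
    # sampled with a clock edge that is off by jitter_s seconds:
    # error ~ slew rate * dt = 2*pi*f*A*dt (A = 1, i.e. full scale).
    error = 2 * math.pi * f_hz * jitter_s
    return 20 * math.log10(error)

# Hypothetical numbers, just to put jitter on the same scale as quantization noise:
print(jitter_error_db(20_000, 1e-9))     # 1 ns jitter at 20 kHz  -> about -78 dBFS
print(jitter_error_db(20_000, 100e-12))  # 100 ps jitter          -> about -98 dBFS
```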

My real-world experience listening to ethernet cables, including an AmazonBasics Cat 6 cable, a Supra Cat 8 cable, and an ethernet cable from a Chinese company via Amazon, is that they all sounded different.

What I have heard from the Supra cable versus the AmazonBasics cable is:

- More presence to voices and instruments which sound more forward and distinct in presentation

- Richer tonality

- Less grain to the sound

- Better resolution due to a lower noise floor (This is audible when comparing a cleaner signal to one that is less clean)

- Easier to follow bass lines

- Pace seems faster due to more clarity and better definition to the leading edge of notes.

 

In terms of potential bias in my listening, the Supra cable was something that I could return at no cost. I was interested in hearing:

1. Whether there was an audible difference.

2. The magnitude of that audible difference.

3. Whether that audible difference was worth the $60 that changing to the Supra Cat 8 ethernet cable would cost me versus sticking with the $8 AmazonBasics cable.

@calvinandhobbes The theory that jitter 'smears' the timing, thus 'producing subtle distortion or loss of clarity', is not supported by how the USB connection actually operates. S/PDIF connections, fiber or coax, operate under similar mechanisms and with similar error rates to USB. If your DAC is connected to your streamer via USB, then USB isochronous transfer mode is used with CRC only, not full error checking. But looking deeper, that's not really much of an issue. So bear with me; here's what that means.

At the physical layer (PHY), USB 3.2 allows a BER (bit error rate) of 1 bit in 10^12 bits (roughly 1 bit per 125 GBytes). This high level of reliability comes from the whole transmission chain working together: the transmitting PHY, the media (receptacles and cables), and the receiving end. The USB 3.2 specification has built-in protections against bit errors, such as redundancy in packet framing and link commands, along with CRC for detecting multiple bit errors. The error recovery process, which can be hardware- or software-driven, kicks in upon detection of integrity issues.
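To put that spec in concrete terms (simple arithmetic, nothing USB-specific):

```python
ber = 1e-12                       # USB 3.2 PHY-level BER allowance: 1 bit in 10^12
bits_per_error = 1 / ber
bytes_per_error = bits_per_error / 8
print(f"~{bytes_per_error / 1e9:.0f} GB per expected bit error")   # ~125 GB
```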

When errors are detected, the hardware or software initiates a re-transmission request, with the host controller repeating this up to three times. Further errors and retries can reduce overall throughput, but a well-designed receiving PHY layer can mitigate this by cleaning up issues and reducing error rates.

At the link layer, within the digital logic of the controller, the BER expectation is even more stringent: 1 bit in 10^20 bits (roughly 1 bit in every 12.5 million terabytes). Once the data is in the silicon, errors are minimal. However, if an error is detected, the controller attempts up to three retries.

Isochronous transfers, used for streaming, can afford to lose packets, as the next frame of data is always prioritized. But that is exceedingly rare - on the order of one bit error every 107 hours of 24-bit/192 kHz playback. This would include any jitter-induced errors. Jitter-induced errors, when they occur, would trigger a CRC (cyclic redundancy check) error and a retry.
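As a sanity check on that order of magnitude (simple arithmetic on the raw stereo payload only; the exact hours figure shifts with channel count and framing-overhead assumptions):

```python
ber = 1e-12                                         # PHY-level allowance from above
sample_rate, bit_depth, channels = 192_000, 24, 2
payload_bps = sample_rate * bit_depth * channels    # ~9.2 Mbit/s of raw audio

seconds_per_error = (1 / ber) / payload_bps
print(f"~{seconds_per_error / 3600:.0f} hours per expected bit error")  # ~30 h here
```

Either way, a bit error during streaming is expected on the order of once per tens of hours of continuous playback, and it is caught by the CRC when it happens.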
