How can different CAT5/6 cables affect sound?


While it is beyond doubt that analog cables affect sound quality, and that S/PDIF, TOSLINK and AES/EBU can affect SQ depending on the buffering and clocking of the DAC, I am at a loss to find an explanation for how different CAT5 cables could affect the sound.

Audio streamed over CAT5 is typically carried by the TCP protocol.  This protocol is error correcting: each segment carries a checksum, and if the receiver's computed checksum matches, it acknowledges the data.  If no acknowledgement arrives within the timeout interval, the sender retransmits.  Segments may also arrive out of order, and the receiver must re-sequence them correctly.
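To make the error-detection idea concrete, here is a minimal sketch of the Internet checksum (RFC 1071) that TCP uses over its segments. It is simplified: real TCP also covers an IP pseudo-header, and acknowledgements actually work by sequence number rather than by echoing checksums, but the corruption-detection principle is the same.

```python
# Simplified sketch of the Internet checksum (RFC 1071) used by TCP to
# detect corrupted segments: sum the 16-bit words, fold the carries back
# in, and take the one's complement.

def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                  # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]   # sum 16-bit big-endian words
    while total >> 16:                 # fold carry bits back into low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF             # one's complement

def verify(data: bytes, cksum: int) -> bool:
    # Checksumming the data together with its checksum yields 0 when intact.
    return internet_checksum(data + cksum.to_bytes(2, "big")) == 0

segment = b"some audio payload"
cksum = internet_checksum(segment)
print(verify(segment, cksum))                   # intact segment verifies
print(verify(b"some audiX payload", cksum))     # corrupted segment fails
```

A segment that fails this check is simply never acknowledged, so the sender's timeout fires and the data is sent again; corruption on the wire becomes retransmission, not altered audio.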

Thus, unless the cable is hopeless (in which case nothing works at all), the receiver ends up with an exact copy of the data from the sender, AND there is NO timing information associated with TCP. The receiver must therefore depend on its own internal clock for timing.

That is different with S/PDIF: clocking data is embedded in the stream, which is why sources (e.g. high-end Aurenders) have very accurate, low-jitter OCXO clocks and can sound better than USB connections into DACs with less precise clocks.
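The "clocking data is embedded" point can be sketched in code: S/PDIF uses biphase-mark coding, where the line level toggles at every bit-cell boundary (that edge is the recoverable clock) and a logical 1 adds an extra mid-cell toggle. This toy encoder/decoder shows the scheme, not any particular chip's implementation.

```python
# Sketch of biphase-mark coding (BMC), the line code S/PDIF uses to embed
# the clock in the data stream. Each bit becomes two half-cells; every cell
# starts with a transition (the clock edge), and a 1 adds a mid-cell toggle.

def bmc_encode(bits, level=0):
    """Encode bits to half-cell levels; works from either starting level."""
    out = []
    for b in bits:
        level ^= 1                 # boundary transition: the embedded clock
        out.append(level)
        if b:                      # logical 1: extra transition mid-cell
            level ^= 1
        out.append(level)
    return out

def bmc_decode(halves):
    """Differing half-cells within a cell decode to 1, equal ones to 0."""
    return [1 if halves[i] != halves[i + 1] else 0
            for i in range(0, len(halves), 2)]

data = [1, 0, 1, 1, 0, 0, 1]
line = bmc_encode(data)
assert bmc_decode(line) == data    # round-trips regardless of start level
```

Because a transition is guaranteed at every cell boundary, the receiver can regenerate the sender's clock from the signal itself, and any jitter on those edges can propagate into the DAC. TCP delivers no such edges, which is the asymmetry the post describes.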

Am I missing something, given that many people hear differences with different patch cords?

retiredaudioguy

In most networks you are receiving all of the data and it is bit perfect, and if your service is using TCP rather than bare UDP, then it is for sure bit perfect. But that is not the big issue; noise is the big issue. If CAT6 offered perfect shielding, there wouldn't be CAT8. My system sounded great with a 30' BJC cable, until I replaced it with optical. You don't notice the noise until it's gone, and thus begins your cable journey.

If noise picked up by unshielded CAT5 cables is the problem, and from the responses it seems it might well be, would that argue that a Wi-Fi (and thus decoupled) connection would work best?

I cannot personally comment on this, as I rarely listen to streaming sources on my big rig (I hate to say "reference system"), which does not support Wi-Fi anyway, while my second system has only Wi-Fi capability.

My wife is a wonderful supporter of my high-end journey (she supported my upgrade from a K-01xs to the xd) but would probably be critical of a 30-foot cable draped around the room from my WAP/switch to the Bluesound Node. I might try it one day when she is out at the gym, though, as I have a CAT5 cable-building kit.

The actual transfer of the bits seems unlikely to be the issue: the highest bit rate for streaming would appear to be 18.432 Mbps (2 channels × 192 kHz sample rate × 24-bit depth × 2 for TCP and other overhead), and the CAT5 standard is rated for 100 Mbps.
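That headroom claim is easy to check in code (note 2 × 192 000 × 24 × 2 = 18 432 000 b/s, i.e. 18.432 Mbps). The 2× multiplier is the post's own rough allowance for protocol overhead, not a measured figure.

```python
# Worst-case PCM streaming bit rate versus what CAT5 (100BASE-TX) carries.
# The overhead factor of 2 is a deliberately generous rough allowance for
# TCP/IP and Ethernet framing, following the post above.

channels = 2
sample_rate = 192_000          # Hz (highest common streaming rate)
bit_depth = 24                 # bits per sample
overhead_factor = 2            # rough allowance for protocol overhead
link_bps = 100_000_000         # CAT5 / 100BASE-TX rated speed

payload_bps = channels * sample_rate * bit_depth      # 9,216,000 b/s
with_overhead = payload_bps * overhead_factor         # 18,432,000 b/s

print(f"{with_overhead / 1e6:.3f} Mbps needed "
      f"of a {link_bps / 1e6:.0f} Mbps link "
      f"({100 * with_overhead / link_bps:.1f}% utilization)")
```

Even with the doubled allowance, the link runs at well under a fifth of its rated capacity, which is why raw throughput is a poor explanation for audible differences.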

 

This is curious: optical is reportedly superior to LAN cable, and that is my experience as well. Some will say this is imagination; others will trust their senses. Then we have reports that the transceivers in these optical devices don't all sound alike, and now also reports that managed audiophile switches provide superior sound by decreasing network traffic. It is all very curious: networks have an impact on sound quality, networks have no impact on sound quality, take your pick. I suspect this thread could carry on and on with no conclusive evidence on either side, and it will not provide the final word.

A question for people who do hear a difference between cables.

Is your streamer integrated with the DAC or is there a link to an external DAC?

The answer to this might provide insights regarding the induced noise theory.

Another test for those who run a long cable from the switch to the streamer, though it will cost about $25: try running the long cable to a Netgear MiniSwitch located as close as possible to the streamer, with the shortest possible (shielded?) cable from the MiniSwitch to the streamer. That minimizes RFI pickup, and, hopefully, the MiniSwitch will isolate its output from any noise on its input.

My big rig's network connections are implemented by a Wi-Fi-to-Ethernet adapter connected to a MiniSwitch, with very short cables from there to the Aurender server and the Bluesound Vault that I use for ripping and streaming.  (The Aurender does not support Presto.)

Anyone who holds the mistaken belief that cables supporting TCP transmissions, routers, network switches and the like can make a difference in sound quality should be required to read the following before opining further:

http://units.folder101.com/cisco/sem1/Notes/ch7-technologies/encoding.htm

Followers of the "all-digital-is-analog" superstition won't likely read past the first page, so the TL;DR is that TCP guarantees error-free data delivery regardless of the medium over which it's transmitted, thereby effectively abstracting away the physical layer.

That said, copper cables transmitting TCP can indirectly affect sound quality by hosting parasitic noise. As others have confirmed, this is easily solved by using SFP (fiber) in the last run of cabling going into your streamer. This prevents any ground noise from reaching your system by galvanically isolating it (fiber is non-metallic, therefore non-conductive, therefore it does not pick up or transmit EMI or RFI).

Best practice is to keep all Ethernet gear in a utility closet or room at a remove from the listening room, wall warts and SMPS-powered computers and whatnot included (keep them on a separate AC circuit from your system), and run SFP fiber from your switch to your streamer. This way what happens in the utility room stays in the utility room, and inky black noise floors are yours.