How can different CAT5/6 cables affect sound?


While it is beyond doubt that analog cables affect sound quality, and that SPDIF, TOSLink and AES/EBU can affect SQ depending on the buffering and clocking of the DAC, I am at a loss to find an explanation for how different CAT5 cables can affect the sound.

The signals over CAT5 are transmitted using the TCP protocol.  This protocol is error-correcting: each packet carries a header with a checksum.  If the checksum the receiver computes over the data matches the one in the header, it acknowledges the packet.  If no acknowledgement is received within the timeout interval, the sender resends the packet.  Packets may be received out of order, and the receiver must correctly sequence them.
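
As a rough illustration (a toy sketch, not a real TCP stack, and CRC32 here stands in for TCP's actual 16-bit checksum), the receiving side boils down to this: verify the checksum, acknowledge, and re-order by sequence number, and anything that fails simply gets sent again:

    import zlib

    def make_packet(seq, payload):
        # Toy "TCP-like" packet: sequence number, checksum, payload.
        return {"seq": seq, "crc": zlib.crc32(payload), "data": payload}

    def receive(packets):
        # Keep only packets whose checksum matches, acknowledge them,
        # and reassemble by sequence number regardless of arrival order.
        accepted = {}
        for p in packets:
            if zlib.crc32(p["data"]) == p["crc"]:
                accepted[p["seq"]] = p["data"]   # an ACK would be sent here
            # else: no ACK, so the sender's timeout triggers a re-send
        return b"".join(accepted[s] for s in sorted(accepted))

    # Packets arriving out of order still reassemble into the exact original bytes.
    pkts = [make_packet(1, b"world"), make_packet(0, b"hello ")]
    assert receive(pkts) == b"hello world"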

Thus, unless the cable is hopeless (in which case nothing works), the receiver has an exact copy of the data sent from the sender, AND there is NO timing information associated with TCP. The receiver must therefore depend on its internal clock for timing.

That is different with SPDIF: clocking data is embedded in the stream, which is why sources (e.g. high-end Aurenders) have very accurate, low-jitter OCXO clocks and can sound better than USB connections into DACs with less precise clocks.

Am I missing something, since many people hear differences with different patch cords?

retiredaudioguy

Once, one of those I-trust-my-ears guys was going on about how he didn’t like the sound of fibre Ethernet cables:

He claimed they sounded "glassy".

Glassy as in... fibre? Fiber? Fiberglass?? You can’t make that up.

@retiredaudioguy to answer your question, no, you are not missing anything: any components, cables, tweaks, etc. in the Ethernet chain (that is, everything upstream of the network streamer) have zero bearing on sound quality because, as you correctly stated, the TCP protocol itself is error-free.

So error-free, as a matter of fact, that your streamed music travelled thousands of miles from Qobuz or Tidal servers, through countless data farms from which twee audiophile accoutrements are conspicuously absent, yet arrived at your home thoroughly unscathed and without a single bit out of place.

It is still advisable to use SFP (fiber) for the last run of cable into the streamer, to ensure proper galvanic isolation of the audio system.

At the risk, of course, of making it sound "glassy"! 😂🤣🤣

 

@retiredaudioguy 

The signals over CAT5 are transmitted using the TCP protocol

They don’t have to be!  There is another transport protocol, the User Datagram Protocol (UDP), which like TCP runs on top of the Internet Protocol (IP).  UDP is often used for streaming, where it is more important to keep something flowing than to ensure accuracy.  Note that TCP cannot guarantee how long it will take to get a correct packet to its destination.  Think about that!
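
To make the contrast concrete, here is a minimal Python sketch (the address and port are made up) of the two transports side by side: the UDP sender fires datagrams and never learns whether they arrived, while the TCP connection checksums, acknowledges and re-sends behind the scenes, with no upper bound on how long that can take:

    import socket

    AUDIO_CHUNK = b"\x00" * 1024          # pretend this is 1 kB of PCM samples
    DEST = ("192.168.1.50", 5005)         # hypothetical streamer address and port

    # UDP: connectionless, fire-and-forget. No ACKs, no re-sends, no ordering.
    udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    udp.sendto(AUDIO_CHUNK, DEST)         # returns immediately; any loss goes unnoticed

    # TCP: connection-oriented. The stack acknowledges and re-sends for you,
    # but delivery time is open-ended.
    tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    tcp.connect(DEST)                     # raises if nothing is listening there
    tcp.sendall(AUDIO_CHUNK)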

So TCP/IP is perfect for file transfers, and is the reason that software transmitted over the internet retains exactly the same number of bugs at each end, provided you are prepared to wait!

Moving down the chain, Ethernet is a low-level protocol that by itself guarantees neither delivery nor timing: it does not promise that any data packet will arrive at all.  In effect it just throws packets into the air and hopes that the right receiver catches them.

Ethernet is a development of the ALOHA radio network built for data communication between the Hawaiian islands before the advent of satellites and undersea cables.  It is an example of carrier-sense multiple access with collision detection (CSMA/CD).  Multiple access means there is no central controller: any device can blast the airwaves with data packets.  To avoid two devices obliterating each other’s packets, each device must make sure the airwaves are clear before transmitting (carrier sense).

But this alone is not enough. Two devices on distant islands can each sense that the airwaves are free, and transmit simultaneously with the result that the signal is scrambled in between. Two conditions must be satisfied to correct for this.

Firstly, after transmitting, each device must also listen to ensure the airwaves are still clear.  If not, there has definitely been a collision (collision detection) and the device must wait a randomised time before trying again.  This randomised time is initially 0 or 1 periods, but if another collision is detected, the number of possible wait periods is doubled and so on in exponential progression.
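
The back-off rule is easy to sketch in Python (a simplified model of truncated binary exponential back-off, not a full MAC implementation):

    import random

    def backoff_slots(collisions):
        # Truncated binary exponential back-off: after the n-th collision in a row,
        # wait a random whole number of slot times in the range 0 .. 2^n - 1,
        # with the range capped at 1023 slots from the 10th collision onwards.
        n = min(collisions, 10)
        return random.randint(0, 2 ** n - 1)

    # 1st collision: wait 0 or 1 slots; 2nd: 0-3; 3rd: 0-7; and so on.
    for c in (1, 2, 3, 10):
        print(c, "collision(s) -> wait", backoff_slots(c), "slot times")

After 16 consecutive collisions a real interface gives up and reports an error rather than backing off forever.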

The second condition is that every message must be long enough to ensure collisions are detected even by the most widely separated, furthest-flung islands.  This is why Ethernet frames have a minimum length.
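
That requirement is where classic Ethernet’s 64-byte minimum frame comes from: at 10 Mbit/s the frame keeps the sender transmitting for one whole ‘slot time’, long enough for a collision anywhere on the segment to make it back before the frame ends.  Back-of-envelope arithmetic:

    BIT_RATE = 10_000_000       # classic 10 Mbit/s Ethernet
    MIN_FRAME_BITS = 64 * 8     # 64-byte minimum frame = 512 bits

    slot_time = MIN_FRAME_BITS / BIT_RATE
    print(f"slot time = {slot_time * 1e6:.1f} microseconds")   # 51.2 us

    # The sender is still on the wire for the whole slot time, so a collision
    # within the segment's maximum round-trip propagation delay is detected
    # in time to trigger the back-off and re-send described above.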

There is no way of knowing if the intended receiver is even on-air unless a higher-level protocol like TCP/IP is used on top of Ethernet.

So do audio protocols always use TCP/IP?  A big no.

I2S, for example, was designed by Philips in 1986 to allow two chips on a board to pass 2-channel 16-bit PCM data.  It has no, that is zilch, error detection, let alone error correction.

How about USB then?  While USB can carry TCP/IP, it has a special transfer mode for streaming, called isochronous mode.  Remember, streaming requires a near-constant stream of packets.  So in isochronous mode, USB does not re-transmit faulty packets.
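
A toy model of the difference (purely illustrative; this is not real USB audio class code): in isochronous mode a corrupted packet is simply dropped to keep the stream on schedule, instead of being asked for again:

    def isochronous_receive(frames):
        # Toy model of USB isochronous transfer: packets arrive on a fixed
        # schedule, and a corrupted one is discarded (replaced with silence here)
        # rather than re-transmitted, because a re-send would arrive too late.
        SILENCE = b"\x00" * 4
        return b"".join(payload if crc_ok else SILENCE for payload, crc_ok in frames)

    # One corrupted frame becomes a brief dropout; the stream's timing is preserved.
    stream = [(b"\x01\x02\x03\x04", True), (b"\xff\xff\xff\xff", False)]
    print(isochronous_receive(stream))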

Unlike Ethernet, USB does have a central controller, which polls every connected device to see if it wants to transmit.  As I understand it, each USB bus has a single root controller, and it polls every device on that bus.

Qobuz claims to use TCP/IP, but to do this with streaming content, the device running the Qobuz app(lication) must itself run the code for acknowledging packet receipt, waiting for missing packets and assembling the received packets back into the correct order.  Qobuz must therefore have an app installed on the last digital device in your chain to ensure accuracy.  Even then, it cannot guarantee timing across the mess of the Internet in order to avoid dropouts.

There is a properly engineered networking stack called the Open Systems Interconnection (OSI) model, which defines seven protocol layers.  The Internet, on the other hand, has grown like Topsy and only has four layers.  Most of its ’standards’ are just Requests for Comments (RFCs).

Silver disks and HDMI for me!

My point is actually stronger than TCP being error-free: it is that the submission of the buffered data to the DAC chips is totally isolated from the nature of the patch cords.  The data is stored in a RAM buffer and is fed to the DAC circuits by a clock in the DAC, so I am at a loss as to what is causing people to hear sonic differences.
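
A minimal sketch of that separation (illustrative only, with made-up chunk sizes): the network side dumps bytes into a RAM buffer whenever packets happen to arrive, and a separate loop clocks them out at a fixed cadence set by the DAC’s local oscillator, so the output timing never sees the patch cord at all:

    import collections, time

    buffer = collections.deque()            # the RAM buffer inside the streamer/DAC

    def network_side(chunks):
        # Packets arrive whenever the network delivers them: bursty, jittery,
        # re-ordered upstream. None of that matters once the bytes are buffered.
        for chunk in chunks:
            buffer.append(chunk)

    def dac_side(period=0.010):
        # The DAC's local clock, not the network, decides when each chunk plays.
        while buffer:
            chunk = buffer.popleft()
            time.sleep(period)              # fixed cadence from the local oscillator
            print("play", chunk)

    network_side([b"chunk-1", b"chunk-2", b"chunk-3"])
    dac_side()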

I just did an experiment.  I started PRESTO streaming through my entry-level Bluesound device, which is wired to my LAN.  After playing the stream for a few minutes, I pulled the LAN cable from the Bluesound Node.  The music continued for perhaps 20 seconds, so the streamer is buffering about 20 seconds’ worth of bits.  I verified that it is the streamer doing the buffering by repeating the exercise but pulling the TOSLink instead; the music stopped almost immediately.
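
Twenty seconds is a very plausible buffer.  Assuming CD-quality PCM (a guess on my part; Bluesound does not publish the Node’s buffer size), the arithmetic is roughly:

    SECONDS = 20
    SAMPLE_RATE = 44_100       # CD quality: samples per second, per channel
    CHANNELS = 2
    BYTES_PER_SAMPLE = 2       # 16-bit PCM

    buffer_bytes = SECONDS * SAMPLE_RATE * CHANNELS * BYTES_PER_SAMPLE
    print(buffer_bytes / 1e6, "MB")   # about 3.5 MB, trivial for a modern streamer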

 

OOPS.  Thanks Richard Brand.  Streamers usually use UDP, which does not have error correction, so a really bad patch cord could cause data errors.