Difference in sound between copper and silver digital cables?


Is there a difference in sound between copper and silver digital cables, or is the difference purely in the implementation?
pmboyd
I tend to doubt that anyone measures or publishes cable performance. If a claim doesn't make sense, it's probably not true.
@wig  How does the cable you made compare, head-to-head, with the Inakustik?
Williewonka, the first layers of Ethernet don't have a checksum, and you'd still need a truly shitty cable to fail at the physical or data link layer.

Statistics on my server (current uptime 27 days):
Iface   MTU    RX-OK     RX-ERR  RX-DRP  RX-OVR  TX-OK      TX-ERR  TX-DRP  TX-OVR  Flg
lo      65536  2420653   0       0       0       2420653    0       0       0       LRU
eth0    1500   95001512  0       0       0       122699833  0       0       0       BOPRU
It's probably a bit messy to read here, but you can see there are no TX or RX errors at the interface level (eth0 is the physical interface; ignore the 'lo' interface, that's the loopback, aka 127.0.0.1).
Anyway, no failures across 200+ million packets (RX and TX combined) on a 1000BASE-T link over a $5 Cat5e cable (5 m long).
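If anyone wants to check their own machine, here is a rough Python sketch that reads the same hardware counters. It is only a sketch: it assumes Linux and the standard /proc/net/dev layout (the same data source ifconfig and "netstat -i" report from), so adjust for your own system.

def interface_errors(path="/proc/net/dev"):
    """Read per-interface packet and hardware error counters (Linux only)."""
    stats = {}
    with open(path) as f:
        lines = f.readlines()[2:]          # the first two lines are headers
    for line in lines:
        name, data = line.split(":", 1)
        fields = data.split()
        # RX fields: bytes, packets, errs, drop, fifo, frame, compressed, multicast
        # then the TX fields: bytes, packets, errs, drop, ...
        stats[name.strip()] = {
            "rx_packets": int(fields[1]),
            "rx_errs": int(fields[2]),
            "tx_packets": int(fields[9]),
            "tx_errs": int(fields[10]),
        }
    return stats

for iface, s in interface_errors().items():
    print(iface, s)

On a healthy wired link the "errs" counters should stay at zero, just like in the stats above.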

PS : USB does have hardware CRC checking, but USB audio/video devices use isochronous transfers to avoid latency, and isochronous traffic is never retransmitted on error (a corrupted packet is simply lost).
@danip4 - I'm far from an expert on this topic, but this thread appears to show checksums are employed in Ethernet...

https://networkengineering.stackexchange.com/questions/37492/how-does-the-tcp-ip-stack-handle-udp-ch...
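For what it's worth, the checksum that thread discusses is (as I understand it) just a ones'-complement sum over 16-bit words, per RFC 1071. A toy Python sketch, purely for illustration:

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum over 16-bit words (RFC 1071)."""
    if len(data) % 2:                      # pad odd-length input with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return ~total & 0xFFFF

good = internet_checksum(b"example payload")
bad = internet_checksum(b"fxample payload")        # one corrupted byte
print(f"{good:#06x} vs {bad:#06x}")                # the sums differ

The receiver recomputes the sum and discards the packet if it doesn't match; whether anything gets resent depends on the protocol above it.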

What I can confirm is that using Ethernet conveys a significantly better audio result than any of my asynchronous-mode transfers, i.e. USB, SPDIF or optical. Each of those required the best cable possible to even approach the level of sound quality that Ethernet provides.

So I think my point still stands: the quality of sound from a digital source that uses asynchronous transfer protocols can be affected by the quality of the cable used.

Regards

Williewonka, the TCP/IP stack is software, not hardware.

I used to be CCNA certified (never renewed because it costs money for nothing and I don't work in ICT anymore).
I am not very good at explaining this, but I will try to cover the important parts (as they relate to this topic).

Networking is built out of layers (see the OSI model), and only the first layers are hardware layers.

The first layer is the physical layer (interface port and the copper).

The second is the data link layer, where Ethernet frames are created and MAC addresses are used; it's the last hardware layer. While this layer has a CRC, it will NEVER resend a frame: lost is lost. The server interface stats I posted come from this layer. Typical layer 2 devices are Ethernet switches.
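To make the CRC part concrete, here is a toy Python sketch. The Ethernet FCS is a CRC-32 computed over the frame, and zlib.crc32 uses the same CRC-32 polynomial, so it works as a stand-in (the "frame" below is just dummy bytes, not a real Ethernet frame):

import zlib

frame = bytes(range(64))                     # dummy payload, not a real frame
fcs = zlib.crc32(frame)                      # the sender appends this as the FCS

corrupted = bytearray(frame)
corrupted[10] ^= 0x01                        # flip a single bit "on the wire"
assert zlib.crc32(bytes(corrupted)) != fcs   # the receiver's CRC check trips

# At layer 2 the NIC simply drops the bad frame and bumps RX-ERR;
# nothing at this layer ever asks the sender to retransmit.
print(f"good FCS {fcs:#010x}, corrupted {zlib.crc32(bytes(corrupted)):#010x}")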

Starting at layer 3 it's all software. This is where TCP/IP packets are created and, when necessary, resent. If you look at the TCP/IP statistics you will see that packets are dropped, blocked, etc. That doesn't mean there was an error in the hardware communication; if traffic reaches this layer, "there were no errors on the cable", but packets were dropped/blocked for a different reason (unexpected or unwanted packets).

I am not 100% sure about netstat on Windows, but if I am not mistaken "netstat -e" shows the hardware stats and "netstat -s" the TCP/IP part.
There you can see that the first one shows no errors (unless you are on wifi, or have a bad cable or interface), while the second one shows errors, dropped packets, etc. (again, those are for non-physical reasons).
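On Linux you can see the software-side counters directly. A minimal sketch, assuming the standard /proc/net/snmp layout (this is the data "netstat -s" summarizes):

def tcp_counters(path="/proc/net/snmp"):
    """Return the kernel's TCP counters as a dict (Linux only)."""
    with open(path) as f:
        rows = [line.split() for line in f if line.startswith("Tcp:")]
    # rows[0] holds the field names, rows[1] the values, both prefixed "Tcp:"
    return dict(zip(rows[0][1:], map(int, rows[1][1:])))

c = tcp_counters()
# Retransmissions here are layer-4 software recovery, not cable errors.
print("segments sent:", c["OutSegs"], "retransmitted:", c["RetransSegs"])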

Yes, I lied (partly), because I didn't want to explain the whole thing in depth. Layer 2 does have a CRC, but it never resends the way most people here would expect from a system using a checksum. Even so, in 27 days my server never saw a bad Ethernet frame: every '0' and '1' arrived correctly at the next node (the switch), and not a single CRC check tripped.

PS : Ethernet is asynchronous; there is no separate clock line between nodes.