We Need To Talk About Ones And Zeroes


Several well-respected audiophiles in this forum have stated that the sound quality of hi-res streamed audio equals or betters the sound quality of traditional digital sources.

These are folks who have spent decades assembling highly desirable systems and whose listening skills are beyond reproach. I for one tend to respect their opinions.

Tidal, a company of Norwegian origin, is headquartered in New York City. Qobuz is headquartered in Paris, France. Both services are hosted on Amazon Web Services (AWS), the cloud-infrastructure giant that commands roughly one third of the world's cloud-services market.

AWS server farms are any audiophile's nightmare. Tens of thousands of multi-CPU servers and industrial-grade switches crammed in crowded racks, miles of ordinary cabling coursing among tens of thousands of buzzing switched-mode power supplies and noisy cooling fans. Industrial HVAC plants humming 24/7.

This, I think, demonstrates that audio files digitally converted to packets of ones and zeroes successfully travel thousands of miles through AWS's digital sewer, only to arrive in our homes completely unscathed and ready to deliver sound quality that, by many prominent audiophiles' accounts, rivals or exceeds that of $5,000 CD transports.

This also demonstrates that digital transmission protocols just work flawlessly over noise-saturated industrial-grade lines and equipment chosen for raw performance and cost-effectiveness.

This also puts into perspective the importance of improvements deployed in the home, which is to say in the last ten feet of our streamed music's multi-thousand-mile journey.


No worries, I am not about to argue that a $100 streamer has to sound the same as a $30,000 one because "it's all ones and zeroes".

But it would be nice to agree on a shared baseline of understanding, because without one intelligent discourse becomes difficult. The sooner everyone gets on the same page, which is to say accepts that our systems' digital chains process nothing less and nothing more than packets of ones and zeroes, the sooner we can move on to genuinely thought-provoking questions: Why don't all streamers sound the same? Why do cables make a difference? Wouldn't that be more interesting?

devinplombier

So I asked Google AI, "How does TCP/IP correct for packet loss and corruption?"

UDP/IP, for the record, does none of these things. Here is Google AI's answer:

TCP/IP uses mechanisms like checksums, sequence numbers, and retransmission to ensure reliable data delivery, handling both packet loss and corruption. TCP detects missing or corrupted packets and requests their retransmission, ensuring that data arrives in the correct order and without errors. 

Here’s a more detailed breakdown:

1. Error Detection (Checksums):

  • TCP uses a checksum to verify the integrity of data during transmission. The sender calculates a checksum based on the data and sends it along with the data. The receiver recalculates the checksum and compares it to the received value. If they don’t match, it indicates data corruption, and the receiver requests retransmission from the sender. 

2. Sequence Numbers and Acknowledgements:

  • TCP uses sequence numbers to track the order of packets. If a packet is lost or arrives out of order, the receiver can use these sequence numbers to detect the issue and request retransmission.
  • The receiver sends acknowledgements (ACKs) back to the sender, indicating which packets it has received successfully. If the sender doesn’t receive an ACK within a certain timeframe, it assumes a packet is lost and retransmits it. 

3. Retransmission:

  • If a packet is lost or corrupted, the sender will retransmit it, ensuring that the receiver eventually gets all the data it needs. 
  • The sender also implements timers to ensure that lost or corrupted packets are retransmitted within a reasonable time. If the timer expires without an ACK, the sender retransmits the packet. 

4. Flow Control:

  • TCP employs flow control to prevent the sender from sending data faster than the receiver can handle. This helps avoid packet loss due to buffer overflows on the receiver’s end. 

5. Congestion Control:

  • TCP also includes congestion control mechanisms to avoid network congestion, which can lead to packet loss. These mechanisms help regulate the rate at which data is transmitted, preventing the network from becoming overloaded. 

In summary: TCP/IP uses a combination of error detection (checksums), sequence numbers, acknowledgements, retransmission, flow control, and congestion control to ensure reliable data delivery, handling both packet loss and corruption. 
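
To make the checksum idea concrete, here is a minimal Python sketch of the Internet checksum (RFC 1071) that TCP's error detection builds on. It is an illustration, not TCP itself: real TCP also sums a pseudo-header containing the IP addresses, and a receiver that detects a mismatch simply discards the segment, after which the missing ACK triggers retransmission.

```python
# Minimal sketch of the RFC 1071 Internet checksum used by TCP.
# Illustrative only; the pseudo-header that real TCP includes is omitted.

def internet_checksum(data: bytes) -> int:
    """Ones'-complement sum of 16-bit words, folded to 16 bits."""
    if len(data) % 2:                      # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

segment = b"ones and zeroes"               # stand-in for a packet's payload
good = internet_checksum(segment)

corrupted = bytes([segment[0] ^ 0x01]) + segment[1:]   # flip a single bit
print(internet_checksum(corrupted) == good)            # False: corruption caught
```

A single flipped bit anywhere in the payload changes the folded sum, the receiver's recomputation disagrees, and the segment is thrown away rather than passed up to the application.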

@devinplombier 

together with all the AI-generated nonsense @richardbrand has been cut-and-pasting in this thread

Google AI has access to all the nonsense on the web, including yours, but it is good at weighing the entire body of content. If the consensus it reaches agrees more with my understanding than with yours, consider that you just may not be entirely right.

What I have tried to do here is back up three assertions with the two sources you just might believe: Wikipedia and Google AI.

You labelled this topic "We Need To Talk About Ones And Zeroes" and we clearly do, because there is so much misinformation surfacing here.

My backed-up assertions are

  1. Streaming does not guarantee packet delivery nor bit-perfect accuracy
  2. Ethernet on its own does not guarantee packet delivery nor packet accuracy nor packet timing
  3. USB when used for streaming does not guarantee error-free delivery

I understand why audiophiles who have committed to streaming might react in horror to these assertions.  I urge you to do your own research with an open mind.

There are much better formats than 2-channel PCM, after all!

Ahhh, the power of buffers and asynchronous output.

What I think many are missing is that streamers are not like the phones of old, with effectively one solid circuit between caller and listener. There are two separate processes going on at once: the part that feeds the buffer and the part that doles out the end result.

For pre-recorded media the buffer/bucket can be 30 seconds deep or more; at CD quality (44,100 samples/s × 2 channels × 2 bytes), 30 seconds is only about 5 MB. The part that receives the stream feeds the bucket ahead of time. The idea is to leave enough headroom that when the TCP/IP stream says "I'm missing packets" or "I have a broken connection," there is time to reach back to the source and re-request the missing data or restart the stream. There is a subtle science in guessing the best strategy for getting things going again, and in deciding when to declare surrender. If you stream music in your car, you have no idea how much your phone relies on these buffers to get you through the bridge without interruption.

Here I monitor my Internet access very closely and have failover Internet, so when my cable Internet goes away my cellular Internet takes over. The process takes about 10-20 seconds. I can tell you that this has happened repeatedly and it has not affected my music. Of course there are severe Internet events that eventually stop everything, but the ability of my streamers to drive right over those bumps is a testament to how resilient the whole process is.

While the feeder is busy fixing up the missing data, your DAC or TV still has those 30 seconds of data to offer you, so with luck the stream gets fixed before the bucket runs empty.
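
A toy sketch of those two processes, using nothing beyond the Python standard library; the chunk sizes, timings, and outages are invented for illustration. The feeder occasionally stalls, as if re-requesting lost packets, while playback sips steadily from the bucket and never notices:

```python
# Toy model of a streamer's buffer: one thread fills the bucket in bursts,
# another drains it at a steady rate. All numbers are made up.

import queue
import threading
import time

buffer = queue.Queue(maxsize=300)        # ~30 s bucket at 10 chunks per second

def network_feeder():
    """Fills the bucket ahead of time; stalls now and then as if recovering lost data."""
    for chunk in range(600):             # a minute of "music"
        if chunk and chunk % 200 == 0:
            time.sleep(2.0)              # simulated outage / retransmission delay
        buffer.put(chunk)                # blocks only while the bucket is full

def playback():
    """Drains the bucket at a steady rate, the way a DAC consumes samples."""
    for _ in range(600):
        try:
            buffer.get(timeout=1.0)
        except queue.Empty:
            print("buffer underrun: audible dropout")
            return
        time.sleep(0.1)                  # pretend to play 100 ms of audio
    print("stream finished with no dropouts")

threading.Thread(target=network_feeder, daemon=True).start()
playback()
```

Shrink the queue to a second or two of audio and lengthen the simulated outage, and the underrun branch fires instead: that is the dropout you hear when the bridge outlasts your phone's buffer.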

We keep talking about noise. Ethernet is naturally galvanically isolated; its transformers are specified to withstand on the order of 1,500 volts. They have to be. Fiber of course is as well. The bigger issues in my mind are surges high enough to break through that isolation, or your Ethernet cables leaking into your AC lines or interconnects.

Noise in network transmission is not additive: each hop regenerates the bits from scratch, so my router doesn't add noise to the signal from the cable provider. Any noise that matters is between my router and the next device down. Unreliability accumulates across hops, but noise does not. If you have an Ethernet cable capable of 1GigE with no packet loss and no jitter, then congratulations, you've done all you can.

The issue that then remains is jitter on the output. The input and output streams share a buffer, and making sure they do not interfere with each other is another subtle science, one I'm sure is now handled by a variety of low-level libraries for whatever microprocessor your streamer uses. I'm not saying they are all equivalent, only that how well your streamer handles this matters.
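
For the curious, the kind of structure those low-level libraries revolve around looks roughly like the sketch below: a ring buffer whose two ends are advanced by different threads. This is a hypothetical minimal version, not any vendor's actual code; real implementations often replace the lock with atomic indices. The point is that the output side pops at its own steady clock, so burstiness on the network side never reaches the DAC's timing; the only failure it can cause is an empty buffer.

```python
# Hypothetical single-producer/single-consumer ring buffer:
# the network thread pushes, the DAC-facing thread pops on its own clock.

import threading

class RingBuffer:
    def __init__(self, size: int):
        self.slots = [None] * size
        self.size = size
        self.read = 0                    # advanced only by the output thread
        self.write = 0                   # advanced only by the network thread
        self.lock = threading.Lock()

    def push(self, sample) -> bool:
        with self.lock:
            if (self.write + 1) % self.size == self.read:
                return False             # full: the network side backs off
            self.slots[self.write] = sample
            self.write = (self.write + 1) % self.size
            return True

    def pop(self):
        with self.lock:
            if self.read == self.write:
                return None              # empty: an underrun, the one audible failure
            sample = self.slots[self.read]
            self.read = (self.read + 1) % self.size
            return sample
```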

My backed-up assertions are

  1. Streaming does not guarantee packet delivery nor bit-perfect accuracy
  2. Ethernet on its own does not guarantee packet delivery nor packet accuracy nor packet timing
  3. USB when used for streaming does not guarantee error-free delivery

At this level, absolutely nothing guarantees error-free delivery. This is a straw-man argument in which one picks and chooses what counts as error-free.

The question is, is it audible?  I'd say no.  What you hear is the quality of the streamer's performance and the original stream.  There are legitimate arguments to be made about any given device's jitter performance or DAC reproduction and that's about all.

So enjoy your cheapie cables. 

That said, having heard horror stories of lightning traveling down the cable modem and taking out multiple PCs, TVs, and other devices at once without leaving any visible damage, I use medical-grade Ethernet isolators at the ends of long (30') runs.