It's your streamer, not your modem


So many discussions I've seen lately have been about upgrading Internet devices, especially modems and routers, to get the best possible audio.  Audiogoners are talking about installing 10 GigE (10 gigabits per second) cable for signals that barely need 10 megabits per second.  That's three full orders of magnitude more bandwidth than hi-res audio (192 kHz/24-bit) requires.
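The arithmetic is easy to check. Here's a back-of-envelope sketch, assuming uncompressed stereo PCM with no container or network overhead:

```python
# Bandwidth needed for uncompressed 192 kHz / 24-bit stereo PCM,
# compared with a 10 GigE link. Back-of-envelope figures only.
sample_rate = 192_000                            # samples per second
bit_depth = 24                                   # bits per sample
channels = 2                                     # stereo

audio_bps = sample_rate * bit_depth * channels   # ~9.22 Mbit/s
link_bps = 10 * 10**9                            # 10 GigE

print(f"hi-res PCM: {audio_bps / 1e6:.2f} Mbit/s")
print(f"headroom:   {link_bps // audio_bps}x")   # ~1085x, i.e. 3 orders of magnitude
```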

I've also seen worried discussions about home Internet connections having slightly higher latency and jitter.

None of this should matter with a decent streamer.  Let me give you an example.  Because my work requires me to be online with high reliability, I have two different Internet providers and a switch that detects a failure in one and switches me to the other.

It takes the switch approximately 40 seconds to detect that the Internet is down and fail over to the other connection.  40 seconds.  40,000 milliseconds.  For this test I shut the modem off.  From that moment, for the next 40 seconds, I had no working Internet.  Then my backup 5G Internet took over.  About three minutes after that, my primary Internet's modem had rebooted, my router had recognized it as available, and everything switched back over.

During the testing I coincidentally had Roon playing a random Jazz selection.

Not once did my audio stop.  Not even a hiccup.

Why?  Buffering.  Roon had fetched the entire song and doled it out to my endpoint a little at a time.
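Here's a minimal sketch of why that works. The numbers are hypothetical, not Roon's actual buffer sizes, but the principle is the same: once the track is fetched ahead of the playback position, playback never touches the network during the failover.

```python
# Minimal sketch (hypothetical numbers): a streamer that buffers the
# whole track up front plays straight through a 40-second outage.
track_seconds = 240                  # length of the song
buffered = track_seconds             # entire track already fetched locally
outage = set(range(60, 100))         # network down for 40 s mid-song

dropouts = 0
for t in range(track_seconds):
    if buffered > 0:
        buffered -= 1                # play one second from the local buffer
    elif t in outage:
        dropouts += 1                # would stall only if the buffer ran dry

print("dropouts during the 40 s outage:", dropouts)  # 0
```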

Point is, modem quality, router quality, switches, and Ethernet cables don't matter that much.  What does matter is the size of the buffer and the effectiveness of the anti-jitter circuitry in the DAC.

I do, by the way, recommend shielded cables, Ethernet isolators, and gas discharge surge protectors, but sweat a modem or router?  Not me.

erik_squires

Buffering and caching are two entirely different concepts.
Every device that takes in a data stream from the network or ISP will buffer that stream. This is done to create a stable, consistent flow of data irrespective of the incoming speed. If the data arrives too fast, buffering regulates it down to the rate the downstream processing requires; if it arrives too slowly, data accumulates in the buffer until those requirements are met. Buffer size is typically not that large and is dictated by the design and the needs of the downstream components/processors. The data is not held in the buffer for very long.

Caching is typically a much larger data store (it could be in memory or on SSD) that holds the result set. Depending on the design, most of the critical processing may actually happen from the cache, which could result in cleaner downstream processing because some of the impacting factors, like noise, have already been taken care of. Reading and processing data from the cache in this case is similar to reading a CD. You can perform the remaining processing and further purify the data when it is converted to the USB or SPDIF outbound signal that the DAC will understand.

If you pull the Ethernet cable out of a streamer that uses buffering only, you will probably get about 30 seconds to a minute of play.
With caching, depending on the defined size of the cache and what's been cached, you might be able to listen to an entire album or an entire playlist.
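A back-of-envelope calculation shows why the two behave so differently. The buffer and cache sizes below are assumptions for illustration, not any particular streamer's spec:

```python
# Back-of-envelope: playback time held by a small stream buffer versus
# a large cache, for CD-quality PCM. Sizes are illustrative assumptions,
# not any specific streamer's design.
cd_bps = 44_100 * 16 * 2            # CD audio: ~1.41 Mbit/s

buffer_bytes = 8 * 1024**2          # hypothetical 8 MB RAM buffer
cache_bytes = 4 * 1024**3           # hypothetical 4 GB cache (RAM or SSD)

def seconds_held(nbytes):
    """Seconds of CD-quality audio that fit in nbytes."""
    return nbytes * 8 / cd_bps

print(f"buffer: ~{seconds_held(buffer_bytes):.0f} s of audio")        # ~48 s
print(f"cache:  ~{seconds_held(cache_bytes) / 3600:.1f} h of audio")  # ~6.8 h
```

So an in-memory buffer of a few megabytes runs dry in well under a minute, while a multi-gigabyte cache can hold many albums' worth of audio.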

In my experience, the impact of Ethernet tweaks is less evident with streamers that use caching. For example, with an Aurender you can A/B two different Ethernet cables by caching the same song twice: version A with cable A and version B with your second cable. That would be impossible with buffering-only streamers. Data in the cache remains until it rolls off due to capacity limits or you clear the cache manually.

Comparing a ripped file to Qobuz or Tidal might also be a fool's errand. You can never know for certain that the two versions are the exact same master, so the difference in sound may be attributable not only to CD vs. streaming but also to the version of the streamed album you're comparing your CD or rip to. The most critical link in the chain is immediately before the DAC. If you're using a server, what happens on that server, as long as it's properly sized, is far less critical.

Quality of the streamer is extremely important. You can never make a mediocre streamer sound like a great one by adding gadgets, no matter how expensive they are. Just like you can never soup up a Civic to perform like a proper sports car, no matter what you change or add to it (not knocking on Civics…great cars).

Just adding my $0.05 here…

@audphile1

Let me help you.  Go to Wikipedia and look up "Transmission Control Protocol," then search that article for "flow control."  I'd link it, but A'gon's firewall is blocking URLs.

You are conflating "buffering" with "flow control."  There is almost no buffering at all in the network devices between, say, Netflix and your TV.  None.  What there is, and what you describe, is flow control, which limits the amount of data on the wire at any given time to prevent packets from being dropped.  This is negotiated entirely by the endpoints.  There are no mini-caches strategically placed around the Internet just in case your Internet provider is congested.
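A toy model of the windowing idea makes the distinction concrete. This is not real TCP or real sockets, just a sketch of how a receiver's advertised window paces the sender without any large intermediate buffers:

```python
# Toy sketch of TCP-style flow control (not real sockets): the receiver
# advertises a window, the sender never has more than that many
# unacknowledged segments in flight, and each ACK reopens the window.
recv_window = 4            # receiver will accept up to 4 unacked segments
in_flight = 0              # segments sent but not yet acknowledged
sent = acked = 0
total = 10                 # segments to deliver

while acked < total:
    # Sender: transmit only while the advertised window has room.
    while sent < total and in_flight < recv_window:
        sent += 1
        in_flight += 1
    # Receiver: consume one segment and ACK it, opening the window again.
    acked += 1
    in_flight -= 1

print(sent, acked)  # 10 10 -- delivery paced by the endpoints, no mid-network caches
```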

Buffering means using local memory to prevent interruptions during playback.  This is entirely an endpoint thing.

audphile1

Buffering and caching are two entirely different concepts.

They are different, but not in the way you explain.

@erik_squires is correct. Here's the link he's suggesting.

Who said anything about caches being strategically placed throughout the Internet? I was talking about a cache in the streamer.

From the same Wikipedia article….

Flow control: limits the rate a sender transfers data to guarantee reliable delivery. The receiver continually hints the sender on how much data can be received. When the receiving host’s buffer fills, the next acknowledgment suspends the transfer and allows the data in the buffer to be processed.

I didn’t word it exactly as the Wikipedia article did, but the concept I explained is exactly that. You are splitting hairs.