Can the digital "signal" be over-laundered, unlike money?


Pretty much what is implied by the title. 

Credit to @sns who got me thinking about this. I've chosen a path of refrain. Others have chosen differently.

I'm curious about members' thoughts and experiences on this.

Though this comes from a 'clocking' thread, by no means am I restricting the topic to clocking alone.

Please consider my question from the perspective of all "cleaning" devices used in the digital chain, active and passive.

 

From member 'sns' in the Ethernet Clocking thread [for more context]:

 

"I recently experienced an issue of what I perceive as overclocking with addition of audiophile switch with OXCO clock.  Adding switch in front of server, NAS resulted in overly precise sound staging and images."

"My take is there can be an excessive amount of clocking within particular streaming setups.

...One can go [too] far, based on my experience."

 

Acknowledgement and Request:

- For the 'bits are bits' camp, the answer is obvious and a given, and I accept that.

- The OP is directed to those who have used devices in the signal path for "cleaning" purposes.

Note: I am using 'cleaning' as a broad catch-all term...it goes by many different names and approaches.

 

Thank You! - David.

david_ten

Two problems with that argument. Bear in mind I design these things, both in T&M and audio. The first problem is that not all DACs do a perfect re-clocking; many, so as not to over- or under-run their buffers, begin with the input timing and then reduce timing variations. The second is that, like many things in audio, listening tests tell me some issues remain even after competent isolation and re-clocking.
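To make that concrete, here is a toy numerical sketch (my own construction, not any shipping design) of that "track the input, then smooth it" approach: a first-order tracking loop that follows the incoming clock edges but low-pass filters them.

```python
# Toy sketch (my own construction, not any shipping design): a DAC
# input stage that starts from the incoming clock edges and low-pass
# filters them with a first-order tracking loop. "Reduce timing
# variations" is not "eliminate them": residual input jitter leaks
# through in proportion to the loop bandwidth.
import random
import statistics

FS = 48_000                  # nominal sample rate, Hz
T = 1.0 / FS                 # ideal sample period, s
JITTER_RMS = 2e-9            # 2 ns RMS jitter on incoming edges
ALPHA = 0.01                 # loop bandwidth: smaller = heavier filtering

random.seed(0)
ideal = [n * T for n in range(50_000)]
noisy = [t + random.gauss(0.0, JITTER_RMS) for t in ideal]

recovered = []
est = noisy[0] - T           # prime the loop on the first edge
for edge in noisy:
    predicted = est + T                           # where the edge should land
    est = predicted + ALPHA * (edge - predicted)  # nudge toward input timing
    recovered.append(est)

def jitter_ns(edges):
    # spread of the timing error vs. the ideal grid, in ns RMS
    return statistics.pstdev(e - i for e, i in zip(edges, ideal)) * 1e9

print(f"input jitter:     {jitter_ns(noisy):.2f} ns RMS")
print(f"recovered jitter: {jitter_ns(recovered):.2f} ns RMS")
# Heavier filtering (smaller ALPHA) leaks less jitter but tracks the
# source's frequency drift more slowly, which is exactly the
# over/under-run trade-off mentioned above.
```

The point is simply that "reduce" is not "eliminate": a narrower loop bandwidth leaks less input jitter but follows source drift more slowly, so the designer is always trading one against the other.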

 

Now, I have posted several times that when I have built my own USB interfaces (isolation, power, FIFO, etc.) and applied them to legacy DACs, the dependence on a good input signal is far less. But it's not zero. You can deduce what you wish - I don't have all the answers, but at least I listen, and then ask questions.

 

Do I think Ethernet switches make a difference? No, I don't. Ethernet is isolated anyway, and the data is queued anyhow. (I suppose queue handling might matter; I assume that is handled by the router, maybe not.) But clearly adding a bridge between server and endpoint helps, a great USB interface helps, and providing a good input signal helps.
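Here is why the queuing matters, as a companion toy model (again mine, purely illustrative): packets arrive from the network with bursty timing jitter, sit in a buffer, and are played out on the endpoint's own fixed clock.

```python
# Toy model (mine, purely illustrative): packets arrive from the
# network with bursty timing jitter, land in a queue, and are played
# out on the endpoint's own fixed clock. Arrival jitter never touches
# the output timing; it only moves the queue depth around.
import random
from collections import deque

random.seed(1)

FS = 48_000
PKT_SAMPLES = 48                     # one packet = 1 ms of audio
PKT_PERIOD = PKT_SAMPLES / FS        # nominal packet interval, s

# Packet arrival times: nominal schedule plus up to 3 ms of delay.
arrivals = [n * PKT_PERIOD + random.uniform(0.0, 3e-3)
            for n in range(2_100)]

queue = deque()
depths = []
play_t = 5e-3                        # prime the buffer for 5 ms first
i = 0
for _ in range(2_000):
    # Enqueue (in order) everything that has arrived by this tick.
    while i < len(arrivals) and arrivals[i] <= play_t:
        queue.append(i)
        i += 1
    if queue:
        queue.popleft()              # play one packet's worth of audio
    depths.append(len(queue))
    play_t += PKT_PERIOD             # local clock: perfectly regular

print(f"queue depth over 2 s: min={min(depths)}, max={max(depths)} packets")
# Playback ticks were exactly periodic by construction. The network
# jitter shows up only as queue-depth wobble, which is inaudible as
# long as the queue never empties (min stays above zero).
```

That is the sense in which switch timing shouldn't matter: as long as the queue never runs dry, the playout clock is the only clock that counts.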

 

I do agree that most of the benefit comes from competent design of the USB interface. I also know that most designs are "data sheet engineered" and not ideal.

 

But be careful with blanket statements. Not only can they mislead given real-world equipment, but they turn off people who have heard "it's perfect" too many times before, only to find out otherwise and then watch the industry fix the issues (you know, those silly guys at AD and Burr-Brown).

 

Thanks for clarifying. I assumed we were not 100% in agreement, though probably more yes than no.

 

G

 

@itsjustme, virtually all USB DACs made now are async, and they are effectively as queued as Ethernet. Bit errors on USB are close enough to zero to be zero.
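For anyone unfamiliar with what "async" buys you, here is a schematic toy of the idea (the shape of it, not the literal UAC2 protocol; the constants are mine): the DAC consumes samples on its own fixed clock and reports a feedback rate, and the host adjusts how much it sends so the DAC's buffer stays centered.

```python
# Schematic toy of asynchronous USB audio (the shape of the idea, not
# the literal UAC2 protocol; constants are mine): the DAC consumes
# samples on its own fixed clock and reports a feedback rate; the host
# adjusts how much it sends so the DAC's buffer stays centered.
NOMINAL = 48.0      # samples per 1 ms interval at a nominal 48 kHz
TARGET = 480.0      # desired DAC buffer occupancy, samples
GAIN = 0.01         # feedback loop gain (illustrative choice)

fill = TARGET       # samples currently buffered inside the DAC
feedback = NOMINAL  # device-requested samples per interval
owed = 0.0          # host-side fractional-sample accumulator

fills = []
for _ in range(5_000):
    owed += feedback
    send = int(owed)                 # the host can only send whole samples
    owed -= send
    fill += send
    fill -= NOMINAL * (1 + 100e-6)   # DAC clock runs ~100 ppm fast
    feedback = NOMINAL + GAIN * (TARGET - fill)
    fills.append(fill)

print(f"settled buffer fill: {min(fills[1000:]):.1f}"
      f"..{max(fills[1000:]):.1f} samples (target {TARGET:.0f})")
# The host rate-matches the DAC, not the other way around: the
# conversion clock never chases the source's timing, and the feedback
# loop keeps the buffer from over- or under-running.
```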

There is no "reclocking" in a USB DAC. There is only clocking. Not all are perfect, but if they were not close to perfect, then THD would escalate, and even cheap DACs have excellent THD, so that argument has little merit unless the clocking is done poorly, which seems to afflict boutique brands more than others (based on Stereophile tests).
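To put a number on "THD would escalate": the standard back-of-envelope result for the jitter-limited SNR of a full-scale sine at frequency f, with t_rms seconds of random sampling jitter, is SNR = -20 * log10(2 * pi * f * t_rms). A quick calculation (my numbers, just for scale):

```python
# Back-of-envelope check on the "THD would escalate" point, using the
# standard textbook formula for jitter-limited SNR of a full-scale
# sine: SNR = -20 * log10(2 * pi * f * t_rms).
import math

def jitter_snr_db(f_hz: float, t_rms_s: float) -> float:
    return -20.0 * math.log10(2.0 * math.pi * f_hz * t_rms_s)

for t_rms in (1e-9, 100e-12, 10e-12):      # 1 ns, 100 ps, 10 ps RMS
    print(f"{t_rms*1e12:5.0f} ps RMS jitter @ 10 kHz -> "
          f"{jitter_snr_db(10_000, t_rms):5.1f} dB SNR")
# ~1 ns of jitter already supports ~84 dB at 10 kHz; 100 ps gives
# ~104 dB. A DAC measuring, say, 110 dB THD+N is therefore telling you
# its conversion clock jitter is down in the tens of picoseconds;
# genuinely sloppy clocking would show up immediately on the bench.
```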

What denotes a "properly engineered USB IF"? The one issue that regularly comes up, and few deny, is system-level noise, mainly from the source. Easily solved: isolate the USB.

You ignored everything I said. I'm not being baited.

 

If baiting is asking you to justify your position and pointing out its inconsistencies and errors, then I guess I baited you. If I were baiting you, though, I would just say that I don't think you can justify many of your statements, given how these products actually work and your inability to define "properly engineered". Though, as has often been said and does not seem controversial, non-isolated USB can be a source of power/ground-loop noise.