Why are digital streaming equipment manufacturers refusing to answer me?


I have performed double-blind tests with the most highly regarded brands of streamers and some hi-fi switches. None have made any difference to my system on files saved locally. I have put the following question to the makers of such systems, and almost all have responded with marketing nonsense.
My system uses fiber optic cables that run all the way to the DAC (MSB), so no EMI or RFI arrives at the DAC. On top of this, the MSB lets me check whether I am receiving bit-perfect files or not. I am.
So I claim that if your DAC receives a bit-perfect signal and is connected via fiber optic, anything prior to the conversion to fiber optic (streamers, switches, their power supplies, cables, etc.) makes absolutely no difference. Your signal can't be improved by any of these expensive pieces of equipment.
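
To make that concrete: a bit-perfect check is, conceptually, just a comparison of the samples that left the file with the samples that arrived. Here is a minimal Python sketch of the idea (the filenames are hypothetical, and the MSB obviously does its check internally rather than with a script):

```python
import hashlib
import wave

def pcm_sha256(path: str) -> str:
    """Hash only the PCM frames of a WAV file, ignoring header/metadata."""
    with wave.open(path, "rb") as w:
        frames = w.readframes(w.getnframes())
    return hashlib.sha256(frames).hexdigest()

# Hypothetical filenames: the local source file and a capture of what
# actually arrived at the DAC input (loopback recording, test capture, etc.).
source = pcm_sha256("local_track.wav")
delivered = pcm_sha256("captured_at_dac_input.wav")

# Matching digests mean everything upstream passed the samples through unchanged;
# a mismatch means something altered the bits before conversion.
print("bit perfect" if source == delivered else "bits were altered upstream")
```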
If anyone can help explain why this is incorrect, I would greatly appreciate it. DAC makers mostly agree; makers of streamers have told me such scientific things as “our other customers can hear the difference” (after extensive double-blind testing resulted in no difference being perceived) and, my favorite, “bit perfect doesn’t exist; when you hear our equipment you forget about electronics and love the music”!
mihalis
@audio2design Below is a journal article you might find interesting. It explores a mechanism behind the empirical paradox that people can show a reliable preference between two stimuli but fail to discriminate between them on an ABX discrimination test (here referred to as triangle testing).

I will note that the reason they identify in this case is actually "the statistical properties of the decision rules followed in different tasks." I still suspect that raw preference judgments are more sensitive than discrimination judgments, but that was not the driving factor for the differences in this case.

https://link.springer.com/article/10.3758/BF03205304




Thank you cal3713, I do remember that paper from ages ago. I will point directly to the conclusion:

Our main conclusion, however, is that it is not necessary to invoke any advantage for hedonic judgments to explain our earlier results. These, and the new results here, are just particular instances of the advantage in statistical power that Ennis (1990) shows forced-choice methods to have over triangular tests. More consistent judgments are a consequence not of greater sensitivity to hedonic differences but to the statistical properties of the decision rules followed in different tasks.

I believe I noted above that ABX was statistically more robust.
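
As a side note, the arithmetic behind scoring any of these runs is just a binomial tail. A quick sketch using standard math (the 12-of-16 score is made up, not from the paper):

```python
from math import comb

def p_value(n_trials: int, n_correct: int, p_guess: float = 0.5) -> float:
    """Probability of getting at least n_correct right purely by guessing."""
    return sum(
        comb(n_trials, i) * p_guess**i * (1 - p_guess) ** (n_trials - i)
        for i in range(n_correct, n_trials + 1)
    )

# Example: 12 correct out of 16 ABX trials, where guessing gives 1/2 per trial.
# (For a triangle test the guessing baseline would be 1/3 instead.)
print(f"12/16 correct in ABX: p = {p_value(16, 12):.3f}")  # about 0.038
```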
There are a lot of great points made here on both sides of the argument, but I think, as with anything, many people are looking at things too microscopically and need to zoom out and take a macro view of what is actually happening with "streaming" on a network.

One thing no one seems to bring up in these back-and-forths is the essential architecture of a network, which consists of layers. Sometimes you might hear about these layers in jargon like "stack," "full stack," or other such lingo.

If one happens to be a competent "full stack" software engineer, then the challenge of figuring out whether or not the data is good is largely irrelevant.

What audiophiles fail to realize is that ample opportunities for mishandling/misinterpretation of the "1's and 0's" can, and often do, arise at the final few stages of the process (presentation, application) of translating the "digital" signal information into meaningful use by your device.

The fact is that not all "streamers," let alone music player software, are created equal, and there are many poor ways of going about it.

From a technical standpoint, the data received by a streamer from a network is the exact same data another streamer can receive on a network.

Where the conversation gets tricky is what you are using to render the data and how it handles the various software processes that decode the information (the 1's and 0's).

As an analogy, gamers spend varying amounts of dough on better graphics processors. No computer engineer would argue that the raw game data being received by two different GPUs is different; you can perform a hash/checksum to verify the data is all there.

Equally, no one will deny that two different GPUs can and will produce different results when the data is finally rendered to your monitor (not to mention that the monitor has its own internal processing to go through when receiving the data).
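
If it helps, here is a toy sketch of that distinction (nothing to do with a real GPU pipeline, just the principle): identical delivered bytes hash identically, while anything done to them after delivery produces a different output.

```python
import hashlib

# The same "raw data" delivered to two different machines/renderers.
payload = bytes(range(256)) * 1024

# Checksums confirm both ends received identical bits.
digest_a = hashlib.sha256(payload).hexdigest()
digest_b = hashlib.sha256(payload).hexdigest()
assert digest_a == digest_b  # delivery was identical

# Two pretend "render" settings applied after delivery: a pass-through
# and a crude brightness-style remapping of the same bytes.
render_accurate = payload
render_tweaked = bytes(min(255, b + (b >> 3)) for b in payload)

print(hashlib.sha256(render_accurate).hexdigest() == digest_a)  # True: output unchanged
print(hashlib.sha256(render_tweaked).hexdigest() == digest_a)   # False: same input, different output
```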

What's so funny to me about "science only" audiophiles is they don't tend to think about the actual science much.

It's audio, not rocket science. If you want bit-perfect you can have bit-perfect, and you can do it very inexpensively.
Sure you can. But how is the "bit-perfect" data being translated and rendered?

If I am using player software with digital EQ and a DAC or interface like the MSB still reports "bit-perfect" data, shouldn't that tell you something?
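
To be concrete about the EQ point: any digital EQ or volume stage changes the sample values, which is exactly the kind of thing a before/after comparison will catch. A tiny sketch with made-up 16-bit samples and a -1 dB gain:

```python
import hashlib
import struct

# A handful of made-up 16-bit PCM sample values.
samples = [1000, -2000, 12345, -32768, 32767]

def digest(values):
    """Hash the samples as packed little-endian 16-bit PCM."""
    return hashlib.sha256(struct.pack(f"<{len(values)}h", *values)).hexdigest()[:16]

# A -1 dB digital gain: a minimal stand-in for any EQ/volume/DSP stage.
gain = 10 ** (-1 / 20)
processed = [max(-32768, min(32767, round(s * gain))) for s in samples]

print(samples, digest(samples))
print(processed, digest(processed))  # values changed, so the digest changes too
```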

As a visual analogy: if I switch between color space outputs on my video streaming device and send a different color gamut to a capable display over HDMI, the signal to the display is still "bit perfect," yet it will be rendered in a different color space. And if I set the wrong color space for the source material, the image ends up altered.