Why Do So Many Audiophiles Reject Blind Testing Of Audio Components?


Because it was scientifically proven to be useless more than 60 years ago.

A speech scientist named Irwin Pollack conducted an experiment in the early 1950s. In a blind ABX listening test, he asked people to distinguish minimal pairs of consonants (like “r” and “l”, or “t” and “p”).

He found that listeners had no problem telling these consonants apart when they were played back immediately one after the other. But as he increased the pause between playbacks, the listeners’ ability to distinguish between them diminished. Once the silence separating the sounds exceeded 10-15 milliseconds (approximately 1/100th of a second), people had a really hard time telling obviously different sounds apart. Their answers became statistically no better than a random guess.

If you are interested in the science of these things, here’s a nice summary:

Categorical and noncategorical modes of speech perception along the voicing continuum

Since then, the experiment has been repeated many times (the last major update came in 2000: “Reliability of a dichotic consonant-vowel pairs task using an ABX procedure”).

So reliably recognizing the difference between similar sounds in an ABX environment is impossible. With a playback gap of just 15 ms, the listener’s guess becomes no better than random. This happens because humans don't have any meaningful waveform memory. We cannot exactly recall the sound itself, and instead rely on various mental models for comparison. It takes time and effort to develop these models, which makes us really bad at playing the “spot the sonic difference right here and now” game.
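For the statistically minded, here is a minimal sketch (Python, purely illustrative, not from the original post) of what “no better than a random guess” means in an ABX run: a listener who can’t hear the difference is effectively flipping a coin, so their score clusters around half the trials.

```python
import random

def simulate_abx_guessing(n_trials: int = 20, seed: int = 0) -> int:
    """One ABX run where the listener hears no difference and
    guesses at random; returns the number of correct answers."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        x_is_a = rng.random() < 0.5   # X is secretly either A or B
        guess_a = rng.random() < 0.5  # the listener flips a coin
        correct += guess_a == x_is_a
    return correct

# Averaged over many runs, a guesser lands near n_trials / 2.
scores = [simulate_abx_guessing(20, seed=s) for s in range(1000)]
print(sum(scores) / len(scores))  # ~10 correct out of 20
```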

Also, please note that the experimenters were using the sounds of speech. Human ears have significantly better resolution and discrimination in the speech spectrum. If a comparison method does not work well with speech, it will not work at all with music.

So the “double blind testing” crowd is worshiping an ABX protocol that was scientifically proven more than 60 years ago to be completely unsuitable for telling similar sounds apart. And they insist all the other methods are “unscientific.”

The irony seems to be lost on them.

Why do so many audiophiles reject blind testing of audio components? - Quora
artemus_5

mikelavigne
1,658 posts
04-29-2021 3:19pm
i've challenged blind testing advocates to show me a system that equals or exceeds the performance of my system using only blind testing as a system building method.



That does not even make sense.
The notion that blind testing for audio is an absolute test is absurd, and on so many levels. There is abundant literature (although not enough) on the frailty and limitations of blind testing in all matters of research. (That doesn’t mean that blind testing doesn’t have its place in audio, but it’s useless for most audiophiles.)


No, there is not abundant literature that says blind testing is bad. You will have a hard time finding any. There is literature that deals with bad testing that happens to be blind, but not with the basic concept of blind testing. Every example given in this thread claims to show blind testing is bad, but not one of them actually does.

djones51
3,869 posts
04-29-2021 3:12pm
It's really a depressing question. Why do so many people reject/fear science?

To quote Disney, "because when everyone is super, no one is super". Bonus points if you can identify the reference without Google.


I volunteered for an ABX speaker-wire test at Klipsch HQ back in '06. For the first five rounds I was perfect: 5 for 5 identifying the more expensive wire versus the lamp cord.

As the test continued, my accuracy began to deteriorate: my ears desensitized to the source material, and hearing the same small segment of the same musical passage over and over again made it all blur together. I finished the test at 13/20, so I barely did better than a coin flip on the last 15.
 

13/20 across a range of test subjects would be statistically significant, but this points to bad test design, not to any error in blind testing. The result actually had nothing to do with the blinding at all; it was an ABX test in which listener fatigue set in. Any good analysis of the results would also look at grouping to determine whether there was a listener-fatigue element. This goes back to the opacity of testing: all results and methods should be published.
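For anyone who wants to check the arithmetic, here is a quick sketch (Python, standard library only; the ten-listener pooling is a hypothetical illustration, not data from this thread). It uses the exact binomial tail probability, i.e. the chance of scoring that well or better by pure guessing:

```python
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the one-sided
    p-value for scoring k or better by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

print(binom_tail(20, 13))    # one listener, 13/20: ~0.13, not significant
print(binom_tail(200, 130))  # ten listeners pooled at 130/200 (hypothetical): ~1e-5
print(binom_tail(5, 5))      # first 5 trials, 5/5: ~0.03
print(binom_tail(15, 8))     # last 15 trials, 8/15: 0.50, indistinguishable from chance
```

Splitting the run that way (early trials vs. late trials) is exactly the kind of grouping analysis that exposes listener fatigue.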
@cleeds 

I’ve had similar experiences as an ABX subject. I still think blind testing has value, even though it’s not likely to be of much use to audiophiles.

Disagree. The more discerning ears of the audiophile are far more useful in ABX tests. I pointed out the criticisms I have of the way audio ABX tests are conducted. That doesn't defeat the utility of audio ABX tests; it just points toward some changes in approach that would increase their value.

I'll grant you, there are some audiophiles out there who will never be convinced by even the most perfectly conducted ABX testing. And most aspects of an audiophile's system cannot be easily ABX'ed, at least not at home. One can ABX a source, such as a CD player, fairly easily, since most preamps have multiple inputs and can easily be switched between them. Interconnects are a bit more challenging, and the nearly impossible test is ABX-ing a power cable, because now you need multiple amplifiers and some sort of switching device between them in order to verify a difference in sound between two power cables.

Which is, again, why I do my best not to piss on people who choose to spend their money on these sorts of upgrades.  Without a serious A/B, never mind A/B/X test, there's no way to prove them right or wrong.  I prefer to spend my money on things that will demonstrably improve my system.  Maybe once my room is as close to perfect as possible, I've swapped out the crossovers and the tweeters on my speakers, I've found the right cost/benefit balance on my speaker wire and interconnects, and am satisfied with the signal chain of DAC/preamp/amp I've installed, I'll consider playing around with last-mile stuff like that.  But probably not.