In defense of ABX testing


We audiophiles need to get ourselves out of the Stone Age, reject mythology, and say goodbye to superstition. Especially the reviewers, who do us a disservice by endlessly writing articles claiming that the latest tweak or gadget revolutionized the sound of their system. Likewise, any reviewer who claims that ABX testing is not applicable to high-end audio needs to find a new career path. As with anything, there is a right way and many wrong ways. Hail Science!

Here's an interesting thread on the hydrogenaudio website:

http://www.hydrogenaud.io/forums/index.php?showtopic=108062

This caught my eye in particular:

"The problem with sighted evaluations is very visible in consumer high end audio, where all sorts of very poorly trained listeners claim that they have heard differences that, in technical terms are impossibly small or non existent.

The corresponding problem is that blind tests deal with this problem of false positives very effectively, but can easily produce false negatives."
psag
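To make that false-positive/false-negative point concrete, here is a minimal sketch, not taken from the thread, of how an ABX run is commonly scored: count the correct identifications and ask how likely that score would be if the listener were only guessing. The function name abx_p_value and the trial counts are assumptions chosen for the example.

# Sketch: scoring an ABX run against the guessing hypothesis (p = 0.5 per trial).
# The one-sided p-value is the chance of doing at least this well by pure luck.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """P(X >= correct) for X ~ Binomial(trials, 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

if __name__ == "__main__":
    # 12 of 16 correct: p ~ 0.038, usually read as "a difference was heard".
    print(f"12/16 correct: p = {abx_p_value(12, 16):.3f}")
    # 9 of 16 correct: p ~ 0.40, consistent with guessing, i.e. a null result,
    # which is not the same thing as proof that no difference exists.
    print(f" 9/16 correct: p = {abx_p_value(9, 16):.3f}")

Controlling false positives is exactly what this arithmetic buys; the false-negative side depends on how many trials are run and how good the listener is, which is the point the quote above is making.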
Judging from the AES paper by Olive, the listening tests are, in fact, excessively complicated, a criticism he dismisses. Furthermore, the listening tests apparently involved only frequency response. What happened to other audiophile parameters such as musicality, transparency, soundstaging ability, dynamics, sweetness, warmth, microdynamics, pace, rhythm, and coherence, to name a few? One supposes testing for those parameters would make the tests way too complicated. Maybe Olive thinks those parameters are too subjective, who knows?
Scientifically valid is the key phrase. There is no agreement in the scientific community with respect to audio tests. In fact, the scientific community couldn't give a rat's behind about audio, audiophiles, or testing audiophile devices, any of that. Hel-looo! If someone says he represents the scientific community in any of this controlled blind testing business, or any type of testing for that matter, he's just pulling your leg.
"01-17-15: Geoffkait
Judging from the AES paper by Olive, the listening tests are, in fact, excessively complicated, a criticism he dismisses. Furthermore, the listening tests apparently involved only frequency response. What happened to other audiophile parameters such as musicality, transparency, soundstaging ability, dynamics, sweetness, warmth, microdynamics, pace, rhythm, and coherence, to name a few? One supposes testing for those parameters would make the tests way too complicated. Maybe Olive thinks those parameters are too subjective, who knows?"

You couldn't have said it any better. If you read through the Hydrogenaudio posts, those guys get mad because the reviewer listens to a component and puts what he hears into a review. What else would you have them do? I mean, the intended purpose of a piece of audio equipment is to use it to listen to music. The nerve!

Bob_reynolds,

You've stated in the past, in no uncertain terms, that you can look at the specs of a component and tell how it sounds without listening to it. Do you really expect anyone to believe that you can list all the qualities Geoffkait names in his post without listening to whatever the component is? It's hard enough to do that when you have the piece in your own listening room.
To properly assess something, you have to immerse yourself in it. Moods and attitudes change often, but we still remain who we are. We assess things quickly, and that ability is honed casually (a learning curve) until it becomes second nature to us. By the time we are adults, our senses are honed to the level necessary to keep us alive into old age.

Now along comes a hatred of things audio that uses the "scientific" method to deconstruct what we know to be true. The genesis of that hatred can be attributed to many things (envy, lacking the wherewithal to buy, the refusal to relate, my mother ran off with an audio salesman, etc.).

We are constantly debating the manifestation of ABXing and not examining the latency behind it. It's been debunked time and again, and yet it keeps rearing its head, turning these forums into another game of whack-a-mole as new angles are tried.

All the best,
Nonoise
Whether the system used for the test is operating perfectly, whether it is sufficiently revealing for the specific test, and whether the listeners have sufficiently good hearing and know what they are listening to or for: these are all unknowns. Going on the basic assumption that most audiophile systems are pretty standard sounding, i.e., generic sounding, it wouldn't surprise me one bit if controlled blind tests tended toward negative or inconclusive results. Which is actually pretty much what Olive's speaker evaluation showed.
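The false-negative worry behind that expectation can be made concrete with a quick power calculation: a listener who genuinely hears a small difference will often still fail to reach significance in a short run. The sketch below is an illustration only; the 70% hit rate and the trial counts are assumed values, not anything measured by Olive or anyone else.

# Sketch: how often a real but subtle audible difference survives an ABX run.
# Assumes a listener whose true per-trial hit rate is p_true (here 0.70).
from math import comb

def min_correct_for_significance(trials: int, alpha: float = 0.05) -> int:
    """Smallest score whose one-sided p-value against guessing is <= alpha."""
    for k in range(trials + 1):
        tail = sum(comb(trials, j) for j in range(k, trials + 1)) / 2 ** trials
        if tail <= alpha:
            return k
    return trials + 1

def power(trials: int, p_true: float, alpha: float = 0.05) -> float:
    """Probability the run reaches significance given the listener's true hit rate."""
    k_min = min_correct_for_significance(trials, alpha)
    return sum(comb(trials, k) * p_true**k * (1 - p_true)**(trials - k)
               for k in range(k_min, trials + 1))

if __name__ == "__main__":
    # Even at a 70% hit rate, a 16-trial run misses significance more often than not.
    for n in (10, 16, 40, 100):
        print(f"{n:3d} trials, true hit rate 0.70: power = {power(n, 0.70):.2f}")

With only a handful of trials, even a genuinely audible difference routinely produces the negative or inconclusive outcome described above, which is why trial count and listener training matter as much as the blinding itself.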