Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who Atkinson suggests fundamentally wants only double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion it produced, that all amps sound the same, proved incorrect in the long run. Atkinson's double blind test involved listening to three amps, so it apparently was not the typical same/different comparison advocated by proponents of blind testing.

I have been party to three blind tests and several "shootouts," which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of these ever resulted in a consensus. Two of the three db tests were same/different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, where there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In each case there were individuals at odds with the overall conclusion, and in no case were those involved a random sample. In all cases there were no more than 25 people involved.

I have never heard of an instance where "same versus different" methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only "same versus different" methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is that expected conclusion, rather than the supposedly scientific basis of db, what underlies their advocacy? Some advocates claim that if a db test found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates would rather be right than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here: some people can hear a difference, but if they are insufficient in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is invalid, as the samples are never random and seldom, if ever, of substantial size. Since significance tests assume random samples, and statistical power grows with sample size, nothing in the typical db test works in favor of the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
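To make the sample-size point concrete, here is a minimal sketch (my own illustration, not part of the original post) of an exact one-sided binomial test, assuming each same/different trial is a coin flip under the null hypothesis of no audible difference. The same 70% hit rate that is nowhere near significant with 10 trials becomes decisive with 100:

```python
from math import comb

def p_value(correct: int, trials: int) -> float:
    """P(at least `correct` right out of `trials`) under pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# The same 70% hit rate at three different sample sizes:
for n in (10, 20, 100):
    k = round(0.7 * n)
    print(f"{k}/{n} correct: p = {p_value(k, n):.4g}")
# 7/10   -> p ~ 0.17   (not significant)
# 14/20  -> p ~ 0.058  (borderline)
# 70/100 -> p ~ 4e-05  (decisive)
```

So with the small panels typical of audio db tests, even a listener population that genuinely hears a difference 70% of the time will usually fail to reach significance.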

Without db testing, the advocates suggest, those who hear a difference are deluding themselves: the placebo effect. But if we used db testing with something other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some people can hear better.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind testing might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind multiple-component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
Is it just me, or is the word "synergy" the magic word that enables anyone to justify ANY component, even if a blind test yields results that suggest the piece of equipment wasn't worth the money?
I know that everything has to come together to get your toes tappin', but it seems the word synergy is often used as a safe word...and then all opinions and bets are off.
It reminds me of the not-so-distant past when pornography was being defined and the conclusion was "I can't define it, but I know it when I see it"...IMO that is way too vague. I guess it all boils down to each person and the sound they hear, to each his own, but as long as the word synergy is used, a lot of folks should refrain from putting down another's choices (tubes vs. solid state, digital vs. vinyl, and so on), because synergy wins every time.
What seems to be beyond audiophiles is that the only criterion of blind testing is that the participant has no information but the presented experience. Those who think blind testing is conceptually flawed have to answer a question: if what is desired is an unbiased review of sound quality, how does product information promote that?

Since "synergy" (I hate that word) is a factor in any stereo/component review, why bring it up as a factor for blind testing? The same situation exists with time. How is time a factor for bind testing but for not "sighted" testing?

I hate to ring this bell, but the drugs everyone takes...blind testing. Like a million psychology experiments...blind testing. Scientists made eliminating bias work for them - audiophiles haven't, but still some think they know better.
I am somewhat unhappy that I spoke of J.A. in my post, as he brings along a lot of baggage. Many of you who have posted above seem sincerely to believe that better-conceived db tests would yield recommendations of some components or cables. My reading of what I have seen posted is that many of those advocating db testing expect a conclusion that there are no differences, and thus that one should buy the cheapest. This seems to have been J.A.'s experience in the three-amp comparison, but in my limited experience such db comparisons do yield a recommendation, as in the Bozak instance.

Fundamentally, I have no confidence in same/different db comparisons with too small a sample and too much dependence on statistical significance tests. A conclusion that all amps are the same or that all cables are the same is just too at odds with my experience to be acceptable. Perhaps when you randomly assign some subjects to the drug and others to the placebo, double blind testing makes research-design sense. But I do not concede that db testing is the fundamental essence of the scientific method. Experimentally, a control-group design makes sense, but double blind testing is seldom necessary. Often it takes great originality to cope with subjects knowing they are being experimented on; the Hawthorne studies at Western Electric are the best example of this.

I also really wonder how A, B, and C comparisons of amps, etc. using double blind testing would be done and reported. How would the random sample be drawn, and where would the subjects assemble? And would we need to assess the relationship between more qualified listeners and others?

There are some reviewers whose opinions I am responsive to, as they have previously said things consistent with what I hear. With double blind testing there would be no reviewers, I presume.
I agree "synergy" is an overused word, but for Blind Testing, what would be your reference amp, preamp, source, speakers, wire, ect.? Would the reference be what the manufacturer prefers, you prefer or I prefer? In the world of science there are set standards, but what are the set standards in the Audio world? We can measure db, distortion, ect., but in the Audio world there is not a perfect standard for what sounds the best to you or I. A HONEST reviewer would be much appreciated in this dishonest world we live in.
Tbg: The main question of your post seems to be, Do objectivists like Arny Krueger extol blind tests only because they like the results? The short answer is no. Arny K. and his ilk did not invent blind tests as a weapon to use against the high-end industry. In fact, they did not invent blind tests at all. Blind listening tests were developed much earlier by perceptual psychologists, and they are the basis for a huge proportion of what we know about human hearing perception (what frequencies we can hear, how quiet a sound we can hear, how masking works to hide some sounds when we hear others, etc.). Blind tests aren’t the only source of our knowledge about those things, but they are an essential part of the research base in the field.

Folks in the audio field, like Arny, started using blind tests because of a paradox: Measurements suggested that many components should be sonically indistinguishable, and yet audio buffs claimed to be able to distinguish them. At the time, no one really knew what the results of those first blind tests would be. They might have confirmed the differences, which would have forced us to look more closely at what we were measuring, and to find some explanation for those confirmed differences. As it turned out, the blind tests confirmed what perceptual psychologists would have predicted: When two components measured differently enough, listeners could distinguish them in blind tests; when the measurements were more similar (typically, when neither measured above known thresholds of human perception), listeners could not distinguish them.

Do all blind tests result in a “no difference” conclusion? Of course not, and you’ve cited a couple of examples yourself. Your preamp test, for one. (Even hardcore objectivists agree that many preamps can sound different.) Arny’s PCABX amp tests, for another. (Note, however, that Arny typically gets these positive results by running the signal through an amp multiple times, in order to exaggerate the sonic signature of the amp; I don’t believe he gets positive results when he compares two decently made solid state amps directly, as most of us would do.)

Your comments on statistical significance and random samples miss an important point. If you want to know what an entire population can hear, then you must use a random sample of that population in your test. But that's not what we want to know here. What we want to know is, can anybody at all hear these differences? For that, all we need to do is find a single test subject who can hear a difference consistently (i.e., with statistical significance). Find ANYBODY who can tell two amps apart 15 times out of 20 in a blind test (same/different, ABX, whatever), and I'll agree that those two amps are sonically distinguishable. (The arithmetic behind that criterion is sketched below.)
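As an aside (my own illustration, not part of the original post), the 15-out-of-20 criterion does clear the conventional 5% significance bar under an exact binomial test, assuming each trial is an independent 50/50 guess for a listener who hears no difference:

```python
from math import comb

# Chance of guessing at least 15 of 20 trials correctly when each trial
# is a coin flip (the null hypothesis of "no audible difference").
trials, correct = 20, 15
p = sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials
print(f"P(>= {correct}/{trials} by chance) = {p:.4f}")  # ~0.0207 < 0.05
```

In other words, a single listener scoring 15/20 would do so by luck only about 2% of the time, which is why that threshold is a reasonable bar for "sonically distinguishable."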

Which leads to a final point. You say you are a scientist. In that case, you know that quibbling with other scientists' evidence does not advance the field one iota. What advances the field is producing your own evidence—evidence that meets the test of reliability and repeatability, something a sighted listening comparison can never do. That's why objectivists are always asking, Where's your evidence? It's not about who's right. It's about getting to a better understanding. If you have some real evidence, then you will add to our knowledge.