Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, who, Atkinson suggests, fundamentally wants double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that the conclusion it produced, that all amps sound the same, proved incorrect in the long run. Atkinson’s double blind test involved listening to three amps, so it apparently was not the typical “same or different” comparison favored by proponents of blind testing.

I have been party to three blind tests and several “shootouts,” which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of these ever produced a consensus. Two of the three db tests were “same or different” comparisons; neither led to the conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, and there a substantial consensus emerged that the Bozak preamp surpassed more expensive units, with many designers of those preamps among the listeners. In every case there were individuals at odds with the overall conclusion, in no case were the participants a random sample, and no test involved more than 25 people.

I have never heard of an instance where “same versus different” methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only the “same versus different” methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion, rather than the supposedly scientific basis of db, that underlies their advocacy? Some advocates claim that if a db test ever found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it this way: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here. Some people can hear a difference, but if they are too few to achieve statistical significance, proponents say we must accept (more precisely, fail to reject) the null hypothesis that there is no audible difference. This is invalid on its face: the samples are never random, and they are seldom, if ever, of substantial size. Since significance tests assume random samples, and statistical power grows with sample size, nothing in the typical db test works in favor of the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to “science,” is the real purpose.
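To make the statistics concrete: a same/different (ABX-style) session is usually scored against a binomial null hypothesis of pure guessing. The sketch below uses illustrative numbers, not figures from any actual test, to show how the p-value for such a session is computed:

```python
from math import comb

def binomial_p_value(correct, trials, chance=0.5):
    """One-sided p-value: the probability of scoring at least
    `correct` out of `trials` by guessing alone."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# A hypothetical listener who gets 12 of 16 ABX trials right:
p = binomial_p_value(12, 16)
print(f"p = {p:.4f}")  # about 0.038, just under the usual 0.05 cutoff
```

Note how tight the margin is: 11 of 16 correct would fail the 0.05 threshold, so with so few trials a genuinely discriminating listener can easily land on the wrong side of “significance.”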

Without db testing, the advocates suggest, those who hear a difference are deluding themselves via the placebo effect. But if we used a db design other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some people can hear better than others.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind testing might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind multiple-component design, especially one testing the hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even then, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
I have a huge problem with the concept of DBT with regard to determining the differences, or lack thereof, among audio products. Maybe I'm just slow, but I often have to live with a piece of gear for a while before I can really tell what it can and cannot do.
DBT is great for something like a new medicine. However, it would be worthless if you gave the subjects one pill, one time; the studies take place over a period of time. And that is the problem with DBT in audio. You sit a group of people in front of the setup, they listen to a couple of songs, you switch a component, and then you play a couple more songs. That just doesn't work. The differences are often very subtle and can't be heard at first.
Which, of course, is the dilemma of making a new purchase. You have to base your decision on short listening periods.
The concept of a DBT for an audio component is great. But I have yet to see how a test would be set up that would be of any value. Looking at test results based on swapping components after short listening periods would never influence my buying decisions, no matter how large the audience was or how many times the test was repeated. Any more than I would trust a new drug whose trial consisted of a single one-pill dose.
Agaffer, I agree. I have participated in DBTs several times and have found hearing differences in such short sessions to be difficult, even though after long-term listening to several of the units, I clearly preferred one.

I think the real question is why short-term comparisons with others present yield "no difference" results while other circumstances yield "great difference" results. Advocates of DBT say, of course, that this reveals the placebo effect at work in the more open circumstances, where people know which unit is being played. I think there are other hypotheses, however. Double blind tests conducted over the long term, in private homes with no one else present, would exclude most of the alternative hypotheses.

The real issue, however, is whether any or many of us care what these results might be. If we like it, we buy it. If not, we don't. This is the bottom line. DBT assumes that we have to justify our purchases to others as in science; we do not have to do so.
DBT as done in audio has significant methodological issues that virtually invalidate any results obtained. With improper experimental design, any statistics generated are suspect. Compounding this is sample size, usually quite small, which means the statistical power of any test is low; a null result therefore says little, because such a test would likely miss a real difference anyway. Add the criticism that DBT, as done so far in audio, may be introducing its own artifacts that skew results, and we have quite a muddle.
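The power problem can be put in numbers. Under the purely hypothetical assumption that a listener really can pick the right answer 70% of the time, a 16-trial session requiring 12 correct for significance will detect that listener less than half the time:

```python
from math import comb

def binomial_power(trials, threshold, true_rate):
    """Probability that a listener with genuine hit rate `true_rate`
    scores at least `threshold` correct in `trials` trials."""
    return sum(comb(trials, k) * true_rate**k * (1 - true_rate)**(trials - k)
               for k in range(threshold, trials + 1))

# 16 trials, 12 correct needed for p < 0.05, genuine 70% hit rate:
power = binomial_power(16, 12, 0.7)
print(f"power = {power:.2f}")  # about 0.45: the test misses a real
                               # difference more often than it finds it
```

In other words, with these (assumed) numbers the deck is stacked toward "no difference" even when a difference exists, which is exactly the asymmetry complained about above.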

I'm not at all opposed to DBT, but if it is to be used, it should be with a tight and valid experimental design that allows statistics with some power to be generated. Until this happens, DBT in audio is only an epithet for the supposed rationalists to hurl at the supposed (and deluded) subjectivists. Advocates of DBT have a valid axe to grind, but I have yet to see them produce a scientifically valid design (and I am not claiming an encyclopedic knowledge of all DBT testing that has been done in audio).

More interestingly, though, what do the DBT advocates hope to show? More often than not, it seems to be that there is no way to differentiate component A (say, the $2.5K Shudda Wudda Mega monster power cord) from component B (a stock PC), or component group A (say, tube power amps) from component group B (transistor power amps). Now read a typical subjectivist review waxing rhapsodic on things like soundstage width and height, instrumental placement, micro- and macrodynamics, bass definition across the spectrum, midrange clarity, treble smoothness, "sounding real," etc. Can any DBT address these issues? How would it be done?

You might peruse my posts of 8/13/05 and 8/14/05 about a power cord DBT session, carried out by a group that I think was sincere but terribly flawed in its approach, to get an idea of how an often-cited DBT looks when we begin to examine critically what was done.

http://forum.audiogon.com/cgi-bin/fr.pl?fcabl&1107105984&openusid&zzRouvin&4&5#Rouvin
So, Rouvin, if you don't think all those DBTs with negative results are any good, why don't you do one "right"? Who knows, maybe you'd get a positive result, and prove all those objectivists wrong.

If the problem is with test implementation, then show us the way to do the tests right, and let's see if you get the results you hope for. I'm not holding my breath.