Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, whom Atkinson suggests fundamentally wants only double-blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that its conclusion, that all amps sound the same, proved incorrect in the long run. Atkinson's double-blind test involved listening to three amps, so it apparently was not the typical same/different comparison favored by blind-testing advocates.

I have been party to three blind tests and several "shootouts," which were not blind and thus resulted in each component having advocates, since everyone knew which was playing. None of these ever resulted in a consensus. Two of the three db tests were same/different comparisons; neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps. Here there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both cases there were individuals who were at odds with the overall conclusion, in no case were those involved a random sample, and in no case were more than 25 people involved.

I have never heard of an instance where "same versus different" methodology concluded that there was a difference, yet apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only "same versus different" methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion that underlies their advocacy, rather than the supposedly scientific basis for db? Some advocates claim that if a db test ever found people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it in these terms: the double-blind-test advocates would rather be right than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here: some people can hear a difference, but if they are insufficient in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is all invalid, as the samples are never random samples and seldom, if ever, of substantial size. Since the tests properly apply only to random samples, and statistical significance is greatly enhanced by large samples, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to "science," is the real purpose.
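As a rough illustration of the sample-size point (the numbers below are invented, not drawn from any actual listening test), a short Python calculation of the one-sided binomial p-value shows how a listener who really does hear a difference can still fail the usual significance cutoff when the trial count is small:

    from math import comb

    def binomial_p_value(k_correct, n_trials, p_chance=0.5):
        """One-sided p-value: the probability of scoring k_correct or better
        in n_trials same/different trials by pure guessing."""
        return sum(comb(n_trials, k) * p_chance**k * (1 - p_chance)**(n_trials - k)
                   for k in range(k_correct, n_trials + 1))

    # A hypothetical listener who is genuinely right about 70% of the time:
    print(binomial_p_value(11, 16))    # 11/16 correct: p ~ 0.11, not "significant"
    print(binomial_p_value(70, 100))   # 70/100 correct: p ~ 0.00004, overwhelmingly so

Either way, the arithmetic says nothing about whether the listeners were a random sample, which is the other half of the objection above.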

Without db testing, the advocates suggest, those who hear a difference are deluding themselves: the placebo effect. But were we to use db testing with something other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would also test another hypothesis: that some can hear better than others.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double-blind testing might question my commitment to science. My experience with same/different double-blind experiments suggests to me a flawed methodology. A double-blind multiple-component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even here, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
One thing about being over 60 is that the style of thought in society has changed but yours has not. When I was a low-paid assistant professor and wanted ARC equipment for my audio system, I just had to tell myself that I could not afford it, not that it was just hype and fancy faceplates or bells and whistles and that everyone knows there is no difference among amps, preamps, etc. DBT plays a role here. Since it finds that people can hear no differences and carries the label of "science," it confirms the no-difference hopes of those unable to afford what they want. My generation's attitudes now result in criticizing other people's buying decisions as "delusional."

I certainly have bought expensive equipment whose sound I hated (Krell) and sold immediately, and other equipment (Cello) that I really liked. I have also bought inexpensive equipment that, despite the "good buy" conclusion in reviews, proved nothing special in my opinion (a Radio Shack personal CD player). There is a very low correlation between cost and performance, but there are few inexpensive components that stand out as good buys (47 Labs). This is not to deny that there are marginal returns for the money you spend, but the logic of strictly getting your money's worth really leads only to the cheapest electronics, probably from Radio Shack, since each additional dollar spent above that gives only limited improvement.

DBTesting, in my opinion, is not the meaning of science; it is a method that can be used in testing hypotheses. In drug testing, since the intervention entails giving a drug, the control group would notice that they are receiving no intervention and thus could not benefit. Thus we have the phony pill, the placebo. The science is the controlled, random-assignment, pretest/posttest control-group design and the hypothesis, based on earlier research and observation of the data, that the testing is designed to answer.

If we set aside the question of whether audio testing should be dealt with scientifically, probably most people would say that not knowing who made the equipment you hear would exclude your prior expectations about how a quality manufacturer's equipment might sound. Simple A/B comparisons of two or even three amps, with someone responsible for setting levels, are not DBT. Listening sessions need to be long enough, and with a broad enough range of music, to allow a well-based judgment. In my experience, this does remove the inevitable bias of those who own one of the pieces and want to confirm the wisdom of their purchase, but more importantly it does result in one amp being fairly broadly confirmed as "best sounding." I would value participation in such comparisons, but I don't know whether I would value reading about such comparisons.

I cannot imagine a money-making enterprise publishing such comparisons, or a broad readership for them. I also cannot imagine manufacturers willingly participating in these. The model here is basically that of Consumer Reports, but with a much heavier taste component. Consumer Reports continues to survive and I subscribe, but it hardly is the basis of many buying decisions.

My bottom line is that DBT is not the definition of science; same/different comparisons are not the definition of DBT; any methodology that overwhelmingly results in the "no difference" finding, despite most people hearing a difference between amps, is clearly a flawed methodology that is not going to convince people; and finally, people do weigh information from tests and reviews in their buying decisions, but they also have their personal biases. No mumbo-jumbo about DBTesting is ever going to remove this bias.
To the doubters of DBT:

Women are fairly recent additions to professional orchestras. For years and years, professional musicians insisted they could hear the difference between male and female performers, and that males sounded better. Women were banished to the audience. The practice ended only after blind listening tests showed that no one could discern the sex of a performer.

Surely, these studies had as many flaws as blind cable comparisons. Probably more, since they involved live performances by individual people, which are inevitably idiosyncratic.

Would the DBT doubters here have been lobbying to keep women out of orchestras even after the tests? Or would they, unlike the professional musicians of the day, never have heard the difference in the first place?
Mankind, believing the Bible, ignored the massive bones that kept being discovered. Jefferson charged Lewis and Clark to find out whether such large creatures lived on the Missouri River. Yes, we are all victims of our underlying theories. Darwin explained evolution, and we retheorized where such bones might have come from.

What does this have to do with DBTesting? Nothing.
Study proposal:

I don't know if any studies of the following kind have been done. But if not, then one should be done.

Materials: two sets of cheap cables, cosmetically different from each other, and a set of expensive cables that looks just like one of the cheap ones.

First experiment(s): subjects are introduced to the two sets of cheap cables and told that one is a very expensive $15K cable and the other a $15 cable. Descriptions of each cable, in lavish audiophile prose, are printed on a glossy tri-fold with nice pictures and given to the subjects. The "expensive" cable is praised to the heavens and the "cheap" cable is described modestly.

Then the cables are used (not blind), alternately, to play back a variety of music. Subjects are then asked to rate their listening experiences, both quantitatively and qualitatively.

To eliminate the worry that cosmetic differences between the cheap cables matter, you could do the test twice, once with cable A as the "cheap" one, and once with cable B as the "cheap" one.

Second experiment(s): do the first experiment but with one expensive cable and one cheap cable that look the same. Do it first by telling the truth about the cables, but then, in the second case, by telling the subjects that the expensive cable is cheap and the cheap cable is expensive.

Here, nothing is blind. Subjects are all looking at the equipment, and can even observe, from a little distance, the cables being hooked up. But if the DBT guys are right, and it's all hype, we should expect in the first experiment that the introductions to the cables will lead subjects to favor whichever cable happens to be described as the more expensive one, both quantitatively and in their qualitative descriptions, even though the cables are basically identical cheap cables. In the second experiment, we should expect that when subjects are told the true values of the cables, their judgments favor the more expensive one, but also that when lied to, they prefer the cheaper cable *just as much* as they preferred the expensive one.

If DBT proponents are wrong, you should expect that subjects will rate the cheap (identical) cables about the same, and that in the second experiment, they will vastly prefer the expensive cable when truthfully described, and when lied to, either still prefer the expensive cable (contrary to what they're being told) or prefer the cheap one, but only by a little.

The point is, we don't need to have people "blind" to do the tests.

And if the cables were manufactured especially for this purpose, you could do the testing through the mail, with in-home trials over a long period of time. Wonder what the results would be?
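Purely as a sketch of how the rival predictions might be checked against the ratings (everything below is hypothetical: the rating scale, the subject counts, and the "hype boost" model are invented for illustration), one could simulate what the group means would look like under a "hype only" account versus a "real audible difference" account:

    import random
    import statistics

    def simulate_ratings(n_subjects, true_quality, hype_boost, noise_sd=1.0):
        """Hypothetical 1-10 listening ratings: underlying sound quality,
        plus any bias induced by the printed description, plus listener noise."""
        return [min(10, max(1, round(true_quality + hype_boost + random.gauss(0, noise_sd))))
                for _ in range(n_subjects)]

    random.seed(0)

    # Experiment 1: two identical cheap cables, one described as $15K, one as $15.
    # "Hype only" account: the description alone shifts the ratings.
    hyped_cheap  = simulate_ratings(30, true_quality=5, hype_boost=2)
    modest_cheap = simulate_ratings(30, true_quality=5, hype_boost=0)
    print("Exp 1 means:", statistics.mean(hyped_cheap), statistics.mean(modest_cheap))

    # Experiment 2, lied-to condition: the expensive cable is described as cheap.
    # "Real audible difference" account: quality shows through despite the label.
    expensive_called_cheap = simulate_ratings(30, true_quality=7, hype_boost=-1)
    cheap_called_expensive = simulate_ratings(30, true_quality=5, hype_boost=1)
    print("Exp 2 means:", statistics.mean(expensive_called_cheap),
          statistics.mean(cheap_called_expensive))

In the actual study the simulated ratings would simply be replaced by the subjects' real ones; the sketch only makes explicit which pattern of means each hypothesis predicts.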
Pabelson, I completely agree with you with respect to DBT, but then I completely disagree with you that all CD players and amps that are competently designed sound alike.

This is simply not what I hear, and there are good reasons that amps and CD players sound different. Power supplies, for one: good power supplies cost money. Potentiometers in amps: good ones cost money.

Inexpensive CD players do sound remarkably good these days, and the turntable-era "source first" rule is not quite so applicable, but to state that amplifiers are all alike makes me wonder which ones you have had the opportunity to listen to.

No, I have not performed DBT on amplifiers, but on several occasions an amplifier that I would have expected to sound excellent (usually on the basis of reviews) has sounded markedly inferior to another amplifier that received much worse reviews, and has done so on a range of speakers.