Reviews with all double blind testing?


In the July 2005 issue of Stereophile, John Atkinson discusses his debate with Arnold Krueger, whom Atkinson suggests fundamentally wants only double blind testing of all products in the name of science. Atkinson goes on to discuss his early advocacy of such methodology and his realization that its conclusion, that all amps sound the same, proved incorrect in the long run. Atkinson’s double blind test involved listening to three amps, so it apparently was not the typical same/different comparison favored by those advocating blind testing.

I have been party to three blind tests and several “shootouts,” which were not blind tests and thus resulted in each component having advocates, as everyone knew which was playing. None of these ever resulted in a consensus. Two of the three db tests were same/different comparisons. Neither resulted in a conclusion that people could consistently hear a difference. The third was a comparison of about six preamps, and here there was a substantial consensus that the Bozak preamp surpassed more expensive preamps, with many designers of those preamps involved in the listening. In both cases there were individuals who were at odds with the overall conclusion, and in no case were those involved a random sample. In all cases there were no more than 25 people involved.

I have never heard of an instance where “same versus different” methodology concluded that there was a difference, but apparently comparisons of multiple amps, preamps, etc. can result in one being generally preferred. I suspect, however, that those advocating db mean only “same versus different” methodology. Do the advocates of db really expect that the outcome will always be that people can hear no difference? If so, is it that conclusion that underlies their advocacy rather than the supposedly scientific basis for db? Some advocates claim that, were there a db test finding people capable of hearing a difference, they would no longer be critical, but is this sincere?

Atkinson puts it in these terms: the double blind test advocates want to be right rather than happy, while their opponents would rather be happy than right.

Tests of statistical significance also get involved here: some people can hear a difference, but if they are insufficient in number to achieve statistical significance, proponents say we must accept the null hypothesis that there is no audible difference. This is invalid, as the samples are never random and seldom, if ever, of substantial size. Since such tests assume random samples, and statistical significance is greatly enhanced by large samples, nothing in the typical db test works to yield the result that people can hear a difference. This suggests that the conclusion, and not the methodology or a commitment to “science,” is the real purpose.
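As an illustration of the statistics under discussion (my own sketch, not anything from the thread): significance in a same/different test is typically judged with an exact binomial test against chance guessing. The numbers below are hypothetical.

```python
from math import comb

def binomial_p_value(correct: int, trials: int, p_null: float = 0.5) -> float:
    """One-sided exact binomial p-value: the probability of scoring
    `correct` or more out of `trials` by guessing alone."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(correct, trials + 1))

# A hypothetical listener scores 14 out of 20 in a same/different test.
p = binomial_p_value(14, 20)
print(f"p-value for 14/20: {p:.3f}")  # ~0.058 -- just misses the usual 0.05 cutoff
```

Note how a score well above chance (70% correct) still fails the conventional significance threshold at 20 trials, which is the small-sample problem described above.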

Without db testing, the advocates suggest, those who hear a difference are deluding themselves: the placebo effect. But were we to use a db design other than the same/different technique, and people consistently chose the same component, would we not conclude that they are not delusional? This would test another hypothesis: that some can hear better.

I am probably like most subjectivists, as I really do not care what the outcomes of db testing might be. I buy components that I can afford and that satisfy my ears as realistic. Certainly some products satisfy the ears of more people, and sometimes these are not the positively reviewed or heavily advertised products. Again it strikes me, at least, that this should not happen in the world that the objectivists see. They see the world as full of greedy charlatans who use advertising to sell expensive items which are no better than much cheaper ones.

Since my occupation is as a professor and scientist, some among the advocates of double blind might question my commitment to science. My experience with same/different double blind experiments suggests to me a flawed methodology. A double blind multiple component design, especially with a hypothesis that some people are better able to hear a difference, would be more pleasing to me, but even here, I do not think anyone would buy on the basis of such experiments.

To use Atkinson’s phrase, I am generally happy and don’t care if the objectivists think I am right. I suspect they have to have all of us say they are right before they can be happy. Well tough luck, guys. I cannot imagine anything more boring than consistent findings of no difference among wires and components, when I know that to be untrue. Oh, and I have ordered additional Intelligent Chips. My, I am a delusional fool!
tbg
With apologies to Shakespeare and all logicians:
"To DBT or not to DBT is or is not the question."

Hi Pabelson,

Your quote points us to the central point of this discussion:
"Yes, beauty can grow on you. But notice that it's not the lady who's changing. It's you. What does that tell us about long-term comparisons?"

It tells us what neuroscience has discovered. The brain is much more plastic than once believed. It is not static like electronic circuits. The brain circuitry and its chemistry change. New interneuronal connections are formed and concentrations of neurotransmitters and other brain chemicals change. So, what the brain could not distinguish one day, it may LEARN to distinguish in subsequent exposures to the experience. We have experienced this learning phenomenon as students, as professors, and as audiophiles. This is part of our growth and evolution. A double-blind test based on short-term listening sessions may not allow enough time for the brain circuitry and chemistry to reconfigure itself to discern the difference. Therefore, if a short-term double-blind test does not show a difference between two amps, it would not be correct to conclude that there was no difference between the amps, only that that particular test did not reveal a statistically significant difference. A double-blind test showing a positive difference may be useful for audiophiles, while the test showing no difference is an inconclusive statement about the amps.
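The point that a null result can be inconclusive has a standard statistical counterpart: low power. A rough sketch (my own, with hypothetical numbers) shows that even a listener who genuinely hears a difference 70% of the time will usually fail to reach significance in a 20-trial test.

```python
from math import comb

def p_at_least(correct: int, trials: int, p: float) -> float:
    """Probability of `correct` or more successes in `trials`
    independent trials, each succeeding with probability `p`."""
    return sum(comb(trials, k) * p**k * (1 - p)**(trials - k)
               for k in range(correct, trials + 1))

# Under chance guessing (p = 0.5), 15/20 is the smallest score whose
# probability falls below the usual 0.05 significance cutoff.
# Power: the chance a genuine 70%-accurate discriminator reaches 15/20.
power = p_at_least(15, 20, 0.7)
print(f"Chance this listener reaches significance: {power:.2f}")  # ~0.42
```

So under these assumed numbers the test misses a real discriminator more often than not, which is why a “no difference found” outcome cannot by itself establish that no difference exists.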

Incorrect interpretations can also be made for long-term double-blind tests. The history of science shows us that even the hard sciences like physics are not immune from making incorrect interpretations. A commitment to truth and critical thinking helps purify science to better the human condition. Otherwise, our implicit assumptions may yield tautological statements similar to the very first statement in this post. Although it is logically valid, it does not contain useful information for audiophiles.

Best Regards,
John
My new policy on Audiogon is to post my opinion and let it stand on its own merit. I no longer feel the need to respond to every competing opinion. I'll let the readers draw their own conclusions. I do, however, reserve the right to respond to criticism directed at me.

My approval of DBT is in no way an endorsement of the ABX test.

Just because you believe in DBT or ABX testing does not make you an objectivist. DBT proponents have yet to show me where they have used it to advance the state of the art. They are as biased as anyone else. In fact, the inventor of the ABX gave this as his reason for inventing the ABX box: he was upset that audio companies could be destroyed by audio reviewers who did not know what they were talking about. Thus DBT/ABX was invented to attack the integrity of audio reviewers, not as an objective scientific tool.

The initial tests were short-term, on inexperienced listeners. That is a fact. To further demonstrate their lack of objectivity: when the proponents of DBT/ABX were confronted with the fact that reviewers like Michael Fremer were in fact able to match A and B to X, they attacked the validity of their own test. In effect they concluded that, because they knew there was no difference between amps, he must have been using some trick formulated by his knowledge of the amplifiers under test.
"No one argues that amps and cables sound the same"? Nothing is further from the truth. That is exactly what they argue, calling it snake oil and hurling vile insults at those who design, sell, buy and review it.

Feel free to remain wedded to frequency response, distortion figures and output impedance if you like. You don't need a blind test for that, because it is so easily measured. You may cleanse your palate with the occasional blind test, but ultimately you are going to have to listen. That is what all the manufacturers of good equipment do.
Let's not be so selective about what neuroscience has discovered, John. It, along with psychoacoustics, has indeed discovered that it can take time to learn the sound of something, and the difference in sound between things. But they've also discovered that, once you've learned those differences, the best way to confirm that those differences are really there is through short-term listening tests that allow you to switch quickly between the two components. So why is it that a reviewer, who supposedly spends weeks "getting to know" a component, and who also owns a reference component he knows well, can't hear a difference between the two in such a test?

My point about how we change was aimed at the reviewer who reports differences between the component under review and something else he may have heard months before, but doesn't have now. He's claiming to do something that your neuroscientist/psychoacoustician has found to be impossible.
Perhaps because the neuroscientists/psychoacousticians don't intend their testing to deal with what most accurately replicates music, as the experimental context necessitates tight and brief controls.
Hi Pabelson,

"But they've also discovered that, once you've learned those differences, the best way to confirm that those differences are really there is through short-term listening tests that allow you to switch quickly between the two components."

Any neuroscientist who would claim he/she discovered "the best way" to confirm differences would not be very credible with me on at least two counts. First, it is the "best" amongst which collection of methods? Have ALL POSSIBLE methods been tested? Perhaps some heretofore untested method could be even better. So, the scientist overstated the result. Although such hyping occurs, it is hardly scientific. It would also lead me to question if the scientist's methods also lacked precision and other high scientific standards.

Second, to determine that this method is the "best", it must be different from the rest. But how can the neuroscientist determine this difference? By DBT, the "best" method that determines differences??? But then the neuroscientist will be using the very method he/she is attempting to validate. In other words, the neuroscientist would hang himself/herself in a logical loop of circular reasoning.

Your statement appears to be based, at least in part, on faith in neuroscience and psychoacoustics. These are important sciences but they are not hard sciences like physics and chemistry. Compared to physics, they are sciences in infancy. Their levels of rigor, accuracy, predictability, and reliability are not yet in the same league as those for physics and chemistry. So, my level of confidence in them is not as great as what yours appears to be in your posts. It comes down to complexity.

The complex substratum involved in auditory perception is not yet sufficiently understood to shed light on the finer aspects. A large number of neurons form millions of possible pathways that a particular "encoded song" can travel in our brains to yield the perception of its sound and our reaction to it. The same song or piece of music produced by the same audio system a few moments later may not travel the exact same pathways in our brain and hence may produce a different experience. This variability is compounded by the non-constant chemical environment that influences our experience. (For example, the amount of endorphins available at any one time.) Emotional changes, expectations, suggestions, levels of alertness, the fleeting nature of memory, etc. add to the variability. Also, the brain circuitry is not as rigidly set as it was once thought to be. It can change with experience and learning. At the current state of neuroscience, there is insufficient organization, understanding and integration of this variable milieu to shed light on the finer issues about DBT. That may be reason enough for some opponents of DBT to claim that "to DBT or not to DBT" is an irrelevant question. I, for one, am in favor of rigorous DBT and would find the positive results useful but the negative results inconclusive for reasons given in my previous post.

Best Regards,
John