Pabelson, interesting challenge, but let’s look at what you’ve said in your various posts in this thread. I’ve pasted them without dates, but I’m sure that you know what you’ve said so far.
"What advances the field is producing your own evidence—evidence that meets the test of reliability and repeatability, something a sighted listening comparison can never do. That’s why objectivists are always asking, Where’s your evidence?"
"A good example of a mix of positive and negative tests is the ABX cable tests that Stereo Review did more than 20 years ago. Of the 6 comparisons they did, 5 had positive results; only 1 was negative."
"It's better to use one subject at a time, and to let the subject control the switching."
"Many objectivists used to be subjectivists till they started looking into things, and perhaps did some testing of their own."
You cite the ABX home page, a page that shows that differences can be heard. Yet I note that where differences were heard, they were between components that were quite different, usually meeting the standard you’ve indicated: that much better specs will sound better.
Once you decide something does sound different, is this what you buy? Is different better? You say:
"Find ANYBODY who can tell two amps apart 15 times out of 20 in a blind test (same-different, ABX, whatever), and I’ll agree that those two amps are sonically distinguishable."
Does that make you want to have this amp? Is that your standard?
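As an aside, the 15-of-20 criterion quoted above is at least not arbitrary. A quick sketch (Python here is my choice of illustration, not anything Pabelson specified) shows what that score means against pure guessing:

```python
from math import comb

# If a listener is purely guessing in a same/different or ABX test,
# each trial is a fair coin flip. What are the odds of scoring
# 15 or better out of 20 by luck alone?
n, k = 20, 15
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(round(p_value, 4))  # about 0.0207
```

So a guesser clears that bar only about 2% of the time, which is why 15/20 is roughly the conventional 5%-significance threshold for 20 trials. That says the amps are distinguishable; it says nothing about which one you would want to own, which is exactly the question above.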
One of the tests you cite was in 1998, with two systems that differed in more than price. Does that lend credence to the DBT argument? On the one hand you point to setups identical in all but one component, with one listener and repeated trials, but then you cite something quite different to impugn subjectivists – not that that’s all that hard to do. You also cite a number of instances in which a DBT has indicated a difference. Which is it? Has “proof” of hearing differences been established by DBT? From the material you have cited, it certainly appears so. By your own argument, if this has been done even once, the subjectivists have demonstrated their point. I don’t agree, and you don’t really appear to, either.
My points were two, and I do not feel your challenge has addressed them. One, most DBTs as done in audio have readily questionable methods – methods that invalidate any statistical testing – as well as sample sizes far too small for valid statistics. The tests you cite in which differences were found do look valid, though I haven’t taken the time to examine them more deeply. Two, and far more important to me: do the DBTs that have been done, or any that might be done, really address the stuff of subjective reviews? I just don’t see how this can be done, and I’m not going to try to accept your challenge of “If you know so much ...” Instead, if you know so much about science and psychoacoustics – and you do appear to have at least a passing knowledge – why would you issue such a meaningless, conversation-stopping challenge? Experiments with faulty designs are refused journal or other publication all the time by reviewers who do not have to answer such challenges. The flaws they point out are sufficient.
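On the sample-size point, the problem is statistical power, and it is easy to make concrete. The sketch below (my own illustration, with a 10-trial test and a 70%-accurate listener as assumed numbers, not figures from any published test) shows how often a small test misses a real, audible difference:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With only 10 trials, a score of 9 or better is needed to beat chance
# at the usual 5% level: P(X >= 9 | guessing) ~ 0.011, while
# P(X >= 8 | guessing) ~ 0.055 just misses the cutoff.
n = 10
# Suppose a listener genuinely hears the difference 70% of the time.
# How often does such a listener actually reach 9 out of 10?
power = binom_tail(n, 9, 0.7)
print(round(power, 3))  # about 0.149
```

That is roughly 15% power: a 10-trial test would fail to confirm this real, better-than-chance listener about 85% of the time. A null result from a test that small tells us very little either way, which is exactly why undersized DBTs invalidate the statistical conclusions drawn from them.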
Finally, I’ve been involved in this more than long enough to have heard many costly systems in homes and showrooms that either sounded awful to my ears or were unacceptable to me one way or another. The best I’ve heard have never been the most costly, but have consistently been in houses with carefully set up sound rooms built especially for that purpose from designs provided by psychoacoustic objectivists. This makes me suspect that what we have is far better than we know – a point inherent in many "objectivist" arguments. My own listening room does not come close to that standard (and the substantial majority of pictures I see of systems in rooms around the net also seem to fall well short). The DBT setups I have seen have never been in that type of room, either. What effect that would have on a methodologically sound DBT would be interesting. Wouldn’t it?
"What advances the field is producing your own evidence—evidence that meets the test of reliability and repeatability, something a sighted listening comparison can never do. That’s why objectivists are always asking, Where’s your evidence?"
"A good example of a mix of positive and negative tests is the ABX cable tests that Stereo Review did more than 20 years ago. Of the 6 comparisons they did, 5 had positive results; only 1 was negative."
"It's better to use one subject at a time, and to let the subject control the switching."
"Many objectivists used to be subjectivists till they started looking into things, and perhaps did some testing of their own."
You cite the ABX home page, a page that shows that differences can be heard. Yet I recognize that the differences when heard were between components that were quite different and usually meeting the standard you’ve indicated as much better specs will sound better.
Once you decide something does sound different, is this what you buy? Is different better? You say:
"Find ANYBODY who can tell two amps apart 15 times out of 20 in a blind test (same-different, ABX, whatever), and I’ll agree that those two amps are sonically distinguishable."
Does that make you want to have this amp? Is that your standard?
One of the tests you cite was in 1998 with two systems that were quite different in more than price. Does that lend credence to the DBT argument? On the one hand you point to all the same but one component with one listener with repeated tests but then cite something quite different to impugn subjectivists – not that it’s all that hard to do. You also cite a number of times that DBT has indicated that there is a difference. Which is it? Is there “proof” of hearing differences that has been established by DBT? It certainly appears that there is from the stuff you have cited. By your argument, if this has been done once, the subjectivists have demonstrated their point. I don’t agree, and you really don't appear to , either.
My points were two, and I do not feel that they have been addressed by your challenge. One, that most DBT tests as done in audio have readily questionable methods – methods that invalidate any statistical testing, as well as sample sizes that are way too small for valid statistics. Those tests you cite in which differences were found do look valid, but I haven’t taken the time to go into them more deeply. Two, and the far more important point to me, do the DBT tests done or any that might be done really address the stuff of subjective reviews? I just don’t see how this can be done, and I’m not going to try to accept your challenge , “If you know so much ...” Instead, if you know so much about science and psychoacoustics, and you do appear to have at least a passing knowledge to me, why would you issue such a meaningless, conversation stopper challenge? Experiments with faulty experimental design are refused for journal or other publication all the time by reviewers who do not have to respond to such challenges. The flaws they point out are sufficient.
Finally, I’ve been involved in this more than long enough to have heard many costly systems in homes and showrooms that either sounded awful to my ears or were unacceptable to me one way or another. The best I’ve heard have never been the most costly but have consistently been in houses with carefully set up sound rooms built especially for that purpose from designs provided by psychoacoustic objectivists. This makes me suspect that what we have is far better than we know, a point inherent in many "objectivist" arguments. My home does not even come close to that standard in my listening room (and a very substantial majority of pictures I see of various systems in rooms around the net also seem to fall pretty short). The DBT test setups I have seen have never been in that type of room, either. What effect this would have on a methodologically sound DBT would be interesting. Wouldn’t it?