Tbg: I think you misunderstand my little analogy about medical trials. My point isn't that placebos can't have any effects, good or bad, that suggestion primes subjects to experience. My point is that only a real medicine can have unintended real effects, and that this can be observed to be almost universally the case when transposed to the audio realm. All audio devices or treatments having some plausible method of action are obviously intended and claimed to make the sound 'better' in some way(s), but various users will actually report effects other than those the manufacturer intends or claims. The trend we observe with the CLC so far doesn't seem to conform to this pervasively common and expected pattern, and we can reasonably surmise why.
I don't know why you bring up double-blind testing (DBT), since I never did, but just for the record: I agree that blind testing in audio can be of limited value, even misleading under certain circumstances, though I'm not against it so long as its limitations are acknowledged (which its most zealous proponents typically fail to do). In particular, I feel that the ABX methodology (whether single-blind or double-blind) has the strong potential to actually be obscuring, rather than enlightening as presumed, when used to establish reliable minimum thresholds for perceiving subtle sonic differences, often tending to underreport the existence or significance of such differences. Blind testing is, however, the only way to rule out the placebo effect, which very often biases sighted tests in the other direction, so it certainly has its place in the scientific sense, no matter how procedurally unpleasant or logistically troublesome it may be for the casual audiophile to actually carry out at home.
Personally though, my own opinion is that there's a slightly different way in which sighted testing tends to corrupt results in audiophile trials, at least as much as, or even more than, simply causing differences to be reported where perhaps there really aren't any. (And here I'm talking about products that could conceivably cause any differences without resorting to magic -- not the CLC.) This is when differences are honestly heard and can be repeatably identified to a good degree of certainty, but the characterization of whether those differences are on the whole good or bad, and how significant they add up to be, becomes unduly influenced by our preconceived notions about the products we are testing.
For instance, when a less-expensive (or less-'prestigious') component is tested against a more-expensive (or more-'prestigious') component, real enough differences may be heard -- independent of any placebo effect, since after all the two components really are different -- but our judgment of which component the difference favors, I believe, often gets undesirably affected by our quite understandable (though possibly unstated, or even subconscious) preconception that the more expensive/prestigious component 'must' be the more 'correct' or 'better' sounding of the two, when we may not possess as good an intrinsic idea of how 'correct' or 'better' actually sounds as we'd like to believe.
I suspect this phenomenon is common to the point of being the norm, and tough to avoid in evaluating gear no matter how honest we're trying to be with ourselves. Even vast experience could sometimes serve to reinforce the foible rather than counteract it. Fortunately though, since the audio game is largely about subjectively pleasing oneself -- and besides which we can never achieve anything close to total 'correctness' -- objectively 'scientific' accuracy in these assessments ain't necessarily the factor to value most highly, though I myself am in no way in favor of simply disregarding it, as best as it can be determined.