Clever Little Clock - high-end audio insanity?


Guys, seriously, can someone please explain to me how the Clever Little Clock (http://www.machinadynamica.com/machina41.htm) actually improves the sound inside the listening room?
audioari1
Tonnesen: The results of the fairly unscientific blind test we did with the NJ Audio Society are referenced earlier in one of my posts on this thread (you have to go to another of the many threads on this topic to find it). Take a look at the results and do with them what you'd like. Zaikes, not to take away from your post, but the one person in the test who correctly identified when the clock was in or out of the system felt the clock's effect was detrimental to the sound. Since we're having another meeting at my place on this Sunday, and I still haven't done anything with the clock, I might try it on them, unsuspecting, again, to see if anyone notices anything. Me, so far I haven't heard a difference, and I haven't experienced any time travel effects except that my hair is growing back.
Zaikesman, the Clever Little Clock has only been on the market for a few short months. The nocebo effect requires a subject that exhibits a negative view toward the test object.
Tbg: I think you misunderstand my little analogy about medical trials. My point isn't that placebos can't have any effects, good or bad, that suggestion primes subjects to experience. My point is that only a real medicine can have unintended real effects, and that in fact this can be observed to be almost universally the case when transposed to the audio realm. All audio devices or treatments having some plausible method of action are obviously intended and claimed to make the sound 'better' in some way(s), but will actually be reported by various users to show effects other than those which the manufacturer intended or claims. The trend we observe with the CLC so far doesn't seem to conform to this pervasively common and expected pattern, and we can reasonably surmise why.

I don't know why you bring up double-blind testing (DBT), since I never did, but just for the record: I agree that blind testing in audio can be of limited value, even misleading under certain circumstances, though I'm not against it so long as its limitations are acknowledged (which its most zealous proponents typically fail to do). In particular I feel that the ABX methodology (whether single-blind or double-blind) has the strong potential to actually be obscuring, rather than enlightening as presumed, in attempting to establish reliable minimum thresholds for perceiving subtle sonic differences, often tending to underreport the existence or significance of such differences. Blind testing is however the only way to rule out the placebo effect, which very often biases sighted tests in the other direction, so it certainly has its place in the scientific sense, no matter how unpleasant or logistically troublesome it may be for the casual audiophile to actually carry out at home.
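[As a statistical aside, not from the thread itself: one reason short ABX runs underreport subtle differences is that the chance-guessing baseline follows a binomial distribution, so a listener needs a surprisingly long streak before "better than guessing" can be claimed. A minimal sketch, with illustrative trial counts of my own choosing:]

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of scoring at least `correct` out of `trials`
    ABX presentations by pure chance (each trial a 50/50 guess)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# With 16 trials, a listener must get 12 or more right before chance
# becomes an unlikely explanation at the conventional 5% level --
# i.e. 11/16 (~69% correct) still doesn't "prove" an audible difference.
print(round(abx_p_value(12, 16), 3))  # ~0.038
print(round(abx_p_value(11, 16), 3))  # ~0.105
```

The point being that a real-but-subtle difference, heard only some of the time, can easily fail such a test without that failure meaning the difference doesn't exist.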

Personally though, my own opinion is that there's a slightly different way in which sighted testing tends to corrupt results in audiophile trials, at least as much or even more so than simply causing differences to be reported where perhaps there really aren't any. (And here I'm talking about products that could conceivably cause any differences without resort to invoking magic -- not the CLC.) This is when differences are reasonably and honestly heard and can be repeatably identified to a good degree of certainty, but the characterization of whether those differences are on the whole good or bad, and how significant they add up to be, can become unduly influenced by our preconceived notions about the products we are testing.

For instance, when a less-expensive (or less-'prestigious') component is tested against a more-expensive (or more-'prestigious') component, real enough differences may be heard -- independent of any placebo effect, since after all the two components really are different -- but which component we assume the difference is in favor of, I believe often gets undesirably affected by our quite understandable (though possibly unstated, or even subconscious) preconception that the more expensive/prestigious component 'must' be the more 'correct' or 'better' sounding of the two, where we may not possess as good an intrinsic idea of how 'correct' or 'better' might actually sound as we'd like to believe.

I suspect this phenomenon is common to the point of being the norm, and tough to avoid in evaluating gear no matter how honest we're trying to be with ourselves. Even vast experience could sometimes serve to reinforce the foible rather than counteract it. Fortunately though, since the audio game is largely about subjectively pleasing oneself -- and besides which we can never achieve anything close to total 'correctness' -- objectively 'scientific' accuracy in these assessments ain't necessarily the factor to value most highly, though I myself am in no way in favor of simply disregarding it, as best as it can be determined.
Russ: Thanks for reminding us of that fact; I must have forgotten it even though I read that thread several days ago. I think you are correct, however, in assuming that the anecdote doesn't really contradict my argument on this point, because the test subjects were informed what it was they were auditioning, and a combination of lucky guesses and a predisposition to 'dislike' the Clock for whatever reason could have produced the observed response from this individual -- which of course must be weighed against the much greater number of subjects who couldn't demonstrate they heard anything, and/or said they didn't. I think longer-term trials by people actually laying down their own money for the Clock would be more reliable at indicating whether unintended real effects were a strong possibility, same as with any other gear, and so far I've seen none reported by that group.
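[To put a rough number on the "lucky guesses" argument: the thread doesn't give the actual group size or protocol, but with hypothetical figures, the odds that at least one listener in a group appears to identify the device purely by chance grow quickly with group size. A sketch under those assumed numbers:]

```python
def p_at_least_one(n: int, p: float) -> float:
    """Chance that at least one of n independent listeners 'succeeds'
    when each has probability p of guessing correctly by luck alone."""
    return 1 - (1 - p) ** n

# Hypothetically: 10 listeners, each with a 1-in-8 chance of calling a
# short in/out sequence correctly by guessing. A single apparent "hit"
# in the group is then more likely than not.
print(round(p_at_least_one(10, 1 / 8), 2))  # ~0.74
```

So one subject getting it right, out of many who didn't, is about what guessing alone would predict.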

BTW, I agree very much (in principle -- in reality, I can't say that I really care! :-) with your idea of doing the next trial without announcing what's being tested, or maybe even that there's a test taking place. Another good idea from my perspective would be to conduct that exact same test as you did the first time, except under false pretenses with no Clock actually present at all. Then I think the best wrap-up would be to do a fully-sighted test with the Clock being in play.