Clever Little Clock - high-end audio insanity?


Guys, seriously, can someone please explain to me how the Clever Little Clock (http://www.machinadynamica.com/machina41.htm) actually improves the sound inside the listening room?
audioari1
I have found that even loose testing conditions minimize my ability to hear differences that I previously thought obvious.

Recently, I did some blind testing with a neighbor who has a Krell integrated amp. Its sound drives me nuts, so we dropped in a Modwright 9.0 SE, using the Krell as a power amp only. We were both immediately floored. The sound, to me, was soooo much better. He started talking about buying one.

Then I left it with him for a couple of weeks. He did many A/B tests and determined the differences were extremely minor. He blind-tested me and I was fairly ineffective at picking which arrangement was in use. He decided the Modwright didn't improve his system and wasn't worth the money.

What does this indicate? Well, my visits to his sound room are again rife with dissatisfaction. The etchy glare is back and I don't really like going over there to listen. Yet, the tests failed to show differences that were obvious in a stress-free environment.

I think this is where testing falls down. How does one know when stress is influencing perception? Further, who wants to subject themselves to testing? It is diametrically opposed to what we normally use our systems for - relaxation and experience.

The idea of a large-sample test does sound promising, and a positive result would be hard to refute. But it would be nearly impossible to pull off, and I'd be suspicious of any negative finding.
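
Just to put rough numbers on what a "large sample" would have to show, here's a back-of-the-envelope sketch in Python (the 16-trial session is made up for illustration, not a proposal from anyone here):

from math import comb

def p_value(correct, trials):
    # Chance of scoring `correct` or better out of `trials` by pure
    # guessing (one-sided binomial tail, p = 0.5 per trial).
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

for correct in range(8, 17):
    print(f"{correct:2d}/16 correct: p = {p_value(correct, 16):.4f}")

# It takes 12/16 or better before guessing drops below the usual 5%
# threshold, so a handful of casual trials settles nothing either way.

A positive result at that level really would be hard to argue with; collecting enough clean trials to get there is another matter.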

Yeah yeah, making excuses when there isn't even a result yet...
Miklorsmith: I agree (and have detailed before) that there can be problems with formal testing methodologies as applied to subjective auditioning. I do think some kinds of testing can introduce a "confusion factor" that may actually serve to artificially raise the floor for perceivability of low-level differences. And I think it's to a large extent possible to ameliorate biasing effects due to external factors without resorting to blind tests, though it can take repetition over time and a certain self-questioning mindset (that I'm learning a lot of audiophiles seem to lack). As for how test conditions might significantly differ from normal use conditions, this can be good or bad -- I don't listen to music for enjoyment by performing rapid A/B comparisons, but doing them can really help nail down (or dismiss) some elusive observations concerning gear.

But, when faced with a product or claim that appears to carry all the hallmarks of snakeoil, and audiophiles buying into it using the most casual and fallible auditioning methods, I don't think it's inappropriate to call for some demonstrable degree of rigor to be brought to bear. I also think that experiences like the one you relate above are valuable for putting things in proper perspective every once in a while.
I think that the real telling statistic would be the actual percentage of purchasers who took advantage of the 30-day money-back guarantee...
Zaikesman, I think that Miklorsmith made a very valid point. In fact, I was part of an even more baffling experiment. This was a system with VTL monoblocks, a VTL Reference preamp, and Wilson speakers. Everyone was floored by how amazing the VTL preamp sounded, and I had to agree, it sounded pretty damn good. Then someone in the group commented that listeners would not be able to detect in a blind listening test whether the VTL was in or out of the system.

So we set up an experiment substituting a $200 NAD preamp for the VTL while people listened blindfolded. Not a SINGLE person in the group was able to consistently tell which was the VTL and which was the NAD. One is around $10,000 and the other is $200.
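
And the flip side: "consistently tell" depends entirely on how many trials each listener got. A rough sketch in Python (the 6-trial session and the 75% true-detection rate are assumptions for illustration, not measurements from our test):

from math import comb

def tail(k, n, p):
    # Chance of k or more successes in n trials at success rate p.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

trials = 6
# With only 6 trials you must go 6/6 to beat guessing at p < 0.05,
# since even 5/6 happens by chance about 11% of the time.
print("6/6 by pure guessing:", tail(6, trials, 0.5))             # ~0.016
print("6/6 for a genuine 75% detector:", tail(6, trials, 0.75))  # ~0.18

# A listener who really does hear the preamps apart three times out
# of four still "fails" a short session about four times in five.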

This leads to the inevitable conclusion that listening evaluation in these types of tests is probably extremely flawed, no matter how you slice or dice it.

I think the proper way to evaluate equipment is to put in a component, listen to it for several days, and then make the switch. For some reason, rapid A/B switching doesn't allow the brain to adjust quickly enough.

Otherwise, how would you explain these results?
Audioari1

This leads to the inevitable conclusion that listening evaluation in these types of tests is probably extremely flawed, no matter how you slice or dice it.

I think the proper way to evaluate equipment is to put in a component, listen to it for several days, and then make the switch. For some reason, rapid A/B switching doesn't allow the brain to adjust quickly enough.

Otherwise, how would you explain these results?

Boy do I ever agree with that!!

Long-term listening is the only way to achieve satisfaction with one's system. Quick switching is not how we enjoy music, and it's certainly not how we should decide on equipment either.

This does not imply that I either accept or reject the Clever Little Clock's claimed abilities. At this point I cannot tell who is in favor of it and who is just having fun in this thread.