Amir and Blind Testing


Let me start by saying I like watching Amir from ASR, so please let’s not get harsh or the thread will be deleted. Many times, Amir has noted that when we insert a new component into our system, our brains go into (to paraphrase) “analytical mode” and we start hearing imaginary improvements. He has reiterated this many times, saying that when he switched to an expensive cable he heard improvements, but when he switched back to the cheap one, he also heard improvements, because the brain switches from “music enjoyment mode” to “analytical mode.” Following this logic, which I agree with, wouldn’t blind testing, or any A/B testing, be compromised because our brains are always in analytical mode and therefore feeding us inaccurate data? It seems to me you need to relax for a few hours at least and listen to a variety of music before your brain can accurately assess whether something is an actual improvement. Perhaps A/B testing is a straw man, because the human brain is not a spectrum analyzer. We are too affected by our biases to come up with any valid data. Maybe.
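For context on the statistics behind blind A/B or ABX testing: a short session only means something if the listener scores well above chance. Below is a minimal sketch in Python (my own illustration, not anything Amir or ASR publishes) of how the number of trials relates to the odds of "passing" by pure guessing; it is one reason a handful of quick swaps proves little either way.

```python
# Hypothetical illustration: probability of getting k-or-more correct answers
# out of n ABX trials by pure guessing (p = 0.5 per trial). Small sessions
# cannot reliably separate real hearing from luck.
from math import comb

def guess_probability(n_trials: int, n_correct: int, p: float = 0.5) -> float:
    """Binomial tail: P(X >= n_correct) when each trial is a coin flip."""
    return sum(comb(n_trials, k) * p**k * (1 - p)**(n_trials - k)
               for k in range(n_correct, n_trials + 1))

if __name__ == "__main__":
    for n, k in [(5, 5), (10, 8), (16, 12), (20, 15)]:
        print(f"{k}/{n} correct by chance alone: {guess_probability(n, k):.3f}")
```

Running it shows, for example, that 8 out of 10 correct still happens by guessing about 5% of the time, which is why formal listening tests use many trials rather than one or two swaps.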

chayro

@djones51 Mine flank the TV, as I don’t really have anywhere else to put the speakers.

I didn’t say, but same here: 2-channel stereo HT. I’m comforted to know that it has no measurable effect. (On a 6-foot-long table from a school chemistry classroom. It’s tall and the correct depth; I don’t like this kneeling/squatting business to push buttons, do cables, etc. Blah)

@chayro I guess I was talking about solid-state amps. I don’t know if tube amps ever pretended to have the minuscule distortion measurements that the Japanese gear you mention was achieving.

I wonder if those very-low-distortion amps would stand up to scrutiny on Amir’s bench, with all the extra parameters we now know are important to sound quality.

I think this is now the 3rd time I’ve posted a link to this topic over the past 2 years concerning ASR, so here it is again.

 

ASR is pretty used to empty responses like that one. It basically says "I don’t actually have any good, civil arguments or evidence in response to ASR’s reviews...but since I still don’t like their conclusions...here’s a disparaging meme so I can feel like I got one over on them."

 

Embarrassing enough once. But... 3 times?

 

 

milpai,

 

I think he is measuring the wrong thing. Ask your guru to show how he measures a person's emotions. Some folks prefer precision while some prefer musicality.

 

That's missing the point. In the case of, say, the Nordost USB cable or PS Audio P12 review, Amirm's measurements indicated no change to the signal that would be audible at all. Measuring someone's "emotional response" tells you nothing about what's actually happening in reality, in terms of the gear. I don't give a damn if you have an "emotional response" to a Nordost USB cable because that's you (and likely your imagination). I want to know if it ACTUALLY does something for the signal so I know what I'm spending my money on.

 

Some approach high-end audio like they do a religion. Not everyone wants to do that; many of us want actual knowledge of how the equipment works so we can make informed decisions with our money.

 

 

The inherent flaw in any sort of blindfolded A/B testing is the test subjects' unfamiliarity with the items being tested. For example, a blind taste test of Coke vs. Pepsi is adversely affected if the test takers aren't cola drinkers. Without a recognizable frame of reference, "best" or "better" is merely a guessing game.

So playing a piece of music the subjects don't know, on a system unlike their own, and asking them to compare that sample to a slightly changed subsequent sample is a waste of time, not a universal truth. Most of us have several pieces of music/performances/albums that we know intimately. If the benchmark is one of those, played on our own system (or an equivalent one), then comparative testing has validity, but only then.

Great post!

I concur...

But "Objective measuring tool fetichists" , not Amirm perhaps, but his less enlightened disciples, will claim that sound experience, contrary to any psycho-acoustic/ physical acoustic science fields experience, will come directly and is DECIDED by and from the measured gear specs , not Amirm who is intelligent enough to give only his personal measured numbers, and will SUGGEST that his measures numbers had this meaning or this other one ...But for his disciples this suggestion is a defintive dogma... No listening experiments can contradict it with any value of any kind...Only blind test will defeat SUBJECTIVE biases....And they need to defeat it... But we cannot optimally  tune a SMALL room for ourself  WITHOUT our learned subjective  biases  with  only objective physical acoustic principles ... 😁😊

People are gullible, be they techno-fad "alleged" scientists or that other type, the "fetishists who taste their brand-name gear" in itself and for itself, without any objective context to put it to the test... The most important context is the room acoustic, controlled or not...

Audio for me is an investigation, through listening experiments, of the acoustic/psycho-acoustic dimension... It is not about the specs or price of a McIntosh or Schiit or Mephisto amplifier... For sure, every piece of gear differs by design, but what can we do to bring it to an audiophile level of optimal working? That is the question...

There is a good amplifier at any price tag... new, old, or vintage anyway...

The real deception in audio, for me, is ignorance of the importance of acoustics... Bad designs will exist even after Amirm's bench tests with his measuring tools, and sometimes good designs will exist in spite of his criticism based on those tools... And anyway, what is good in one room may SOMETIMES be bad in another...

Reality doesn't emerge from a simplistic formula...

Truly bad designs do exist, but most of the time no measured specs are necessary to spot them; a little listening will do...

 
