Did Amir Change Your Mind About Anything?


It’s easy to make snide remarks like “yes, I do the opposite of what he says.” And in some respects I agree, but if you do that, this thread is just going to be taken down. So I’m asking a serious question: has ASR actually changed your opinion on anything? For me, I would say two things. I am a conservatory-trained musician and I do trust my ears, but ASR has reminded me to double-check my opinions on a piece of gear to make sure I’m not imagining improvements. Not to get into double-blind testing, but just to keep in mind that the brain can be fooled, and to make doubly sure that I’m hearing what I think I’m hearing. The second is power conditioning. I went from an expensive box back to my Wiremold and I really don’t think I can hear a difference. Now that I understand the engineering behind how an audio component uses AC, I am not convinced that power conditioning affects the component’s output. I think.
So please resist the urge to pile on. I think this could be a worthwhile discussion if that’s possible anymore. I hope it is. 

chayro

Some might argue that, if a specific listener claims to expect a difference between, say, a hi-res and lo-res signal, an ABX test with him is "testing the listener." But that’s mistaken. Such a test could only reveal whether that listener could distinguish a difference under the conditions of the test. Again, this is why multiple tests yield more useful information.

Meaning what exactly?  When someone here says DAC A sounds great and DAC B sounds like crap, how is that not a claim made under his test conditions?  Heck, you don't even know his test conditions.  At least with ABX tests, we have a protocol and a way of documenting the results, as I have been showing.

If you are saying someone can create a test where you can't tell the difference even if an audible difference exists, that is a truism.  This is why we have specifications such as ITU-R BS.1116 defining what a proper test is.

The issue is that audiophiles, as a group, are terrible at detecting small differences.  This is why @soundfield is so confident that anyone claiming to pass such tests, or even showing the results of passing them, must be lying or cheating.

As I have explained, we have a responsibility to create a proper test and give listeners every chance to pass it, not work hard to make sure they don't.  Before you say ABX tests make it hard: well, I am showing you that I can pass them.  So that is not a valid excuse if you are really hearing what you are claiming.

Really, audiophiles routinely claim that making a tweak to their system makes a night-and-day difference.  So much so that the wife in the kitchen hears it as well.  If so, it should be a walk in the park to pass the same comparison in an ABX test.  If you can't do that with identical stimulus, then you need to learn why your sighted test was faulty.  Don't go looking for problems in the blind test.

As an aside, conducting a proper audio double-blind test is tricky business. I’ve seen it done and it’s not as easy as it looks. When they’re well conducted, I’ve found that many differences become harder to distinguish than might be expected. When they are improperly conducted, such a test has no advantage over a sighted test and can yield misleading results.

These are nebulous claims. I don’t know what you have seen, what was hard about it, or how it generated worse results than a sighted test.

Such claims have been examined. For example, audiophiles claim they need long-term testing versus short-term. Clark led such a study for his local audiophile group by creating a black box that generated a set amount of distortion. Audiophiles took these boxes home but could not hear the distortion. Yet another group, with an ABX box and quick switching, not only detected that difference but even a lower one! See my digest of that paper here.

AES Paper Digest: Sensitivity and Reliability of ABX Blind Testing

The second of the tests consisted of ten battery powered black boxes, five of which had the distortion circuit and five of which did not. The sealed boxes appeared identical and were built to golden ear standards with gold connectors, silver solder and buss-bar bypass wiring. Precautions were taken to prevent accidental or casual identification of the distortion by using the on/off switch or by letting the battery run down. The boxes were handed out in a double-blind manner to at least 16 members of each group with instructions to patch them into the tape loop of their home preamplifier for as long as they needed to decide whether the box was neutral or not. This was an attempt to duplicate the long-term listening evaluation favored by golden ears.

This was the outcome:

The results were that the Long Island group [Audiophile/Take Home Group] was unable to identify the distortion in either of their tests. SMWTMS’s listeners also failed the "take home" test scoring 11 correct out of 18 which fails to be significant at the 5% confidence level. However, using the A/B/X test, the SMWTMS not only proved audibility of the distortion within 45 minutes, but they went on to correctly identify a lower amount. The A/B/X test was proven to be more sensitive than long-term listening for this task.
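The "11 correct out of 18 fails to be significant at the 5% confidence level" figure quoted above is just a one-tailed binomial test against chance guessing. A quick sketch (standard library only; the trial counts come from the quoted paper):

```python
from math import comb

def binomial_p_value(correct: int, trials: int, p: float = 0.5) -> float:
    """One-tailed p-value: probability of scoring `correct` or better
    purely by guessing, with per-trial success probability p."""
    return sum(
        comb(trials, k) * p**k * (1 - p) ** (trials - k)
        for k in range(correct, trials + 1)
    )

# 11 correct out of 18 in the take-home test:
print(f"p = {binomial_p_value(11, 18):.3f}")  # p = 0.240, well above 0.05
```

At 18 trials, a listener would need 13 or more correct answers before the result clears the 5% threshold, which is why 11/18 counts as a failure.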

See how I provide specifics to back what I say? Why do you think mere claims should be sufficient otherwise?

@soundfield 

Umm, over your right shoulder, in background

I see where you got confused.  Almost all of the ASR video content has the analyzer in the background.  None of these tests were run during that video.  Every test I have been showing predates my YouTube channel by 5 or more years (compare the dates on the ABX tests with the dates on the videos).  In the video, I am just showing the results, not running them then.  This should have been quite obvious.

As such, your claim that I had an analyzer running at the same time as the ABX testing is totally false.

Hi Chayro,

Sensory Evaluation classes in the Wine Industry teach us that the olfactory sense (smell) is interpreted; it is the only one of our senses that is not 'technically' hard-wired.

Some humans can be 'trained' to distinguish up to 1,000 different smells.

Each human's mouth, nose, etc. is different.  For example, when we would place an old 3-ring-binder reinforcement ring (the life-saver-shaped kind) on our tongues and put a small drop of blue dye in the middle hole, we could count the taste buds inside the ring.  Those who had lots of little taste buds were 'super tasters', those with medium amounts were 'tasters', and those with a few big blotchy ones were called 'non-tasters'.  Each result was totally valid for the person whose tongue we were looking at.

We tried different taste sensations like bitterness from caffeine, or sweetness from sugar.  Each taste was sensed in a different area of the mouth.

The lesson we learned was that we are all physiologically different.  What tastes good to you may not taste good to me, so make sure you put at least three different wines on the table to try to please everyone!

You can see where this is going: if you like a wine reviewer's taste, then you will like his wines.  No matter how he measures his taste in the wine, you both have a similar set of physiological taste buds and olfactory sensory apparatus.

So it's not too hard to understand that audio perception is also interpreted to some degree, based on lots of physical inputs and, most importantly, on life experiences.  We could never understand why the teachers promoted the old-school European wines over the fruit-forward California ones until we had enough tastes under our belts to gain a baseline of understanding from which our sensory evaluation could take place.

Thus, no matter how many pieces of audio equipment one may listen to or measure, if you don't have the same taste in sound as the reviewer, it matters not, because like it or not, sound is an interpreted experience.

Trust me, we put super-expensive, highly revered wines next to ones that were not, and it was always the same thing: 30% liked them, 30% did not, and 40% didn't care that much either way.

If you put 30 people in a sound-testing environment, good math and statistics will tell you the same spread will recur over and over; cost is irrelevant, and personal choice is all that matters.
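Whether a fixed spread really "recurs over and over" with only 30 listeners is itself a statistics question. A minimal sketch, using the purely hypothetical 30/30/40 split borrowed from the wine anecdote above, shows how much a panel that small varies by chance:

```python
import random
from collections import Counter

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical preference split from the wine analogy (not measured data):
LABELS = ["like", "dislike", "indifferent"]
WEIGHTS = [0.30, 0.30, 0.40]

def sample_panel(n: int = 30) -> Counter:
    """Simulate n listeners, each drawn independently from the assumed split."""
    return Counter(random.choices(LABELS, weights=WEIGHTS, k=n))

# Expected counts are 9 / 9 / 12, but individual panels of 30 wander
# noticeably around those numbers:
for _ in range(3):
    print(sample_panel())
```

With panels this small, the observed counts routinely deviate by several listeners from the expected 9/9/12, which is worth keeping in mind before reading much into any single listening panel.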

So, find a reviewer that has your taste in sound and follow them.

 

Cheers Mate

Why?  Set up the test.  Show the people here that they can't tell the difference between high res and CD as you like to claim.

Umm, where did I claim that? Plus, it's a fool's errand to seek negative proof, not mine. I'm far more interested in you demonstrating that you can, especially with someone else running the test. Sans any view of the signal analyzers, of course 😉.

I see where you got confused.  Almost all of the ASR video content has the analyzer in the background.  None of these tests were run during that video.  Every test I have been showing predates my YouTube channel by 5 or more years (compare the dates on the ABX tests with the dates on the videos).  In the video, I am just showing the results, not running them then.  This should have been quite obvious.

Ok, so you confirm those are indeed signal analyzers, oscilloscopes, etc. that could theoretically analyze and identify signals in real time, visibly. Cool.

As such, your claim that I had an analyzer running at the same time as the ABX testing is totally false.

Well, there is no way for us to know that definitively now, is there?

That's why you didn't grade your own math tests in school (and score 100% every time!). It's good to have independent oversight.