Did Amir Change Your Mind About Anything?


It's easy to make snide remarks like "yes, I do the opposite of what he says." And in some respects I agree, but if you do that, this is just going to be taken down. So I'm asking a serious question: has ASR actually changed your opinion on anything? For me, I would say two things. I am a conservatory-trained musician and I do trust my ears. But ASR has reminded me to double-check my opinions on a piece of gear to make sure I'm not imagining improvements. Not to get into double-blind testing, but just to keep in mind that the brain can be fooled, and to make doubly sure that I'm hearing what I think I'm hearing. The second is power conditioning. I went from an expensive box back to my Wiremold and I really don't think I can hear a difference. Now that I understand the engineering behind AC use in an audio component, I am not convinced that power conditioning affects the component output. I think.
So please resist the urge to pile on. I think this could be a worthwhile discussion if that’s possible anymore. I hope it is. 

chayro

Only if you promise to be one of the everyone.

Why?  Set up the test.  Show the people here that they can't tell the difference between high res and CD as you like to claim.

This shows a complete misunderstanding as to the nature of double-blind testing in audio, such as ABX testing. Such tests are not designed to test the listener - that’s the role of an audiologist. The listener isn’t under test at all. What’s being tested is whether two signals can be distinguished under the conditions of the test. That’s why the best blind test programs include multiple listeners and multiple trials.

What the audiologist does is exactly that: determine whether a signal can be detected under the conditions of the test. They even play noise and then a tone to see if you can hear one over the other. Seems like you have taken neither an audiologist's test nor an ABX test.

As to multiple trials, that is exactly what I showed.  Each row represents a randomization of the samples and you are asked the question again:

Difference between 24/96 kHz and 16/44.1 with file provided by the late ArnyK:
foo_abx 1.3.4 report
foobar2000 v1.3.2
2014/07/24 20:27:41

File A: C:\Users\Amir\Music\Arnys Filter Test\keys jangling amir-converted 4416 2496.wav
File B: C:\Users\Amir\Music\Arnys Filter Test\keys jangling full band 2496.wav

20:27:41 : Test started.
20:28:07 : 00/01 100.0%
20:28:25 : 00/02 100.0%
20:28:55 : 01/03 87.5%
20:29:02 : 02/04 68.8%
20:29:12 : 03/05 50.0%
20:29:20 : 04/06 34.4%
20:29:27 : 05/07 22.7%
20:29:36 : 06/08 14.5%
20:29:44 : 07/09 9.0%
20:29:55 : 08/10 5.5%
20:30:00 : 09/11 3.3%
20:30:07 : 10/12 1.9%
20:30:16 : 11/13 1.1%
20:30:22 : 12/14 0.6%
20:30:29 : 13/15 0.4%
20:30:36 : 14/16 0.2%
20:30:41 : 15/17 0.1%
20:30:53 : 16/18 0.1%
20:31:03 : 17/19 0.0%
20:31:07 : Test finished.

----------
Total: 17/19 (0.0%)

0.0% probability of chance.

Above, the test was repeated 19 times and I got 17 right, making the probability that I was guessing well under 0.1% (the report rounds it to 0.0%).
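For reference, the percentage on each line of the report is the chance of doing at least that well by pure guessing. Below is a minimal Python sketch of that calculation, assuming each trial is an independent 50/50 guess under the null hypothesis (which appears to be how the report arrives at its figure); the function name is mine, for illustration only.

# One-sided binomial "probability of guessing" for an ABX run,
# assuming each trial is an independent 50/50 guess under the null.
from math import comb

def abx_guess_probability(correct: int, trials: int) -> float:
    """P(at least `correct` right out of `trials` by pure guessing)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"{abx_guess_probability(17, 19):.4%}")  # ~0.0364%, which the report rounds to 0.0%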

As to multiple listeners, that is needed if we want to establish detection thresholds for a population. In the case of a personal challenge, if you pass a test like the above, it is a significant result that calls for standing up and paying attention. That question is orthogonal to what an ABX test is.

So no, there is no confusion here. @kevn said he passed the test of high-res vs CD but provided no evidence whatsoever. And the test that he said he did run is not about high-res vs CD. For my part, I took whatever challenges were common at the time and ran them in a proper program to see if I could tell the difference.

Have you taken an ABX test and if so, can you post the outcome of any?

Some might argue that, if a specific listener claims to expect a difference between, say, a hi-res and lo-res signal, an ABX test with him is "testing the listener." But that's mistaken. Such a test could only reveal whether that listener could distinguish a difference under the conditions of the test. Again, this is why multiple tests yield more useful information.

Meaning what exactly? When someone here says DAC A sounds great and DAC B sounds like crap, how is that not a claim made under his test conditions? Heck, you don't even know his test conditions. At least with ABX tests, we have a protocol and a way of documenting the results, as I have been showing.

If you are saying someone can create a test where you can't tell the difference even if an audible difference exists, that is a truism. This is why we have specifications such as ITU-R BS.1116 on what a proper test is.

The issue is that audiophiles as a group are terrible at detecting small differences. This is why @soundfield is so confident that anyone saying, or even showing the result of, passing such tests must be lying or cheating.

As I have explained, we have a responsibility to create a proper test and give listeners every chance to pass, not work hard to make sure they don't. Before you say ABX tests make it hard: well, I am showing you that I can pass them. So that is not a valid excuse if you are really hearing what you claim to hear.

Really, audiophiles routinely claim that making a tweak to their system makes a night-and-day difference. So much so that the wife in the kitchen hears it as well. If so, it should be a walk in the park to pass the same comparison in an ABX test. If you can't do that with identical stimuli, then you need to learn why your sighted test was faulty. Don't go looking for problems in the blind test.

As an aside, conducting a proper audio double-blind test is tricky business. I’ve seen it done and it’s not as easy as it looks. When they’re well conducted, I’ve found that many differences become harder to distinguish than might be expected. When they are improperly conducted, such a test has no advantage over a sighted test and can yield misleading results.

This is a bunch of nebulous claims. I don't know what you have seen, what was hard about it, or how it generated worse results than a sighted test.

Such claims have been examined. For example, audiophiles claim they need long-term testing vs. short. Clark led such a study for his local audiophile group by creating a black box that generated a set amount of distortion. Audiophiles took these home but could not hear the distortion. Yet another group, with an ABX box and quick switching, not only detected that difference but even a lower one! See my digest of that paper here.

AES Paper Digest: Sensitivity and Reliability of ABX Blind Testing

The second of the tests consisted of ten battery powered black boxes, five of which had the distortion circuit and five of which did not. The sealed boxes appeared identical and were built to golden ear standards with gold connectors, silver solder and buss-bar bypass wiring. Precautions were taken to prevent accidental or casual identification of the distortion by using the on/off switch or by letting the battery run down. The boxes were handed out in a double-blind manner to at least 16 members of each group with instructions to patch them into the tape loop of their home preamplifier for as long as they needed to decide whether the box was neutral or not. This was an attempt to duplicate the long-term listening evaluation favored by golden ears.

This was the outcome:

The results were that the Long Island group [Audiophile/Take Home Group] was unable to identify the distortion in either of their tests. SMWTMS’s listeners also failed the "take home" test scoring 11 correct out of 18 which fails to be significant at the 5% confidence level. However, using the A/B/X test, the SMWTMS not only proved audibility of the distortion within 45 minutes, but they went on to correctly identify a lower amount. The A/B/X test was proven to be more sensitive than long-term listening for this task.
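As a sanity check on the quoted 11-out-of-18 figure, the same one-sided binomial calculation, assuming a fair 50/50 guess per box under the null, confirms it falls short of the 5% level and shows what score would have been needed. This is a sketch only; the helper name is mine.

# Why 11 correct out of 18 fails at the 5% significance level,
# assuming a 50/50 guess per box under the null hypothesis.
from math import comb

def tail_p(correct: int, trials: int) -> float:
    # P(at least `correct` right out of `trials` by guessing)
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(f"11/18: p = {tail_p(11, 18):.3f}")                 # ~0.240, not significant
needed = next(k for k in range(19) if tail_p(k, 18) < 0.05)
print(f"Need at least {needed}/18 correct for p < 0.05")  # 13/18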

See how I provide specifics to back what I say? Why do you think mere claims should be sufficient otherwise?

@soundfield 

Umm, over your right shoulder, in background

I see where you got confused. Almost all of the ASR video content has the analyzer in the background. None of these tests were run during that video. Every test I have been showing predates my YouTube channel by 5 or more years (see the dates in the ABX tests and the ones for the videos). In the video, I am just showing the results, not running them then. This should have been quite obvious.

As such, your claim that I had an analyzer running at the same time as the ABX testing is totally false.