Some might argue that, if a specific listener claims to expect a difference between, say, a hi-res and lo-res signal, an ABX test with him is "testing the listener." But that’s mistaken. Such a test could only reveal whether that listener could distinguish a difference under the conditions of the test. Again, this is why multiple tests yield more useful information.
Meaning what exactly? When someone here says DAC A sounds great and DAC B sounds like crap, how is that not a claim made under his test conditions? Heck, you don't even know his test conditions. At least with ABX tests, we have a protocol and a way of documenting the results, as I have been showing.
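To make "documenting the results" concrete, here is a minimal sketch (my own illustration, not a quote from any standard or tool) of how an ABX run is typically scored: under the null hypothesis that the listener is just guessing, each trial is a coin flip, so an exact binomial test tells you how likely that score is by luck alone.

```python
# Minimal sketch: exact binomial p-value for an ABX run.
# Null hypothesis: the listener is guessing, so p(correct) = 0.5 per trial.
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """Probability of getting `correct` or more right out of `trials` by pure guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2**trials

# Example: 14 correct out of 16 trials
print(abx_p_value(14, 16))  # ~0.002, i.e. very unlikely to be luck
```

The exact threshold you demand (e.g. p < 0.05) is a choice you make and document up front, along with the number of trials, which is exactly the kind of record a sighted "DAC A vs DAC B" impression never gives you.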
If you are saying someone can create a test where you can't tell the difference even if an audible difference exists, that is a truism. This is why we have specifications such as ITU-R BS.1116 that define what a proper test is.
The issue is that audiophiles as a group are terrible at detecting small differences. This is why @soundfield is so confident that anyone claiming, or even showing the results of, passing such tests must be lying or cheating.
As I have explained, we have a responsibility to create a proper test and give listeners every chance to pass, not work hard to make sure they fail. Before you say ABX tests make that hard, well, I am showing you that I can pass them. So that is not a valid excuse if you are really hearing what you are claiming.
Really, audiophiles routinely claim that making a tweak to their system makes a night and day difference. So much so that the wife in the kitchen hears it as well. If so, it should be a walk in the park to pass the same comparison in an ABX test. If you can't do that with identical stimuli, then you need to learn why your sighted test was faulty. Don't go looking for problems in such a blind test.