Again, I’m thankful to rodman99999 for providing the longer quotes from Feynman
which serve so well to support the point I’d been making (as well as Amir).
Let’s take this section:
FEYNMAN: It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.
Details that could throw doubt on your interpretation must be given, if you know them. You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it.
I think a nice example of how this can work is the famous OPERA experiment, which purported to detect faster-than-light neutrinos:
https://en.wikipedia.org/wiki/Faster-than-light_neutrino_anomaly
Upon finding the anomaly in their results, the team of physicists knew how momentous it would be, so they checked and double-checked their findings, looking for any way things could have gone wrong. They re-ran the experiment and got the same results, and when months of doing everything they could to find errors were finished, they announced the results. However, being good scientists, they understood the extraordinary nature of the results and presented them to other scientists saying, basically: "Look, we got these unexpected results. We've done everything we can to trace possible biases, influences, or technical issues in our experiment...but we are presenting the results so you can double-check our work, and hopefully replicate the results."
Various possible flaws were suggested, and then the OPERA scientists later - just as Feynman would counsel - reported some possible flaws in their experiment that they'd discovered. Further investigation confirmed the flaws, and that, combined with others failing to replicate the results, disconfirmed the initial "discovery."
Just as science should work - for either disconfirmation or confirmation.
Along those lines, at a much more modest level, I've tried to hew to these general principles when I've wanted to be more sure or rigorous about my conclusions.
For example, I was curious about the Benchmark SS preamp I'd just bought vs. my CJ tube preamp, where the sonic differences seemed pretty obvious. Well...most here would say "of course they'd be obvious."
However, having done a variety of blind testing over the years - AC cables, video cables, DACs/CDPs, music servers - I’m familiar with how "obvious" sonic differences can feel under the influence of sighted bias - e.g., when you know what it is you are listening to. I’ve had "obvious" sonic differences vanish when I wasn’t allowed to know which was which. It’s very educational.
It was entirely possible that I was perceiving a sonic difference only because my perception was swayed by those wonderful "warm, glowing tubes...of course it's going to sound different!"
So, again, as Feynman would advise: the first rule is not to fool yourself, as you are the easiest person to fool. And since I know sighted bias is a big variable, I attempted a blind test to reduce the possibility of "fooling myself." I took various other steps to that end as well - ensuring there wasn't a way I could tell which preamp was being switched to, ensuring the switching was randomized, and trying to ensure the levels were matched so as to account for loudness bias, etc.
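For anyone curious, the randomization part of a blind A/B test like this can be sketched in a few lines of code. This is just a hypothetical illustration of the general idea (the function names and structure are mine, not a description of the actual test setup I used):

```python
import random

def make_trial_schedule(n_trials, labels=("A", "B"), seed=None):
    """Build a randomized schedule for a blind A/B listening test.

    Each trial randomly assigns one of the two devices. Only the
    test administrator (or the script) sees this schedule; the
    listener just hears "trial 1", "trial 2", and so on.
    """
    rng = random.Random(seed)
    return [rng.choice(labels) for _ in range(n_trials)]

def score_responses(schedule, responses):
    """Count how many trials the listener identified correctly."""
    return sum(1 for truth, guess in zip(schedule, responses) if truth == guess)

# Example: a 10-trial schedule, seeded here only so the example is repeatable.
schedule = make_trial_schedule(10, seed=42)

# The listener's guesses would be collected without seeing the schedule;
# these placeholder guesses just demonstrate the scoring step.
guesses = ["A", "B", "A", "A", "B", "B", "A", "B", "A", "B"]
print(f"{score_responses(schedule, guesses)}/10 correct")
```

If the listener is really hearing a difference, correct identifications should land well above the ~50% you'd expect from guessing; that's the whole point of taking the "which one is playing" knowledge out of the listener's hands.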
When I did my best...once again in concert with what Feynman would advise...I presented the results for other people to critique:
https://www.audiosciencereview.com/forum/index.php?threads/blind-test-results-benchmark-la4-vs-conrad-johnson-tube-preamp.33571/
As Feynman advised, I made sure to add as much detail about my method as I could, INCLUDING areas where I thought flaws could arise. And then I answered every question I could about my method, took some suggestions to double-check certain aspects, and looked at how others assessed the results.
It wasn't a scientific level of rigor, but I think it was in the spirit of the scientific mindset/approach, in the sense of all the above.
So I think I get fairly close to walking-the-walk in such instances with some of my own testing.
I wonder if rodman or others can show that any of their audio tests have a similar level of steps put in place to "not fool yourself," as well as presenting the results for others to critique?
This, btw, is also generally what Amir does. He presents his results with plenty of detail about his METHOD and RESULTS, so there is plenty of information on which people can critique the method or the results. It's not just "I put this in my system and I heard X, trust me!" It's "here, YOU can look for yourself at my DATA to see if I'm wrong." He presents it to the general public on his YouTube channel, and in the ASR forum, where he knows there are plenty of technically informed people who can help catch problems. And this is what goes on at ASR all the time.