Why do intelligent people deny audio differences?


In my years of audiophilia I have crossed swords with my brother many times regarding what is real, and not real, in terms of differences heard and imagined.
He holds a master's degree in Education, taught himself enough about computers to become the MIS Director for a school system, and early in life actually taught himself to arrange music from existing compositions, yet he denies that any difference exists in the 'sound' of cables--to clarify, he denies that anyone can hear a difference in an ABX comparison.
Recently I mentioned that I was considering buying a new Lexicon, when a friend told me about the Exemplar, a tube-modified Denon CD player of the highest repute, video-wise, which is arguably one of the finest-sounding players around.
When I told him of this, here was his response:
"Happily I have never heard a CD player with "grainy sound" and, you know me, I would never buy anything that I felt might be potentially degraded by or at least made unnecessarily complex and unreliable by adding tubes."

Here is the rub: when CD players first came out, I owned a store and was a vinyl devotee, as that's all there was, and he saw digital as the panacea for great change: "It is perfect, it's simply a perfect transfer, ones and zeros, there is no margin for error," or words to that effect.
When I heard the first digital, I was appalled by its sterility and what "I" call 'grainy' sound. Think of the difference in CD now versus circa 1984. He, as you can read above, resists the notion that this is a possibility.
We are at constant loggerheads as to what is real and imagined regarding audio, with him on the 'if it hasn't been measured, there's no difference' side of the equation.
Of course I exaggerate, but just the other day he said, and this is virtually a quote, "Amplifiers above about a thousand dollars don't have ANY qualitative sound differences." Of course at the time I had Halcro sitting in my living room and was properly offended and indignant.
Sibling rivalry? That is the obvious answer here, but this really 'rubs my rhubarb', as Jack Nicholson said in Batman.
Unless I am delusional, there are gargantuan differences, good and bad, in audio gear. Yet he steadfastly sticks to his 'touch it, taste it, feel it' dogma.
Am I losing it, or is he just hard-headed (more than me)?
What, other than "I only buy it for myself," is the answer to people like this? (Or maybe us--me and you other audio sickies out there who spend thousands on minute differences?)
Let's hear both sides, and let the mud slinging begin!
lrsky
Textbooks, and the study they relate to, teach us how to learn, as well as some facts and techniques. Most of the electronics technology current when I went to school is obsolete today, but the open-minded but systematic approach to learning that I was taught is still valid, and, at last report, Ohm's law still applies. One cannot chase after every fool idea that comes down the pike, and science can help identify the ones that just might be valid for further study. Could science miss a good one? Sure. But some crazy guy will try it anyway, and become a hero.

By the way, how are we doing with cold fusion?
I'd suggest this approach.

Amongst the audiophile community, a very significant statistical majority holds that there are audible differences in cables. These are people who have done all kinds of listening tests in their home environments, and many would have preferred not to spend any unnecessary money.

These differences are statistically significant enough to constitute a valid observed phenomenon across a disparate group of individuals.

Now, the scientific response should be that since existing electrical testing methodology has shown only minor differences, and A/B/X testing has not determined anything conclusive, some other testing methodology must be found to either support or refute this widespread observation.
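To make "statistically significant" concrete for ABX testing: under the null hypothesis that the listener is guessing, correct answers follow a binomial distribution with a 50% chance per trial, so the p-value is the probability of scoring at least that well by luck alone. A minimal sketch (the trial counts below are illustrative, not from any actual test):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial test: probability of getting at least
    `correct` answers right out of `trials` ABX trials by pure
    guessing (null hypothesis: p = 0.5 per trial)."""
    favorable = sum(comb(trials, k) for k in range(correct, trials + 1))
    return favorable / 2 ** trials

# Example: 12 correct out of 16 trials.
# A p-value below 0.05 means guessing alone is an unlikely explanation.
print(f"p = {abx_p_value(12, 16):.4f}")  # p = 0.0384
```

This is why a handful of trials proves little either way: with only 5 trials, even a perfect score has p ≈ 0.03, and one miss pushes the result well above any conventional significance threshold.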

Case in point: when optical communications networks are used, fiber-optic cables carry the signals. Electricity is applied to one driver, and comes out the other end's receiver as electricity (of course opto-couplers are used in this case, but bear with me). If I took that fiber-optic cable and tested it for electrical characteristics, it would seem that it wouldn't carry any electricity at all, and it won't. But that doesn't mean that signals are not carried on it. You have to design your testing protocol to measure what you are trying to determine. When we add in the opto-couplers and know (ahead of time) that we are transmitting light signals with couplers on both ends, then we can measure the performance adequately. Similarly, we don't really know for sure (and this whole thread bears this out) what we are trying to measure. All we know is that the existing measuring techniques are apparently not adequate to account for a statistically significant and widespread observation.

So, one way to deal with it, is to just "dismiss" it as folly, or imagination. The other way is to figure out why the tests are inadequate, and determine new tests that actually can make some headway to finding out how to measure what is so commonly observed. The first step in this is to try to determine what the cables are doing that is not in our testing.

If every scientist dismissed everything that could not be readily measured at the time, we wouldn't know anything at all. Measurements are made to quantify observed phenomena. Anything that is a statistically significant occurrence justifies further investigation to find tests that can quantify it, whether they be electrical tests or acoustic tests, or whatever.

Something is going on here with these cables, and it would behoove us to find out what it is, and why it is.
>>Nevertheless, that doesn't mean timing is not an issue where differences in cables are concerned.<<

But, it also doesn't mean that it is an issue, either. Most of these "issues" -- like the myth about "roll-off" -- are passed around like rumors, but are either easily debunked, or there is no evidence -- outside of cable advertising -- to support the notion that they actually exist.

Cable advertisers dream up maladies, create insecurity in audio consumers, then give them the cure for the malady they've dreamed up.
>>Now, the scientific response should be that since existing electrical testing methodology has shown only minor differences, and A/B/X testing has not determined anything conclusive, some other testing methodology must be found to either support or refute this widespread observation.<<

The scientific approach is to build on prior knowledge, not to ignore it. Prior knowledge tells us why cables sound different--sometimes it's physics, sometimes it's psychology. If you cannot accept this, you're free to try to disprove it. Good luck.
On another thread, member Aball mentioned that the French and German governments are collaborating on a research project to find out why there are differences between what is currently measured, and what is heard.

Apparently, according to what Aball read, they have discovered some kind of micro-corona effect around wires, which interacts with the surrounding atmosphere or dielectric and causes ionization effects. He reports that this effect differs with varying applications. This collaboration has evidently produced a "measuring box", which can measure this in some way.

It is interesting to see that efforts are underway to explain this phenomenon.