@nyev
@amir_asr , thank you for sharing your perspectives. I have an education in computer engineering. But being an audiophile, I just don’t agree with the position that the science and measurements can totally explain our perceptions. I have a fundamental belief that science cannot explain all of the dimensions that impact our subjective interpretation of physical sound waves. Why? You have suggested folks get upset when ASR rejects a component that they subjectively praise. I think you are correct in many, many cases. It’s why people get so fired up about ASR. In my case I don’t care if ASR rejects a component that I subjectively enjoy - that doesn’t bother me in the slightest because if I enjoy it that’s all that matters to me. So why do I follow my subjective judgement over science? I simply don’t believe that science can FULLY explain, at our present level of understanding, how sound waves are subjectively interpreted by humans.
First, thank you for the kind attitude in asking this question. :) Much appreciated.
On your point, it is very true that we don't understand why we perceive what we perceive. Advancing that knowledge, though, is the domain of neuroscientists who want to diagnose disease. The science we follow is psychoacoustics, which is the "what" we hear and don't hear. When we need to, we do draw from neuroscience, but in general we don't need to.
Example: we know most people can't hear above 20 kHz. The why has to do with the design of the ear. But we don't need to know that. We simply conduct controlled tests and map out the highly non-linear frequency response of our hearing. We then use that to build things like lossy audio codecs (MP3, AAC, etc.), which work remarkably well at fooling people into thinking they are hearing high-fidelity sound. Again, we can look at features of our hearing like the inner hair cells (IHC), filter banks, etc., but we don't need to in order to build a lossy codec.
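To show how non-linear that hearing response is, here is a small sketch using Terhardt's well-known approximation of the absolute threshold of hearing (the same kind of curve psychoacoustic codec models are built on). The specific frequencies printed are just illustrative picks:

```python
import math

def ath_db(f_hz):
    """Approximate absolute threshold of hearing in dB SPL
    (Terhardt's formula, used in classic psychoacoustic models)."""
    f = f_hz / 1000.0  # work in kHz
    return (3.64 * f ** -0.8
            - 6.5 * math.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

# The curve is far from flat: we are most sensitive around 2-4 kHz,
# and the threshold rises steeply toward both extremes.
for f in (100, 1000, 3000, 10000, 16000):
    print(f"{f:>6} Hz: {ath_db(f):6.1f} dB SPL")
```

A codec exploits exactly this: quantization noise that sits below the curve (or below louder nearby sounds that mask it) is simply thrown away, and listeners don't notice.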
By the same token, we can measure a device's electrical characteristics and determine whether its errors fall below our threshold of hearing. Once they do, the what is what matters, not the why. We can declare the device transparent.
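As a back-of-envelope illustration of that logic (the numbers here are my assumptions for the example, not any official criterion): take a peak playback level at the ear, subtract the device's SINAD to find where its noise and distortion residual lands, and compare that to the threshold of hearing.

```python
def residual_spl(peak_listening_spl_db, sinad_db):
    """Level of a device's total noise+distortion at the ear,
    given peak playback level (dB SPL) and the device's SINAD (dB)."""
    return peak_listening_spl_db - sinad_db

def audibly_transparent(peak_listening_spl_db, sinad_db, hearing_floor_db=0.0):
    # If the residual lands at or below the threshold of hearing
    # (~0 dB SPL at the ear's most sensitive frequencies),
    # there is nothing left to hear from the device's errors.
    return residual_spl(peak_listening_spl_db, sinad_db) <= hearing_floor_db

print(audibly_transparent(105, 110))  # residual at -5 dB SPL -> True
print(audibly_transparent(105, 90))   # residual at +15 dB SPL -> False
```

Real audibility analysis also accounts for masking and the spectrum of the residual, but the comparison-against-threshold reasoning is the same.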
Now, I should note that listening tests play a huge role in audio science. Every speaker and headphone I test relies on decades of psychoacoustics and controlled testing to develop the target responses. Again, it doesn't matter why we are the way we are. In speaker testing, for example, we all seem to like a neutral response even though we have no idea how the music was mixed and mastered! We have an interesting compass inside us that says deviations from a flat on-axis response are not preferable.
To be sure, our hearing is complex. For example, there is a feedback loop from the brain to the hearing system that seeks out information in a noisy environment. This is the so-called "cocktail party effect," where we can hear people talking to us even though there is so much background noise from others talking. The brain dynamically creates filters to get rid of what you don't want to hear, and to hear what you do.
This causes problems in audio testing. You listen to product A. Then you go and listen to product B, hoping to find a difference. Your brain obeys your orders and tunes your hearing differently. All of a sudden you hear a darker background. Details become obvious that were not before. None of this is a function of device B, however. It is happening because you know what you are trying to do, and you use that knowledge to hear things differently in a comparison.
Because we understand the above, we perform testing blind. Once you don't know which is which, your brain can't bias the session. Actually, it tries, but we run enough trials to find out whether the result is random or due to actual audible differences.
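To make the "enough trials" part concrete, here is the standard binomial math behind an ABX-style test (a generic sketch, not any particular forum's protocol): how likely is it to get at least this many right by pure guessing?

```python
from math import comb

def abx_p_value(correct, trials):
    """Probability of getting at least `correct` answers right out of
    `trials` by pure guessing (one-sided binomial test, p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# 12 correct out of 16 trials: unlikely to be pure chance
print(f"{abx_p_value(12, 16):.3f}")  # 0.038
# 9 correct out of 16: entirely consistent with guessing
print(f"{abx_p_value(9, 16):.3f}")   # 0.402
```

This is why a lucky pick or two proves nothing, while a consistent score across many trials does: with enough trials, chance alone can no longer explain the result.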
So as you see, we understand what we need to understand to determine the fidelity of audio products. Said products are not magical. They have no intelligence. Measurements, as such, powerfully tell us what they are doing.