Did Amir Change Your Mind About Anything?


It’s easy to make snide remarks like “yes, I do the opposite of what he says.” In some respects I agree, but if you do that, this thread is just going to be taken down. So I’m asking a serious question: has ASR actually changed your opinion on anything? For me, I would say two things. First, I am a conservatory-trained musician and I do trust my ears, but ASR has reminded me to double-check my opinions on a piece of gear to make sure I’m not imagining improvements. Not to get into double-blind testing, but just to keep in mind that the brain can be fooled, and to make doubly sure that I’m hearing what I think I’m hearing. The second is power conditioning. I went from an expensive box back to my Wiremold and I really don’t think I can hear a difference. Now that I understand the engineering behind how an audio component uses AC power, I am not convinced that power conditioning affects the component’s output. I think.
So please resist the urge to pile on. I think this could be a worthwhile discussion if that’s possible anymore. I hope it is. 

chayro

@mahgister 

The point on which I disagree with Amir is not the usefulness of his set of measurements; it is exclusively about treating a set of measurements as synonymous with perceived sound qualities because...

And no matter how many times I have stated it, you don't care that the above is NOT my position.  :(

Measurements tell you whether a system deviates from perfection in the form of noise, distortion, or non-neutral tonality. We want to know this because such deviations are the opposite of what high fidelity is about: transparency to what is delivered on the recording.

When measurements show excess noise and distortion, that is that. The system has those things, and if they rise to the point of audibility, you hear them. Best to get a system that minimizes them so you don't have to become an expert in psychoacoustics to predict audibility.
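To make the "noise and distortion" part concrete, here is a minimal sketch of how a THD+N figure can be estimated from a captured sine test tone. The function name and windowing details are my own illustration, not any particular analyzer's method:

```python
import numpy as np

def thd_n_db(capture: np.ndarray, fs: float, f0: float) -> float:
    """Estimate THD+N: everything that is not the fundamental
    (harmonics + noise) relative to the fundamental, in dB.
    `capture` is the recorded sine tone, fs the sample rate,
    f0 the test frequency. SINAD is the magnitude of this result."""
    windowed = capture * np.hanning(len(capture))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(capture), d=1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f0)))          # bin of the fundamental
    # Sum a few bins around the peak to absorb window leakage.
    fund_power = np.sum(spectrum[max(k - 3, 0):k + 4] ** 2)
    total_power = np.sum(spectrum ** 2)
    return 10 * np.log10((total_power - fund_power) / fund_power)

# Example: a 1 kHz tone with a little 2nd harmonic and noise added.
fs = 48_000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t) + 1e-4 * np.sin(2 * np.pi * 2000 * t)
tone += 1e-5 * np.random.randn(len(t))
print(f"THD+N ~ {thd_n_db(tone, fs, 1000):.1f} dB")  # about -80 dB
```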

Your argument needs to be that given two perfectly measuring systems, one will sound better than the other. To which I say: fine, show it in an ears-only, controlled listening test. Don't tell me what a designer thinks will happen. Just show it with a listening test.

You say the ears are the only thing that can judge musicality, but when I ask you for such testing, you don't have any and instead quote words at me about what is wrong with measurements. We want evidence for the hypothesis you have, not repeated statements of the hypothesis.

BTW, if such a controlled test did materialize, it would be trivial to create a measurement to show the difference. We would then know what it is that was observed. When you have nothing to show about what was tested, what music was used, what listeners reliably observed, etc., there is nothing to analyze.

@mahgister your points are valid to state. I'm not as savvy on audio science; I admit that. I also admit that science is really important with audio gear, just as it is with medicine and improving people's vision, for example.

My analogy isn’t scientific but is based in fact. You cannot strip out the subjectivity of audio.

No one is trying to take the subjectivity out of audio. The entire science of speaker and headphone testing relies on it extensively. The problem with using the ear to evaluate things is that it is difficult to do properly. So what to do? Give up and let any and all anecdotes rule the world? No. We research and find out which measurements correlate with listening results. Once there, we use the measurements because they are reliable, repeatable, and not subject to bias.

If there is doubt about measurements, we always welcome listening tests. We only ask that they be proper: levels matched, and ears the only sense used.
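"Levels matched" is commonly taken to mean within roughly 0.1 dB. A minimal sketch of deriving the matching gain from two captured sample arrays, assuming NumPy (function names are illustrative):

```python
import numpy as np

def rms_db(x: np.ndarray) -> float:
    """RMS level of a sample array in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(x.astype(np.float64) ** 2)))

def matching_gain(reference: np.ndarray, candidate: np.ndarray) -> float:
    """Linear gain to apply to `candidate` so its RMS matches `reference`."""
    return 10 ** ((rms_db(reference) - rms_db(candidate)) / 20)

# Example: candidate plays 0.5 dB hot; bring it in line before comparing.
rng = np.random.default_rng(0)
ref = rng.standard_normal(48_000) * 0.1
cand = ref * 10 ** (0.5 / 20)
cand = cand * matching_gain(ref, cand)
print(f"residual offset: {rms_db(cand) - rms_db(ref):.3f} dB")  # ~0.000
```

In practice the same check is often done with a voltmeter at the speaker or amplifier terminals; the point is that an unmatched half-decibel is reliably heard as "better," not "louder."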

Just like you can’t do it with food or anything to do with taste. You can’t measure taste. You can’t quantify it, but it is there.

Per the above, many times we can quantify it. The entire field of psychoacoustics is about that: *measuring* human hearing perception. You just need to do it properly, as I keep saying. Food research is done that way, with blind tests. There are no controversies there. But somehow audio is special.

Audiophiles hugely underestimate the impact of confounding elements in audio evaluation. This reminds me of some research that was done in wine tasting. Tasters were given the same wine but told in one tasting that it cost $10 and in another that it cost $90. Here is the outcome:

"For example, wine 2 was presented as the $90 wine (its actual retail price) and also as the $10 wine. When the subjects were told the wine cost $90 a bottle, they loved it; at $10 a bottle, not so much. In a follow-up experiment, the subjects again tasted all five wine samples, but without any price information; this time, they rated the cheapest wine as their most preferred."

See how strongly price comes into the equation here and how removing that aspect in a controlled test was the key to arriving at the truth of what tasted better?

They go on to say:

"Previous marketing studies have shown that it is possible to change people's reports of how good an experience is by changing their beliefs about the experience. For example, says Rangel, moviegoers will report liking a movie more when they hear beforehand how good it is. "Our study goes beyond that to show that the neural encoding of the quality of an experience is actually modulated by a variable such as price, which most people believe is correlated with experienced pleasantness," he says."

As you see, we are wired to pollute our observations with what we think in advance of such tests. It follows that if we want to know the truth about audio performance, all these other factors must be eliminated. Otherwise we would be judging the price, etc., and not the sound.

Of course, without going to school for a day, marketers and engineers in audio alike have learned the above. They know that all they have to do is have a good story and a high price, and the sale is made. No need for any stinking controlled test proving anything. Just say it, folks get preconditioned, and you are done.

Sorry, no. JA's measurements assume you flush-mount the speaker in an infinite wall. No stand-alone speaker is used that way. As such, his measurements exaggerate the bass energy. JA states the same: "The usual excess of upper-bass energy due to the nearfield measurement technique, which assumes that the radiators are mounted on a true infinite baffle, ie, one that extends indefinitely in both horizontal and vertical planes, is absent."

Once again, JA makes it clear it's a nearfield measurement without correction; it's up to the reader to read his speaker-measurements section. And up to you to read your own website, where long-time speaker designers explain:

https://www.audiosciencereview.com/forum/index.php?threads/how-to-make-quasi-anechoic-speaker-measurements-spinoramas-with-rew-and-vituixcad.21860/#post-726171

There is no way for you to predict where a speaker will be located in a room so as to provide any diffraction-loss compensation.

That's because you don't know what baffle diffraction loss is: it's determined purely by the size and shape of the baffle relative to the wavelengths involved, not by "location in room".
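For the curious, a common rule of thumb puts the baffle-step transition near f3 ≈ 115 / W, with W the baffle width in meters; below that frequency the radiation transitions from half-space (2π) to full-space (4π) and the response falls by up to 6 dB. A quick sketch of the approximation (it is a rule of thumb, not a substitute for measurement):

```python
def baffle_step_f3(baffle_width_m: float) -> float:
    """Approximate -3 dB frequency of baffle diffraction loss.
    Rule of thumb: f3 ~= 115 / W, with W the baffle width in meters.
    Below f3 the response drops by up to 6 dB as radiation goes
    from half-space (2*pi) to full-space (4*pi)."""
    return 115.0 / baffle_width_m

# A typical 22 cm wide bookshelf baffle:
print(f"f3 ~ {baffle_step_f3(0.22):.0f} Hz")  # ~523 Hz, regardless of room
```

Note that nothing in the formula depends on the room: only the cabinet geometry matters.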

And again, ultimately, correction/EQ below the transition frequency must be based on IN-ROOM measurements, not anechoic ones. Nearfield and/or anechoic data are of limited use down there other than to compare speaker vs. speaker in terms of extension. EQ will be needed regardless of how the measurement is presented.
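The "transition" in question is often approximated by the Schroeder frequency, fs ≈ 2000·sqrt(T60/V), below which discrete room modes dominate and in-room data is the only sensible basis for EQ. A quick sketch (the room numbers are illustrative):

```python
import math

def schroeder_frequency(rt60_s: float, room_volume_m3: float) -> float:
    """Schroeder transition frequency: fs ~= 2000 * sqrt(T60 / V).
    Below fs, discrete room modes dominate, so EQ decisions there
    should be based on in-room measurements, not anechoic data."""
    return 2000.0 * math.sqrt(rt60_s / room_volume_m3)

# Typical living room: RT60 ~ 0.4 s, volume ~ 60 m^3
print(f"fs ~ {schroeder_frequency(0.4, 60.0):.0f} Hz")  # ~163 Hz
```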

JA's measurements are fine, and often done in situ, unlike yours, Genelec's, Neumann's, PSB's, Revel's, etc.

He's not bringing an NFS (Klippel Near Field Scanner) to his reviewers' homes. His quasi-anechoic on/off-axis measurements above 300 Hz or so, nearfield below that, along with (mostly) in-room measurements, suffice. Claiming that he needs an NFS is petty. Voecks also did just fine on your Salon 2s without one.

NFS is a great tool, but certainly not mandatory for knowledgeable designers.

After several tests, the designer made a conscious decision to let it be, since after extensive listening tests it sounded much better in its original, untamed state. This is what many of us mean by "listen first, then measure": putting more emphasis on listening and on what sounds best as a means to an end, rather than on making graph lines flat.

Oh, I know perfectly well what you mean. Before starting Audio Science Review, I co-founded a forum specifically focused on high-end audio. Folks there spend more on audio tweaks than most of you spend on your entire systems! That is where @daveyf and I met. So there is nothing you need to tell me about audiophile behavior. I know it.

Here is the problem: there is no proof that the assertion of said designer is true. You say he did "extensive listening tests." I guarantee that you have no idea what that testing was, let alone that it was extensive. What music was used? What power level? What speakers? How many listeners? What are the qualifications of the designer when it comes to hearing impairments?

A story is told and believed. Maybe it is true; maybe it is not. After all, if he saw a significant measurement error, logic says the odds of it sounding good are low; why else would you tell that story? And if the odds are low, then we had better have a documented, controlled test showing the opposite, not just something told.

BTW, the worst person to trust in these things is the person with a vested interest. I don't mean this in a derogatory way. Designers just want to defend their designs and be right. So we had best not put our eggs in that basket, and ask for proof instead.

I have posted this story from Dr. Sean Olive before, but it seems I have to repeat it. When he arrived at Harman (Revel, JBL, etc.) from the National Research Council, he was surprised at the strong resistance from both the engineering and marketing people at the company:

To my surprise, this mandate met rather strong opposition from some of the more entrenched marketing, sales and engineering staff who felt that, as trained audio professionals, they were immune from the influence of sighted biases.

[...]

The mean loudspeaker ratings and 95% confidence intervals are plotted in Figure 1 for both sighted and blind tests. The sighted tests produced a significant increase in preference ratings for the larger, more expensive loudspeakers G and D. (note: G and D were identical loudspeakers except with different cross-overs, voiced ostensibly for differences in German and Northern European tastes, respectively. The negligible perceptual differences between loudspeakers G and D found in this test resulted in the creation of a single loudspeaker SKU for all of Europe, and the demise of an engineer who specialized in the lost art of German speaker voicing).

Do you see the problem with improper listening tests and engineers' opinions of such products?

These people shun science so much that they never test their hypotheses of what sounds good. Not once do they put themselves through a proper listening test. Because if they did, they would sober up, and quickly! Such was the case with me...

When I was at the height of my listening acuity at Microsoft and could hear you flush your toilet two states away :), my signal-processing manager asked me if I would evaluate their latest encoder with its latest tuning. I told him it would be faster if he gave me the tuning parameters; I would optimize them by listening and give him the numbers.

I did that after a couple of weeks of testing. The numbers were floating point (had fractions), and I found it necessary to go way deep, optimizing them to half a dozen decimal places. I gave him the numbers, and he expressed surprise, telling me they didn't use the fractions in the algorithm! That made me angry, as I could hear the difference even when changing a value by 0.001. I told him the difference was quite audible and I could not believe he couldn't hear it.

This was all over email, and the next thing I knew, he sent me a link to two sets of encoded music files and asked me which sounded better. I quickly detected that one was clearly better, matching my observations above. I told him in no uncertain terms that one set was better. Here is the problem: he told me the files were identical!

I could not believe it. So I listened again, and the audible difference was there, clear as day. So I performed a binary comparison, only to find that the files were indeed identical. Sigh. I resigned my unofficial position as the encoder tuner. :)
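For anyone wanting to run the same "binary test," a hash comparison of the raw bytes settles the question in seconds (the file names here are hypothetical):

```python
import hashlib

def sha256_of(path: str) -> str:
    """SHA-256 digest of a file's raw bytes, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical names for the two "different" encodes:
if sha256_of("encode_a.wav") == sha256_of("encode_b.wav"):
    print("Files are bit-identical; any heard difference came from the listener.")
else:
    print("Files genuinely differ at the byte level.")
```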

This is why I plead with you all to test your listening experiences in a proper test. Your designer could easily have done that. He could have built two versions of that amp, matched their levels, and performed blind AB tests with a number of audiophiles. Then, if the outcome was that the worse-measuring amp was superior, I would join him in defending it!
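Scoring such a blind test is elementary statistics: under the null hypothesis of pure guessing, the number of correct answers is binomially distributed, so a one-sided p-value tells you whether the listener reliably heard a difference. A minimal sketch (the 12-of-16 session is hypothetical):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: probability of scoring at least
    `correct` out of `trials` by guessing alone (p = 0.5 per trial)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Hypothetical session: 12 correct out of 16 trials.
p = abx_p_value(12, 16)
print(f"p = {p:.3f}")  # ~0.038: unlikely to be guessing at the usual 5% level
```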