The Audio Science Review (ASR) approach to reviewing wines.


Imagine doing a wine review as follows: samples of wine are assessed by a reviewer who measures multiple variables, including light transmission, specific gravity, residual sugar, salinity, boiling point, etc. These tests are repeated while playing test tones through the samples at different frequencies.

The results are compiled, the winner is selected based on those measurements, and the reviewer concludes that the other wines can't possibly be as good, based on their measured results.

At no point does the reviewer assess the bouquet of the wine or taste it. He relies on the science of measured results and not on the decidedly unscientific subjective experience of smell and taste.

That is the ASR approach to audio - drinking Kool-Aid, not wine.

toronto416

Erin showcased a significant problem with his approach and with ASR. For those who don't know, Erin is part of ASR. He's also purely a measurements guy, with a bit of subjective listening in his reviews.

Here's what happened. There's a McIntosh amp and a pair of level-matched (dB-tuned) mono Class D amplifiers. They could not be more different in their measurements - virtually apples and oranges. And the issue? Erin cannot tell the difference. Not a single difference.

What is the point of all this data when the end user can't tell the difference? You might say, well, it's just Erin, but I've come across many people who can't tell the difference - people at AVS.

Respect to Erin; at least he doesn't try to hide the truth and tells it like it is. HUGE RESPECT. If Amir did the same test, he would fail just as miserably.

@laoman Interesting product. Thanks for bringing it up. Bruno is a measurement-first engineer. Anything he makes is guaranteed to measure well, but that is no guarantee it will sound good.

There is so much discrepancy between how the Tambaqui sounds, its price tag, and its measurements. I'm going to make an argument and say it would have made more sense for everyone, Amir included, if the Tambaqui had measured badly instead.

Full disclosure: I have not heard the Tambaqui.

1) Based on owners' impressions and reviews, the Tambaqui performs on the level of the Chord Dave and the dCS Bartok. The price tags reflect their performance as well: Chord Dave - $14,400; Tambaqui - $13,400; Bartok - $20,950. The Bartok is 50% more, but I digress (I've been told the Bartok used to be very close in price to the Tambaqui).

2) Bartok measures BAD. Dave measures BAD. Tambaqui measures GOOD. Huh?

3) Topping D90se - $900. Measures GOOD. It measures so well that it is nearly identical to the Tambaqui. $900 vs $13,400. Nearly the same measurements. Huh?

4) I’ve owned the D90se. It sounded bad, subjectively bad. There is just no way the D90se would sound as good as the Tambaqui, despite the measurements. The measurements for these two products make no sense - no sense in price, no sense in performance.

So to conclude, the measurements make no sense; Amir once again proves his data is meaningless. Three products of similar performance: two measured poorly, one measured great. It makes no sense. Logical conclusions cannot be found at ASR or from Amir. The only thing that made sense here is the price tag (kind of).

This is what Amir had to say at the end of his Tambaqui review,

"Since I am not the one paying for it for you to purchase it, it is not my issue to worry about the cost. As such, I am happy to recommend the Mola Mola Tambaqui DAC based on its measured performance and functionality."

He’s happy to recommend the Tambaqui when the D90se can be had for $900. Such a humorous clown. 

Laugh out loud, roll on floor laughing again until I poop my pants. 

Do not buy anything based on ASR recommendations. Buy based on whether you like the sound. There are plenty of products they bash solely on specs that actually sound excellent, so don't pass on a product just because they trash it. That's my takeaway.

This post has acquired an interesting and persistent energy. ASR is clearly neither trivial nor unpersuasive; otherwise, why would so many try to denounce it?

I occasionally go to ASR to read their reviews. Their contributors don't appear to be particularly reductionist or dogmatic. If you know what they are using for testing, you can take that as a data point and move on. The peanut gallery in their comment sections is what you will find anywhere: people who opine based on the review and nothing more.

What no site appears to do is true blind listening tests using a standard setup for two-channel audio, and using self-validating methods (e.g. testing the same system twice to look for variation in the listener's attention and judgment).
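The blind-testing idea above can be made concrete. Here is a minimal sketch, assuming a standard ABX-style protocol (not any specific site's method): a listener's score over repeated blind trials is checked against chance with an exact binomial test. The function name and the trial counts in the example are illustrative.

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided p-value: the probability of getting at least `correct`
    answers right out of `trials` ABX trials purely by guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# A listener who scores 12/16 in a blind ABX session:
p = abx_p_value(12, 16)   # ~0.038: unlikely to be pure guessing
# whereas 9/16 gives p ~ 0.40, which is entirely consistent with chance.
```

A conventional threshold of p < 0.05 is often used, and running the same comparison twice (the self-validation mentioned above) helps flag sessions where the listener's attention, rather than the gear, drove the result.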