Why HiFi Gear Measurements Are Misleading (yes ASR talking to you…)


About 25 years ago I was inside a large room with an A-frame ceiling and large skylights, during the Perseid Meteor Shower that happens every August. This one time was like no other, for two reasons: 1) There were large, red, fragmenting streaks multiple times a minute with illuminated smoke trails, and 2) I could hear them.

Yes, each meteor produced a sizzling sound, like the sound of a frying pan.

Amazed, I Googled this phenomenon and found that many people reported hearing the same sizzling sound associated with meteors streaking across the sky. In response, scientists and astrophysicists said it was all in our heads; that it was totally impossible. Why? Because of the distance between the meteor and the observer. Physics does not allow sound to travel fast enough to be heard at the same moment the meteor streaks across the sky. Case closed.

ASR would have agreed with this sound reasoning based in elementary science.

Fast forward a few decades. The scientists were wrong. It turns out the sound was caused by radiation emitted by the meteors, traveling at the speed of light and interacting with metallic objects near the observer (even an observer indoors), producing a sizzling sound. Researchers have actually recorded the audible sound alongside their recordings of the radiation. You can look this up easily and listen to the recordings.

Takeaway - trust your senses! Science doesn’t always measure the right things, in the right ways, to fully explain what we are sensing. Therefore your sensory input comes first. You can try to figure out the science later.

I’m not trying to start an argument or make people upset. Just sharing an experience that reinforces my personal way of thinking. Others of course are free to trust the science over their senses. I know this bothers some but I really couldn’t be bothered by that. The folks at ASR are smart people too.

nyev

@thespeakerdude 

I think the test you gave yourself is too easy 😀  Your point is well taken with regard to trained listeners and MP3. I think a better test is to serve up 10 different tracks, each of which may be MP3 or may be WAV, and then test how well listeners do at assessing whether each track is compressed or not. I wonder whether even trained listeners would be challenged in that case, without a reference.

I didn't give that test to myself.  I was challenged on a major forum by an objectivist to tell MP3 from the original, with him claiming that no one could.  At the same time, there had been a challenge on that forum to tell 16-bit content from 24-bit.  Content for that was produced by AIX Records, which is well known for the quality of its productions.  So, to remove any appearance of bias in the selection of material, I grabbed the clips from that test, compressed them to MP3, and posted those results.  The clip was not at all "a codec killer," where such differences are easier to hear.

On the type of test you mention, I am not a fan of them, for the reason you mention.  It is harder to identify the original vs. the compressed version that way, because you now have to know what the algorithm does to create or hide sounds.  In other words: is an artifact part of the original content, or was it introduced or removed by the codec?

Our goal with listening tests should always be to try to find differences, not to make it hard for people to find what is there.  Because once we know an artifact exists, we can fix it.  Making the test harder to pass runs counter to that.
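As a side note on scoring such tests: with a two-choice (compressed vs. original) format, it is easy to quantify how likely a given score is under pure guessing, using the binomial tail. A minimal sketch in Python; the trial counts below are just example numbers, not from any actual test:

```python
from math import comb

def p_by_chance(correct: int, trials: int, p_guess: float = 0.5) -> float:
    """Probability of getting `correct` or more right out of `trials`
    by guessing alone (upper tail of the binomial distribution)."""
    return sum(
        comb(trials, k) * p_guess**k * (1 - p_guess)**(trials - k)
        for k in range(correct, trials + 1)
    )

# Example: 9 of 10 two-choice trials correct, assuming pure guessing
print(f"{p_by_chance(9, 10):.4f}")  # -> 0.0107
```

A chance probability around 1% or lower is a commonly used bar for taking a blind-test outcome seriously; with only a handful of trials, even a high score can still be consistent with luck.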

That said, I and many others were challenged to such a test on the same major site mentioned above.  We were given a handful of clips and asked to identify which was which.  Results were privately shared with the test conductor.  When I shared my outcome, he told me I did not do all that well!  I was surprised, as I was sure two of the clips were identical and thought that pair had been put in as a control.

Fast forward to when the results were published and, wouldn't you know it, I was "wrong."  We had a regular member with a huge reputation for mixing soundtracks for major films, and he got it "right."  Puzzled, I performed a binary comparison and showed that the two files were identical!  The test conductor was shocked.  He went and checked, and found that he had uploaded the same file twice!  He declared the test faulty and that was that.
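For anyone wanting to replicate that kind of check, a byte-for-byte comparison needs no special tools. A minimal Python sketch, with hypothetical file paths:

```python
import filecmp
import hashlib

def files_identical(path_a: str, path_b: str) -> bool:
    """True if the two files are byte-for-byte identical.
    shallow=False forces an actual content comparison, not just stat data."""
    return filecmp.cmp(path_a, path_b, shallow=False)

def fingerprint(path: str) -> str:
    """SHA-256 hex digest, handy for comparing many clips at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()
```

On most systems, `cmp clip_a.wav clip_b.wav` or comparing checksums does the same job; two "different" test clips with identical hashes is exactly the smoking gun described above.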

Despite that, as you saw, I will repeat: I don't want to make blind tests too hard on purpose.  We need to be as interested in positive outcomes as in negative ones.

 

 

@amir_asr

Problem is, when most of you are put to blind tests, you flunk: you cannot tell the source from the compressed version. And no, a resolving system has nothing to do with it. The fact that you say that tells me you don’t know what it takes to hear such differences. As a trained listener in this domain, I can tell the differences with just about any headphone, on any system.

Yes, this is, I find, one of the most common myths among audiophiles. Whenever someone raises skeptical doubts about claims made about, say, the audible character of cables, the response to the skeptic is "ears or gear": either you don’t have the hearing acuity the Golden-Eared Audiophile does, or your system just isn’t "resolving enough," as if you need a "super resolving system" to hear these differences.

There are various problems with this idea:

1. Subtle sonic differences, if real, can be heard across a large range of transducers. Sure, we can get down to something like the worst laptop audio, but it really doesn’t take THAT much of a speaker to hear very subtle differences. I’ve worked in tons of different studio conditions, on different monitors and headphones of varying quality, and ALL have allowed me to hear and make the subtle changes I need to for my job. Give me an old pair of Radio Shack Minimus 7 speakers, and if I do a subtle EQ tweak to bring out the upper mids, you will hear it!

 

2. The type of sonic difference often ascribed to (for instance) cables is fairly dramatic: the "highs opening up," the bass becoming more punchy or extended, the tonal balance more forward or more laid back, etc. These are all qualities that, if that obvious, should be audible on most speakers. It’s why different masterings are obvious on most speakers, from cheap to expensive.

 

3. We have audiophiles reporting these "obvious sonic differences" across a wide range of systems and speakers. It’s not just the well-heeled audiophiles with the Super Resolving Systems. Just go to the typical Amazon page for some set of audiophile cables (even inexpensive ones) and you’ll see audiophiles with very modest systems reporting "obvious differences" with the cables in question.

From this, there are two possible implications for those making the Resolving System demand:

A. It’s a red herring to demand that someone must own a Very Resolving System in order to evaluate whether a cable is making a difference.

Or:

B. If these differences aren’t obvious on less resolving systems, then plenty of those audiophiles with modestly "resolving" systems are imagining the differences they hear between cables, which would only underline the problem variable of listener bias in the first place.

 

 

@andy2 

Amir has claimed that he personally listens to around 200 pieces of equipment per year.  That works out to 365/200, or roughly 1.8 days per piece.  I mean, come on, who in the world can take you seriously if you spend such a short time evaluating a piece of equipment?

In comparison, the people at Stereophile spend weeks with a particular piece of equipment before they publish the review.

They can spend months and it wouldn't make their reviews reliable.  If you know what you are doing, including the science and engineering of the gear and what the measurements show, you can zoom in and find issues.  You don't sit there listening to random track after random track for weeks.  That tells you nothing.

Every one of my listening tests uses the same revealing, proper tracks.  I focus on what the measurements say is wrong with the unit and test the level of audibility.  A headphone amp that has too little power into high impedances gets tested that way, with content designed to find audible issues.

The same gear, tested by one of your favorite reviewers, reads like a music review: oh, listen to this album and that album.  What?  I want to know what the equipment is doing, not what music you listen to.

Proper research has been performed to find such tracks.  See this:

You need to stop listening to your lay intuition and embrace the science of how to do such evaluations correctly.  Formal testing shows long-term listening to be much less revealing than instantaneous comparisons.  See this published research on that:

I implore you to start paying attention to decades of research on what it takes to properly evaluate audio gear.  The lay understanding and intuition need to go out the window.

@andy2 

That is because a good tube amp will cost a lot more than an SS amp.  To get the same performance you need to spend quite a bit more.  If money were not an issue, most people would go with tube.

Back to the myth that money buys performance.  It can, but you only know it if you measure.

That aside, almost all content you listen to was created and approved by the talent using solid state electronics.  Is your claim that they heard it with haze?  If so, that haze must be part of the experience they want you to have!  Best to leave it just like guitar distortion.  :)

@amir_asr

They can spend months and it wouldn’t make their reviews reliable. If you know what you are doing, including the science and engineering of the gear and what the measurements show, you can zoom in and find issues. You don’t sit there listening to random track after random track for weeks. That tells you nothing.

More time spent on a task can net better results. This is not always true. However, when it comes to listening (headphones, speakers, DACs, amps, etc.), spending more time evaluating a product before releasing a review can help in a few key ways:

1) Testing for Reliability

2) Features & Functionality

3) Overall sound quality analysis

4) Small important details

I’m sure that most of these people doing reviews have a standard set of reference tracks, or at least a background in or strong interest in audio, enough to make their impressions reliable. They were hired to do a task and might be very good at it. We have no way of knowing how much audio knowledge they have...

Tyll Hertsens of InnerFidelity (now defunct) was probably the best reviewer of headphones on the net. Like I said in one of my discussions, he took the time to describe what each headphone sounded like with a particular track. He also did measurements and highlighted key areas in the frequency response, or octaves, where performance could have been better, while quoting the measurements he took. And on top of all this, he compared it to headphones in the same price range with the same form factor: open or closed back!

Since he went above and beyond, this gives a potential customer huge insight! You can go ahead and test with the same track. You can wait a week, be really busy, visit an audio shop, listen to that headphone (or those headphones), and jot down your impressions in the notepad app on your phone.

Then you can compare and contrast your review with his - figure out if this headphone is worth the money for you. I’m sure that anyone who has bought audio products based on solid reviews like his will agree...

Now, back to reviews... I bought a SABAJ A10h based on your review. I also bought a DROP THX789 based on your review. In both cases, not only did I find that output power was severely lacking; each of them also had its own sound signature. Based on your measurements and overall write-up of both units, a potential buyer would actually believe that each of them was a "wire with gain"!

Please see my profile for a photo that illustrates this. Looking inside one of these devices tells you it is cheap to build. It uses an op-amp and tons of subtractive distortion limiting, i.e. negative feedback in the circuit. Tons of this, much like dynamic range compression in mastering, will limit the perceived dynamic range in a track. You’ve got to wonder how they put something together at that price point and sold it. You can easily look up the parts just by looking inside a unit and doing a parts inventory check... neither of them is state-of-the-art! lol

Alright... you can have your cake and eat it too! All I’m saying is: live and let live. Your tone, and how you almost bully people into listening to you, is rather rude. Hence why virtually every audio forum on the web has labelled you with all kinds of silly names.
