Good read: why comparing specifications is pointless



“… Bitrates, sampling rates, bit depths, wattages, amplifier classes… as an audio enthusiast, there are countless specifications to compare. But it is – virtually – all meaningless. Why? Because the specifications that matter are not reported, and because every manufacturer measures differently. Let’s explain that...”


akg_ca

I'd like to slag off most of the posters on this thread.

From what they say, it just seems like the right thing to do....

Translation:  try to be objective and less personal.

On point.  Comparing specifications is utterly pointless unless you also listen to the gear.  If you listen, then specifications have relevance.

@juanmanuelfangioii, don’t you think our guest from ASR has done a fantastic job confirming the title of this thread, basically proving why specs are pointless?

1) He is reviewing speakers in a room of bare wood, so whatever the specs say, the speakers won’t sound like that in his room (nor will any other gear played in that room, for that matter).

2) He is reviewing home theater gear without actually using it in a home theater, so he can only imagine what "the specs" sound like in real-world conditions, LOL.

3) He can’t/won’t fix any of these problems, so the specs will never help no matter what they are. So essentially, they won’t matter, right?


Amir,

I am sure you are well-intentioned. But your posts provide evidence that you do not understand what quality audio is all about.

You listen to half of the units? And process 300 units per year? I would not begin to consider evaluating a single new component without listening to it for a couple of months, and only after being completely familiar with my system, unchanged, for months… many months. This establishes a baseline sound you understand at all levels. This is a reference system. A professional reviewer will spend months evaluating a single component.

Have you read professional reviews? Reviewers have systems they understand inside and out. Then they spend, what, a hundred… sometimes several hundred hours listening. The complexity of sound reproduction is layer upon layer of nuance, which is why a whole glossary of terminology exists to describe it: rhythm and pace, micro-details, etc.

It now makes sense how your charts match your perceived quality. Your sonic evaluation is so cursory that all you pick up are the grossest, highest-level characteristics of the sound. This is not at all what high-performance audio is about. It is about communicating the full breadth and depth of the musical experience… not a wire-frame representation.

When I attend a live symphony, it can be so emotionally moving that it brings tears to my eyes. I am left breathless at the beauty created by the music. My audio system can do that. This is what the pursuit is about. Achieving it requires incredible dedication from the people who design components, evaluation of sound far beyond a few variables, and care in choosing and assembling a system. Great components come from designers who keep listening to their products long after they have run out of variables to measure.


Why would you even bother coming here, Amir?

Nothing you say carries any weight in my audio choices. We are at polar opposites in how we choose equipment.

You have your loyal followers on ASR. I don’t go on there and criticise your beliefs. If I joined up and said what I believe, I would be booted out. Go back to where people appreciate what you say, and reinforce what you believe to be true. Hopefully a few others here follow you.

@ghdprentice 

I would not begin to consider evaluating a single new component without listening to it for a couple of months.

I appreciate that you think you need that much time to evaluate a component.  But you and I are not similarly situated for many reasons:

1. I understand the full design and architecture of what I am testing.  This allows me to focus on where its weaknesses and strengths are.  An example of the former is a powered speaker: these routinely have amplifiers that run out of gas before their drivers do, so I test for that (see the sketch just below).  I am not just shooting in the dark, thinking any and all things need to be evaluated.
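
To make "running out of gas" concrete, here is a toy sketch of that failure mode: a simulated soft-clipping amplifier whose measured THD climbs sharply once the drive level exceeds its headroom. This is an illustrative simulation, not Amir's actual test rig; the tanh model, the 1 kHz tone, and all parameter values are assumptions.

```python
# Toy illustration only (an assumption, not the actual test rig): model an
# amplifier that saturates, then estimate THD from the FFT of its output.
import numpy as np

FS = 48_000                      # sample rate, Hz
F0 = 1_000                       # test-tone frequency, Hz
t = np.arange(FS) / FS           # one second of time stamps
tone = np.sin(2 * np.pi * F0 * t)

def soft_clip_amp(x, headroom=1.0):
    """Toy amplifier: roughly linear at low level, saturating near `headroom`."""
    return headroom * np.tanh(x / headroom)

def thd_percent(sig, f0, fs, n_harmonics=5):
    """Estimate THD as harmonic energy relative to the fundamental."""
    spectrum = np.abs(np.fft.rfft(sig * np.hanning(len(sig))))
    bin_of = lambda f: int(round(f * len(sig) / fs))
    fundamental = spectrum[bin_of(f0)]
    harmonics = [spectrum[bin_of(k * f0)] for k in range(2, n_harmonics + 2)]
    return 100 * np.sqrt(sum(h * h for h in harmonics)) / fundamental

# Sweep the drive level: THD stays tiny while the amp is linear, then
# shoots up once the input pushes past the available headroom.
for drive in (0.1, 0.5, 1.0, 2.0):
    out = soft_clip_amp(drive * tone)
    print(f"drive {drive:4.1f} x headroom -> THD ~ {thd_percent(out, F0, FS):6.2f} %")
```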

2. I use measurements, which help immensely with #1 above.  They show me objectively and reliably where I need to look.  If a speaker has a dip at 2 kHz, I use equalization to fill it.  I then perform A/B testing to determine how audible that is (sketched below).  Measurements are quick.  Electronics/tweaks take an afternoon.  Speakers take about a day.  With that in hand, and knowledge of the product, I am able to make very rapid progress in listening tests.
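
A minimal sketch of that "fill the dip, then A/B it" step, assuming a standard RBJ (Audio EQ Cookbook) peaking biquad; the +3 dB boost at 2 kHz, the Q of 2, and the noise excerpt standing in for music are all illustrative assumptions, not the actual settings:

```python
# Hedged sketch: fill an assumed -3 dB dip at 2 kHz with a peaking EQ,
# then level-match the EQ'd version for an A/B comparison.
import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """RBJ Audio-EQ-Cookbook peaking filter; returns (b, a) coefficients."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 48_000
b, a = peaking_eq(fs, f0=2_000, gain_db=3.0, q=2.0)   # fill the dip

x = np.random.randn(fs)          # stand-in for a one-second music excerpt
x_eq = lfilter(b, a, x)          # "B" stimulus; plain x is "A"

# Level-match before comparing: even small loudness differences bias
# listeners toward the louder presentation.
x_eq *= np.sqrt(np.mean(x ** 2) / np.mean(x_eq ** 2))
```

The level-matching line is what makes the A/B comparison fair; without it, the boosted version tends to win simply for being louder.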

3. I have tested well over 1,000 devices in the last 3 to 4 years.  That has enabled me to build methods and systems for fast, reliable comparisons.  For example, I have special music tracks that instantly tell me how well a speaker reproduces sub-bass.  I also know how a speaker's competitors perform relative to the unit I am testing.

Audiophiles and "professional reviewers" throw random music at equipment with no aim or direction.  No wonder it takes them so much longer to learn something about a product.  In the end, they may just be guessing.

4. I am a professionally trained critical listener.  I also know the psychoacoustics and research in this area, which says long-term testing is completely unreliable.  See the digest of this AES paper:

Here is the punchline:

The results were that the Long Island group [Audiophile/Take Home Group] was unable to identify the distortion in either of their tests. SMWTMS's listeners also failed the "take home" test scoring 11 correct out of 18 which fails to be significant at the 5% confidence level. However, using the A/B/X test, the SMWTMS not only proved audibility of the distortion within 45 minutes, but they went on to correctly identify a lower amount. The A/B/X test was proven to be more sensitive than long-term listening for this task.
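
As a quick sanity check on the quoted numbers (my addition, not part of the paper), 11 correct out of 18 can be tested against chance guessing with a one-sided exact binomial test:

```python
# Exact binomial test: probability of scoring 11+ out of 18 by guessing.
from math import comb

n, k = 18, 11
p_value = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
print(f"P(>= {k}/{n} correct by chance) = {p_value:.3f}")  # ~0.240
```

A p-value of roughly 0.24 is nowhere near the 0.05 threshold, which is exactly why the digest calls the take-home score non-significant.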

Or, if you are more comfortable with video, here is a complete tutorial on listener training, my ability to find small impairments, and an explanation of the above paper:


5. Adaptation.  Our brain adapts to its environment.  Think of your computer fan running: after a bit, you forget about it.  That is adaptation at play.  The same thing happens with, say, a speaker that is bright.  Listen to it for a while and you adapt, no longer thinking it is bright.  It becomes the "new normal."  This is why speakers rank the same in formal studies regardless of the room they are tested in: your brain learns to listen through the room.  From a reviewing point of view, you want to convey the true nature of the sound, not what you have adapted to.

Dr. Toole explains this effect very well in his wonderful book:

I could go on, but I hope you get the message that I follow the science and research in what I do.  What you and other reviewers do is based on lay impressions and what others have told you.  You have no proof that you are creating reliable results.  Indeed, research shows, as I posted earlier, that professional reviewers are terribly unreliable in their assessments of speaker sound.

So you do what you want to do.  But unless you can prove your methodology to be right, and better, there is no argument here.