Some thoughts on ASR and the reviews


I’ve briefly taken a look at some online reviews of budget Tekton speakers from ASR and YouTube. Both are based on Klippel quasi-anechoic measurements used to produce simulated "in-room" responses.

As an amateur speaker designer and a lover of graphs and data, I have some thoughts. I mostly hope this helps the entire A’gon community get a little more perspective on how a speaker builder would think about the data.

Of course, I’ve only skimmed the data I’ve seen, I’m no expert, and have no eyes or ears on actual Tekton speakers. Please take this as purely an academic exercise based on limited and incomplete knowledge.

1. Speaker pricing.

One ASR review spends an amazing amount of time and effort analyzing the ~$800 US Tekton M-Lore. That price compares very favorably with a full Seas A26 kit from Madisound, which runs around $1,700. I’m not sure these inexpensive speakers deserve quite the nit-picking done here.

2. Measuring mid-woofers is hard.

The standard practice for analyzing speakers is called "quasi-anechoic." That is, we approximate measurements made in a room free of reflections or boundaries. You do this by taking very close measurements (within 1/2") of the individual drivers and blending them together (a rough sketch of that blending appears a bit further down). There are a couple of ways this can be incomplete, though.

a - Mid-woofers measure much worse this way than in a truly anechoic room. The 7" Scanspeak Revelators are good examples of this. The close-mic response is deceptively bad, but the 1 m in-room measurements smooth out a lot of problems. If you took the close-mic measurements (as seen in the spec sheet) as correct, you’d design the wrong crossover.

b - Baffle step - As popularized and researched by the late, great Jeff Bagby, the effects of the baffle on the output need to be included in any whole speaker/room simulation, which of course also means the speaker should have this compensation built in when it is not designed as a near-wall speaker. I don’t know enough about the Klippel simulation, but if this is not included you’ll get a bass-light experience compared to real life. The effect of baffle step compensation is more bass, but an overall lower sensitivity rating. (A back-of-envelope sketch of the baffle step follows.)
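To put rough numbers on it: a common rule of thumb, nothing specific to the Klippel rig or to Tekton, centers the baffle step near f ≈ 115 / W, with W the baffle width in meters, and the swing between half-space and full-space radiation is on the order of 6 dB. A toy calculation with a made-up baffle width:

```python
def baffle_step_center(baffle_width_m: float) -> float:
    """Rough center of the baffle step (Hz) for a given baffle width in
    meters, using the common f ~= 115 / W rule of thumb."""
    return 115.0 / baffle_width_m

# Example: a hypothetical 0.22 m (~8.7") wide tower baffle.
f_step = baffle_step_center(0.22)
print(f"Baffle step centered near {f_step:.0f} Hz")  # ~520 Hz for this width
# Below this region the speaker radiates into full space (4*pi) and loses
# up to ~6 dB on axis; compensating in the crossover restores the bass
# balance but drags the overall sensitivity rating down by a similar amount.
```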

For both of those reasons, an actual in-room measurement is critical to assessing real speaker behavior. We may not all have the same room, but this is a great way to see the true mid-woofer response as well as the effects of any baffle step compensation.
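For anyone curious about the "blending" I mentioned above, here is a bare-bones sketch of splicing a nearfield (close-mic) curve onto a gated farfield curve. The function name, splice frequency, and dummy data are purely illustrative; real design tools (VituixCAD, REW, and the like) handle port summing and baffle diffraction far more carefully.

```python
import numpy as np

def splice_responses(freq, far_db, near_db, f_splice=300.0):
    """Blend a nearfield (close-mic) response into a gated farfield response
    below a chosen splice frequency.

    freq     : frequency points in Hz (same grid for both curves)
    far_db   : gated farfield magnitude response in dB
    near_db  : nearfield magnitude response in dB
    f_splice : frequency below which the farfield gate is unreliable
    """
    freq, far_db, near_db = map(np.asarray, (freq, far_db, near_db))

    # Level-match the two curves at the splice point so they join smoothly.
    i = np.argmin(np.abs(freq - f_splice))
    offset = far_db[i] - near_db[i]

    # Use the shifted nearfield curve below the splice, farfield above it.
    return np.where(freq < f_splice, near_db + offset, far_db)

# Example with a made-up 20 Hz - 20 kHz grid and dummy curves:
f = np.logspace(np.log10(20), np.log10(20000), 500)
far = np.zeros_like(f)        # pretend the gated farfield is flat
near = np.full_like(f, -3.0)  # pretend the nearfield sits 3 dB lower
combined = splice_responses(f, far, near, f_splice=300.0)
```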

Looking at the quasi-anechoic measurements done by ASR and Erin, it _seems_ that these speakers are not compensated, which may be OK if close-wall placement is expected.

In either event, you really want to see the actual in-room response, not just the simulated response, before passing judgment. If I had to critique based strictly on the measurements and simulations, I’d 100% wonder whether a better design would be to trade sensitivity for more bass, and the in-room response would tell me that.

3. Crossover point and dispersion

One of the most important choices a speaker designer has is picking the -3 or -6 dB point for the high- and low-pass filters. A lot of things have to be balanced and traded off, including the cost of crossover parts.

Both of the reviews above seem to imply a crossover point that is too high for a smooth transition from the woofer to the tweeter. No speaker can avoid rolling off the treble as you go off-axis, but the best do so very evenly. This gives the best off-axis performance and offers up great imaging and wide sweet spots. You’d think this was a budget-speaker problem, but it is not. Look at reviews of B&W’s D-series speakers and many Focal models for examples of expensive, well-received speakers that don’t excel at this.

Speakers which DO typically excel here include Revel and Magico. This is by no means a claim that you should buy Revel because B&W sucks. Buy what you like. I’m just pointing out that this limited-dispersion problem is not at all unique to Tekton, and in fact many other Tekton speakers don’t suffer this particular set of challenges.

In the case of the M-Lore, the tweeter has really amazingly good dynamic range. If I were the designer, I’d definitely want to ask if I could lower the crossover by 1 kHz, which would give up a little power handling but improve the off-axis response (a rough beaming estimate is sketched below). One big reason not to is crossover cost: I might have to add more parts to flatten the tweeter response well enough to extend its useful range. In other words, a higher crossover point may hide tweeter deficiencies. Again, Tekton is NOT alone if they did this calculus.
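To make the dispersion argument concrete: a cone driver starts to narrow (beam) roughly where the reproduced wavelength shrinks to about its effective cone diameter. That is only one of several rules of thumb, and the driver size below is hypothetical rather than Tekton’s actual part, but a quick back-of-envelope check looks like this:

```python
SPEED_OF_SOUND = 343.0  # m/s at room temperature

def beaming_onset_hz(effective_cone_diameter_m: float) -> float:
    """Frequency where the wavelength equals the effective cone diameter,
    a rough ceiling above which a woofer narrows noticeably off axis."""
    return SPEED_OF_SOUND / effective_cone_diameter_m

# Hypothetical 6.5" mid-woofer with ~13 cm effective cone diameter.
f_beam = beaming_onset_hz(0.13)
print(f"Beaming becomes significant above roughly {f_beam:.0f} Hz")  # ~2.6 kHz

# If the tweeter can take the power, crossing at or below this point keeps
# the woofer's narrowing dispersion from leaving a 'waist' in the off-axis
# response right where the tweeter is still radiating wide.
```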

I’ve probably made a lot of omissions here, but I hope this helps readers think about speaker performance and costs in a more complete manner. The listening tests always matter more than the measurements, so finding reviewers with trustworthy ears is really more important than following taste-makers who let the tools, which may not be properly used, judge the experience.

erik_squires

"These guys are true enthusiasts... they love stereo and love music."

Every one of us loves stereo and music. To be a reviewer, you need to know more than the average listener about the science and engineering of the technology you are reviewing. Sitting in front of a camera and posting audio illusions to make money from ads and sponsorships just spreads misinformation. You should be more on guard than this. Ask them to show you what training they have in being a critical listener. Ask them what formal experience they have. If they are just you with a camera, then that is useless. Just because they believe in the same myths as you do with respect to cost translating into sound fidelity doesn't make them remotely correct.

Nope. That was in reference to all of the "listeners" here.

Ah, my apologies then.  :)

"Over 2 million people visit ASR every month"

As someone pointed out earlier in this thread, 2 million clicks doesn't mean 2 million people.

"The second is in the context of reviewing products. But even the second statement means many hours of listening given the 200 to 300 products I test every year"

What the 2nd statement suggests is that you do in fact listen; however, the scope of your listening relative to your reviews is limited in focus, which is what folks have been critical about in this thread.

Your 2nd statement:

""I hear you but where do you draw the line? I listen to all speakers and headphones I review. I also listen to every headphone amplifier and portable DAC+HP amp I review. As you go further upstream, I listen less and less."

 

"What the 2nd statement suggests is that you do in fact listen; however, the scope of your listening relative to your reviews is limited in focus, which is what folks have been critical about in this thread."

That's because they don't understand the power of measurements and the science of psychoacoustics, which show that many of these devices are transparent to the source, obviating the need for listening tests.

When there are gray areas, or I suspect people will use this as an excuse to dismiss the review, I listen. Here is an example of the latter: the Belden ICONOCLAST XLR Cable Review.

 

Iconoclast XLR Cable Listening Tests
I used two setups for the listening tests: headphones and my main 2-channel system.

Headphone Listening: the source was a computer acting as the streamer, running Roon into an RME ADI-2 Pro ($2K) serving as DAC and headphone amplifier, driving my Dan Clark Stealth headphones ($4K). I started listening with the Iconoclast cable. Everything sounded the same as I was used to. I then switched to the WBC cable. Immediately I "heard" more air, more detail and better fidelity. This faded in a few seconds though, and the sound was just as it was with the Iconoclast.

For my main system, I used a Topping D90SE driving a Topping LA90, which in turn drove my Revel Salon 2 speakers. I picked tracks with superb spatial qualities to judge the usual "soundstage." I again started with the Iconoclast XLR TPC cable. I was once again blown away by how good my system sounds. :) I don't get to enjoy it often enough given how much time I spend working at my desk. Anyway, after a while I switched to the WBC cable. Once again, the immediate reaction was that the sound was more open, the bass was a bit tighter, etc. This too passed after a few seconds, and everything sounded the same again.

I even performed a null test with music and linked to the files in the review.

Another example is the review of the Chord GroundARAY "noise" filter/grounding device.

 

This is a dongle you attach to unused ports on your system.  It has no circuit in it, passive or active.  It just takes the ground connection and terminates it in some material. It would violate the rules of the universe if it did what they claim!  Of course measurements showed that it did nothing.  Here are my listening tests:

Chord GroundARAY Listening Tests
My standard workstation, where I perform my testing, is naturally connected to our home network through a TP-Link switch, over which a lot of data files come and go during testing. It has 8 ports with a few unused ones, so I plugged the GroundARAY into one of them. Inserting the device is easy. Getting it out is not, because the tab is then hidden enough that you can't push it to unlock it. I had to use a screwdriver to push the lock in to remove it.

I played my reference tracks using the RME ADI-2 Pro as I inserted and then removed the GroundARAY. There was no difference whatsoever to my ears. To avoid the accusation that I don't want to hear a difference, I then performed a null test using member @pkane's DeltaWave program. Here, the RME ADI-2 Pro is capturing its own output for analysis. I made two captures: one with and one without the GroundARAY. Here is the spectrum of the null (difference) result:
[null spectrum plot not reproduced here]
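For anyone who hasn't seen one, the idea behind a null test is simple: capture the same music twice, once with and once without the device, align the two recordings, and subtract; whatever refuses to cancel is the real difference. DeltaWave does this properly (correcting clock drift, gain, and phase); the sketch below is only a bare-bones illustration, and the capture file names are made up.

```python
import numpy as np
import soundfile as sf                 # pip install soundfile
from scipy.signal import correlate     # pip install scipy

def null_residual_db(file_a: str, file_b: str) -> float:
    """Very crude null test: time-align two captures, subtract them, and
    report how far the residual sits below the original signal, in dB."""
    a, fs_a = sf.read(file_a)
    b, fs_b = sf.read(file_b)
    assert fs_a == fs_b, "captures must share a sample rate"
    if a.ndim > 1:                     # fold stereo to mono for simplicity
        a, b = a.mean(axis=1), b.mean(axis=1)

    # Align by the peak of the cross-correlation (FFT-based, so it's fast).
    lag = np.argmax(correlate(a, b, method="fft")) - (len(b) - 1)
    if lag > 0:
        a = a[lag:]
    else:
        b = b[-lag:]
    n = min(len(a), len(b))
    residual = a[:n] - b[:n]

    # A deeply negative number means the two captures are essentially identical.
    return 20 * np.log10(np.std(residual) / np.std(a[:n]))

# Hypothetical usage with made-up capture files:
# print(null_residual_db("with_dongle.wav", "without_dongle.wav"))
```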
The little dongle costs a cool $795!  Imagine how many real things you could buy for that much money to improve your enjoyment of everyday life.

Should I waste my time constantly doing these listening tests when the results are so conclusive over and over again? 

Where is the responsibility of the company in all of this?  Why don't they assemble a group of audiophiles and test them properly to show these things make a difference?  Where is the real engineering and physics explanation of any of these things making a difference?

As I said, you all need to be more skeptical here. There are a ton of people taking advantage of your improper listening tests, which result in every device making a difference no matter what it does. All this energy is put toward getting me to produce more data on these devices, yet you don't apply a fraction of it to the companies that make these products to prove their claims. To prove they know something, anything, about engineering.