Some thoughts on ASR and the reviews


I’ve briefly looked at some online reviews of budget Tekton speakers, one from ASR and one on YouTube. Both are based on Klippel quasi-anechoic measurements used to produce "in-room" simulations.

As an amateur speaker designer and a lover of graphs and data, I have some thoughts. I mostly hope this helps the entire A’gon community get a little more perspective on how a speaker builder thinks about the data.

Of course, I’ve only skimmed the data I’ve seen, I’m no expert, and have no eyes or ears on actual Tekton speakers. Please take this as purely an academic exercise based on limited and incomplete knowledge.

1. Speaker pricing.

One ASR review spends an amazing amount of time and effort analyzing the ~$800 US Tekton M-Lore. That price compares very favorably with a full Seas A26 kit from Madisound, which runs around $1,700. I’m not sure these inexpensive speakers deserve quite the nit-picking done here.

2. Measuring mid-woofers is hard.

The standard practice for analyzing speakers is called "quasi-anechoic." That is, we approximate measurements made in a room free of reflections or boundaries. You do this by taking very close measurements (within 1/2") of each driver and blending them together. There are a couple of ways this can be incomplete, though.

a - Midwoofers measure much worse this way than in a truly anechoic room. The 7" Scanspeak Revelators are good examples of this. The close-mic response is deceptively bad, but the 1 m in-room measurements smooth out a lot of the problems. If you took the close-mic measurements (as seen in the spec sheet) at face value, you’d design the wrong crossover.

b - Baffle step - As popularized and researched by the late, great Jeff Bagby, the effects of the baffle on the output need to be included in any whole speaker/room simulation, which of course also means the speaker should have this built in when it is not a near-wall speaker. I don’t know enough about the Klippel simulation, but if this is not included you’ll get a bass-light experience compared to real life. The effect of baffle-step compensation is more bass, but an overall lower sensitivity rating.

For both of those reasons, an actual in-room measurement is critical to assessing real speaker behavior. We may not all have the same room, but this is a great way to see the true mid-woofer response as well as the effects of any baffle step compensation.
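To make the baffle step a bit more concrete, here is a rough back-of-the-envelope sketch in Python using the common rule of thumb f3 ≈ 115 / baffle width (in meters) and a simple ~6 dB shelf. The 25 cm baffle width is just a number I picked for illustration, not a Tekton spec, and this is nothing like a real diffraction simulation:

```python
def baffle_step_f3(baffle_width_m: float) -> float:
    """Approximate -3 dB frequency of the baffle step, using the common
    rule of thumb f3 ~ 115 / baffle width (width in meters)."""
    return 115.0 / baffle_width_m

def baffle_step_shelf_db(f_hz: float, f3_hz: float, step_db: float = 6.0) -> float:
    """Very rough shelf model: the response rises ~6 dB as radiation
    transitions from full space (4*pi) at low frequencies to half space
    (2*pi) above the baffle step.  A sketch, not a diffraction model."""
    x = f_hz / f3_hz
    return step_db * (x * x) / (1.0 + x * x)  # smooth 0 dB -> +6 dB transition

if __name__ == "__main__":
    width = 0.25  # hypothetical 25 cm wide baffle (illustration, not an M-Lore spec)
    f3 = baffle_step_f3(width)
    print(f"Estimated baffle-step f3: {f3:.0f} Hz")
    for f in (100, 200, 460, 1000, 2000):
        print(f"{f:5d} Hz: {baffle_step_shelf_db(f, f3):+.1f} dB vs. the low-frequency level")
```

Baffle-step compensation is basically the inverse of that shelf: you pad everything above f3 down toward the low-frequency level, which is exactly the more-bass-for-less-sensitivity trade mentioned above.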

Looking at the quasi-anechoic measurements done by ASR and Erin, it _seems_ that these speakers are not compensated, which may be OK if close-wall placement is expected.

In either event, you really want to see the actual in-room response, not just the simulated response, before passing judgment. If I had to critique based strictly on the measurements and simulations, I’d 100% wonder whether a better design would be to trade sensitivity for more bass, and the in-room response would tell me that.

3. Crossover point and dispersion

One of the most important choices a speaker designer has is picking the -3 or -6 dB point for the high and low pass filters. A lot of things have to be balanced and traded off, including the cost of crossover parts.
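As a quick illustration of why the -3 vs. -6 dB choice matters, here is a textbook sketch assuming ideal, coincident drivers (which real speakers are not): 4th-order Linkwitz-Riley sections sit at -6 dB at the crossover and sum flat, while -3 dB 2nd-order Butterworth sections (with the tweeter polarity flipped to avoid a null) sum with a +3 dB bump:

```python
import numpy as np

def lr4_pair(s):
    """4th-order Linkwitz-Riley low/high pass; each section is -6 dB at fc
    (s is the complex frequency normalized to the crossover frequency)."""
    d = (s**2 + np.sqrt(2) * s + 1) ** 2
    return 1 / d, s**4 / d

def bw2_pair(s):
    """2nd-order Butterworth low/high pass; each section is -3 dB at fc,
    with the tweeter polarity flipped to avoid a null at the crossover."""
    d = s**2 + np.sqrt(2) * s + 1
    return 1 / d, -(s**2) / d

def db(x):
    return 20 * np.log10(np.abs(x))

s_fc = 1j  # evaluate right at the crossover frequency (f/fc = 1)
for name, (lp, hp) in (("LR4, -6 dB sections", lr4_pair(s_fc)),
                       ("BW2, -3 dB sections", bw2_pair(s_fc))):
    print(f"{name}: each filter {db(lp):+.1f} dB at fc, acoustic sum {db(lp + hp):+.1f} dB")
```

Real acoustic slopes, driver offsets, and baffle effects change all of this, which is part of why crossover design eats so much time and so many parts.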

Both of the reviews above seem to imply a crossover point that is too high for a smooth transition from the woofer to the tweeter. No speaker can avoid rolling off the treble as you go off-axis, but the best designs do so very evenly. This gives the best off-axis performance and offers up great imaging and wide sweet spots. You’d think this was a budget-speaker problem, but it is not. Look at reviews of B&W’s D-series speakers, and many Focal models, as examples of expensive, well-received speakers that don’t excel at this.

Speakers which DO typically excel here include Revel and Magico. This is by no means a claim that you should buy Revel because B&W sucks. Buy what you like. I’m just pointing out that this limited-dispersion problem is not at all unique to Tekton. In fact, many other Tekton speakers don’t suffer this particular set of challenges.

In the case of the M-Lore, the tweeter has really amazingly good dynamic range. If I were the designer, I’d definitely want to ask whether I could lower the crossover by 1 kHz, which would give up a little power handling but improve the off-axis response. One big reason not to is crossover cost: I might have to add more parts to flatten the tweeter response well enough to extend its useful range. In other words, a higher crossover point may hide tweeter deficiencies. Again, Tekton is NOT alone if they did this calculus.
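For a sense of why a lower crossover helps off-axis, here is a crude rigid-piston estimate of where a mid-woofer starts to beam. The 13 cm effective cone diameter is an assumption for illustration, not a measured Tekton driver dimension:

```python
import math

C = 343.0  # speed of sound in air, m/s

def beaming_estimates(effective_diameter_m: float):
    """Crude rigid-piston estimates of where a cone driver starts to beam.
    ka = 1: the circumference equals one wavelength and directivity begins
    to narrow; by ka = 2 the driver is clearly directional off-axis."""
    a = effective_diameter_m / 2.0
    f_ka1 = C / (2 * math.pi * a)
    return f_ka1, 2 * f_ka1

if __name__ == "__main__":
    # Hypothetical mid-woofer with a ~13 cm effective cone diameter.
    f_ka1, f_ka2 = beaming_estimates(0.13)
    print(f"Directivity starts narrowing near {f_ka1:.0f} Hz (ka = 1)")
    print(f"Clearly directional by roughly {f_ka2:.0f} Hz (ka = 2)")
```

The further above that ka = 2 region the crossover sits, the bigger the off-axis hole the tweeter has to paper over.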

I’ve probably made a lot of omissions here, but I hope this helps readers think about speaker performance and costs in a more complete manner. The listening tests always matter more than the measurements, so finding reviewers with trustworthy ears is more important than following taste-makers who let the tools, which may not be properly used, judge the experience.

erik_squires

"If you all just learned how to properly test equipment so that only the fidelity is being evaluated, then these arguments would all go away.  Instead, you keep doing faulty testing, with all manner of mistakes and biases and arrive at conclusions that are not supported by any science or engineering."
What a nasty, arrogant, condescending piece of work you are.

My initial read on the comment paper is that at sufficiently high sampling rates for the FFT, the effect goes away, though the authors are using a windowed Fourier transform I am not fully familiar with, and I’m unsure of the implications of the free gamma parameter that they set to the variance of the initial pulse.

There's a rich literature on using accumulation methods in image processing to overcome Fourier uncertainty, and one implication might be that it's not so much a nonlinear effect as just stimulus accumulation in our cochlea and the neural systems that process the data.
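I don't know exactly which windowed transform the authors use, but if it's a Gaussian (Gabor-style) window, the relevant floor is the time-bandwidth product sigma_t * sigma_f = 1/(4*pi). Here's a minimal numpy sketch that just verifies that limit numerically; it's an illustration of the uncertainty bound itself, not a re-analysis of the paper:

```python
import numpy as np

fs = 48_000.0                       # sample rate, Hz
t = np.arange(-0.05, 0.05, 1 / fs)  # 100 ms time axis
sigma_t = 0.001                     # 1 ms Gaussian pulse (arbitrary choice)
g = np.exp(-t**2 / (2 * sigma_t**2))

# RMS duration, weighting time by the pulse energy |g|^2
p_t = g**2 / np.sum(g**2)
dt = np.sqrt(np.sum(p_t * t**2))

# RMS bandwidth, weighting frequency by the energy spectrum |G|^2
G = np.fft.rfft(g)
f = np.fft.rfftfreq(len(g), 1 / fs)
p_f = np.abs(G)**2 / np.sum(np.abs(G)**2)
df = np.sqrt(np.sum(p_f * f**2))

print(f"sigma_t * sigma_f = {dt * df:.4f}   (Gaussian limit 1/(4*pi) = {1 / (4 * np.pi):.4f})")
```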

I'll do a bit more digging but it looks increasingly like (a) might be untrue and (b) might also be untrue in my new syllogism.

I tested two Schiit Yggdrasils, finding design errors in them. The company disputed that, so a third person volunteered his unit. In doing so, he told me he had bought a Topping and it did not sound as good. He gave me the model number and the precise tracks he had used for that testing, and the fact that he had used Stax headphones. I own Stax headphones, that same Topping DAC, and the same music in high-res (what he had used).

The first thing I had to do was match levels, as the out-of-box levels were not the same, which would invalidate any such listening test. After I did that, the two DACs sounded identical in AB tests. The Topping cost 10% of what the Yggdrasil does.
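For anyone wanting to replicate that kind of level matching, the basic arithmetic is just comparing RMS levels of the same passage captured from each DAC and trimming the louder one. This little Python sketch is my own illustration (the file names are placeholders, not anything from the test described above); in practice you would apply the offset with the preamp or player volume rather than in software:

```python
import numpy as np
import soundfile as sf  # real library; the file names below are placeholders

def rms_db(x: np.ndarray) -> float:
    """RMS level of a capture, in dB relative to full scale."""
    return 20 * np.log10(np.sqrt(np.mean(np.square(x))))

# Hypothetical captures of the same passage played through each DAC.
dac_a, _ = sf.read("dac_a_capture.wav")
dac_b, _ = sf.read("dac_b_capture.wav")

offset_db = rms_db(dac_a) - rms_db(dac_b)
print(f"Level offset between DACs: {offset_db:+.2f} dB")

# Scale DAC A's capture so both sit at the same RMS level; even ~0.5 dB of
# mismatch tends to read as "better" rather than "louder" in a casual A/B.
dac_a_matched = dac_a * 10 ** (-offset_db / 20)
```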

I was also told that the Yggdrasil needs to warm up. So I left it on for days, measuring it along the way. Its performance never changed.

Again, I duplicated his listening tests to the letter, except that I was careful to match levels where he had not.

If you all just learned how to properly test equipment so that only the fidelity is being evaluated, then these arguments would all go away. Instead, you keep doing faulty testing, with all manner of mistakes and biases and arrive at conclusions that are not supported by any science or engineering.

Dude (facepalm), the fact that you sat around with headphones comparing that Schiit with something else....no, you have a lot to learn. For starters, I could show you a comparison of a couple of DACs, a good one and a crappy one I have in storage with one of my rigs (NOT HEADPHONES), and it is flipping night and day obvious how one of them produces a flatass soundfield and the other one doesn’t. You are too stuck in your hole with your headphones and SINAD for anything to...., no, I am not going to waste effort bothering to explain anything. I’ll pass a blind comparison 25/25 times or 50/50 times or however many flipping times (done it before) in my room (not in your garage) on the test tracks I recorded/will provide. While you chase the dumb didi SINAD, I’ll chase the software and DSP instead; that’s more meaningful to me.

On the same note, I certainly didn’t join this forum to try and flex intellectually all day against senior citizens from other lines of work....like you’ve been doing for pages. But, since that is all you seem to wanna do, I’ve hinted here before that I am a business owner. I own an engineering firm (fab/test floors, whatever dude); I’m in the business of producing precision electronics and electromechanical components for some entities. We use millions of dollars of test equipment, NDT, whatever, the likes of which you will never see or hear about in life. There is nothing you could possibly say that doesn’t sound like simpleton sht to me. I am sure you know about certain types of engg disciplines where you would get embarrassed/get schooled very quickly. If I started talking to you about high-F, high-V thermal-runaway whatever crap black-art circuits, nobody on this thread including you would have a clue. So, just simmer down with the flex. Do it on your forum instead.

Y’know, there are guys on my payroll too (I know your kind) who are these younger engg grunts that would talk just like you, possibly. "How could two Fing circuits measure the same but sound different?! Wait, wait, circuits have a sound?!?!" while some of the PhDs may at least think and try to keep their mouths shut. Once upon a time, I used to think that way a bit perhaps... But I got schooled by some audio overlords and it opened my mind. NO, you will never see anything like that in your EE electives or your goofy lil textbook.

An older guy like yourself...you should have had opportunities in life to wise up over time, to gather the humility to admit that there is sht that’s hard to explain, I’d think.... but you certainly haven’t, or it is this fake facade you’ve been putting up all day. Either way, I see right through it... I conclude that you have no field experience and it is a waste of time to try and say anything to you. Carry on, try and dazzle the Agon senior citizens some more with a few more of your simpleton charts.

@markwd "Since I dove in, I have to deep dive! Not definitive, but an interesting data point:"

- Well done, markwd: after all the prevarication and paltering, you finally found something that allows you to question the 2013 article. I cannot say I fully understand the full substance of it, but I agree it appears to provide a basis for disagreeing with the study, and measurement has bought itself some breathing room.

- In any case, have you tried diving deep enough into the other debate, about what high fidelity actually means? We shouldn’t be selective about diving now…you know, once we have realised we did actually have a dog in the fight, no? : )

 

In friendship - kevin