Some thoughts on ASR and the reviews


I’ve briefly taken a look at some online reviews of budget Tekton speakers from ASR and YouTube. Both are based on Klippel quasi-anechoic measurements used to produce "in-room" simulations.

As an amateur speaker designer and a lover of graphs and data, I have some thoughts. I mostly hope this helps the entire A’gon community get a little more perspective on how a speaker builder would think about the data.

Of course, I’ve only skimmed the data I’ve seen, I’m no expert, and have no eyes or ears on actual Tekton speakers. Please take this as purely an academic exercise based on limited and incomplete knowledge.

1. Speaker pricing.

One ASR review spends an amazing amount of time and effort analyzing the ~$800 US Tekton M-Lore. That price compares very favorably with a full Seas A26 kit from Madisound, which runs around $1,700. I’m not sure these inexpensive speakers deserve quite the nit-picking they get here.

2. Measuring mid-woofers is hard.

The standard practice for analyzing speakers is called "quasi-anechoic" measurement. That is, we approximate the response the speaker would have in a room free of reflections or boundaries. You do this with very close measurements (within 1/2") of the individual drivers, blended together. There are a couple of ways this can be incomplete, though.

a - Midwoofers measure much worse this way than in a truly anechoic room. The 7" Scanspeak Revelators are good examples of this: the close-mic response is deceptively bad, but the 1 m in-room measurements smooth out a lot of the problems. If you took the close-mic measurements (as seen in the spec sheet) at face value, you’d design the wrong crossover.

b - Baffle step - As popularized and researched by the late, great Jeff Bagby, the effects of the baffle on the output need to be included in any whole speaker/room simulation, which of course also means the speaker should have this compensation built in when it is not a near-wall speaker. I don’t know enough about the Klippel simulation, but if this is not included you’ll get a bass-light experience compared to real life. The effect of baffle compensation is more bass, but an overall lower sensitivity rating. (A rough sketch of the usual rule of thumb follows below.)
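To put rough numbers on the baffle step, here is a minimal sketch using the commonly cited approximation f3 ≈ 115 / baffle width (in meters), with up to 6 dB of total transition. The baffle widths are hypothetical examples, not actual Tekton dimensions.

```python
# Rough baffle-step estimate using the common f3 = 115 / W rule of thumb,
# where W is the front baffle width in meters. Below f3 the speaker
# transitions from half-space (2*pi) to full-space (4*pi) radiation,
# losing up to 6 dB of apparent output unless the crossover compensates.

def baffle_step_f3(baffle_width_m: float) -> float:
    """Approximate center frequency (Hz) of the baffle-step transition."""
    return 115.0 / baffle_width_m

for width_in in (7.0, 9.0, 12.0):  # hypothetical baffle widths, in inches
    width_m = width_in * 0.0254
    print(f"{width_in:4.0f} in baffle -> step centered near "
          f"{baffle_step_f3(width_m):.0f} Hz")
```

A designer who compensates for this in the crossover trades a few dB of rated sensitivity for flatter in-room bass, which is exactly the trade discussed above.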

For both of those reasons, an actual in-room measurement is critical to assessing real speaker behavior. We may not all have the same room, but this is a great way to see the true mid-woofer response as well as the effects of any baffle-step compensation.

Looking at the quasi-anechoic measurements done by ASR and Erin, it _seems_ that these speakers are not compensated, which may be OK if close-wall placement is expected.

In either event, you really want to see the actual in-room response, not just the simulated response, before passing judgment. If I had to critique based strictly on the measurements and simulations, I’d 100% wonder whether a better design wouldn’t trade sensitivity for more bass, and the in-room response would tell me that.

3. Crossover point and dispersion

One of the most important choices a speaker designer makes is picking the -3 or -6 dB point for the high- and low-pass filters. A lot of things have to be balanced and traded off, including the cost of crossover parts.

Both of the reviews above seem to imply a crossover point that is too high for a smooth transition from the woofer to the tweeter. No speaker can avoid rolling off the treble as you go off-axis, but the best designs do so very evenly. This gives the best off-axis performance and offers up great imaging and wide sweet spots. You’d think this was a budget-speaker problem, but it is not: look at reviews of B&W’s D-series speakers, and many Focal models, for examples of expensive, well-received speakers that don’t excel at this.
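To make the dispersion point concrete, here is a back-of-envelope sketch using the common rule of thumb that a cone narrows its dispersion roughly where the wavelength approaches its effective diameter. The cone size and crossover point below are illustrative assumptions, not measured values from either review.

```python
# Rule-of-thumb beaming estimate: a piston driver starts to narrow its
# dispersion roughly where the wavelength equals its effective cone
# diameter (f_beam ~ c / d). Crossing over well above f_beam leaves an
# off-axis dip that no amount of on-axis EQ can fix.

C = 343.0  # speed of sound in air, m/s

def beaming_frequency(cone_diameter_m: float) -> float:
    """Frequency (Hz) where dispersion begins to narrow noticeably."""
    return C / cone_diameter_m

f_beam = beaming_frequency(0.13)  # hypothetical ~0.13 m effective cone
crossover_hz = 2800.0             # illustrative crossover point, Hz

print(f"Beaming begins near {f_beam:.0f} Hz")
if crossover_hz > f_beam:
    print(f"A {crossover_hz:.0f} Hz crossover sits in the beaming region, "
          "so the off-axis response narrows before the tweeter takes over.")
```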

Speakers which DO typically excel here include Revel and Magico. This is by no means a story that you should buy Revel because B&W sucks. Buy what you like. I’m just pointing out that this limited-dispersion problem is not at all unique to Tekton. In fact, many other Tekton speakers don’t suffer this particular set of challenges.

In the case of the M-Lore, the tweeter has really amazingly good dynamic range. If I were the designer, I’d definitely want to ask whether I could lower the crossover by 1 kHz, which would give up a little power handling but improve the off-axis response. One big reason not to is crossover cost: I might have to add more parts to flatten the tweeter response well enough to extend its useful range. In other words, a higher crossover point may hide tweeter deficiencies. Again, Tekton is NOT alone if they did this calculus. (The sketch below puts rough numbers on the power-handling side of that trade.)
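As a rough illustration of that power-handling trade-off, the sketch below integrates a pink-noise-like spectrum through an idealized 4th-order (24 dB/octave) high-pass at two candidate crossover points. The frequencies and slopes are assumptions for illustration, not M-Lore specifics.

```python
# Back-of-envelope tweeter power comparison: integrate a pink-noise-like
# spectrum (power density ~ 1/f) through an idealized 4th-order high-pass
# at two candidate crossover frequencies. Lowering the crossover sends
# more power to the tweeter, which is the trade-off described above.

import numpy as np

freqs = np.linspace(20.0, 20000.0, 20000)  # audio band, Hz
pink = 1.0 / freqs                         # pink-noise power density

def tweeter_power(fc_hz: float) -> float:
    """Relative power through a 4th-order Butterworth high-pass at fc_hz."""
    h2 = 1.0 / (1.0 + (fc_hz / freqs) ** 8)  # |H(f)|^2, 24 dB/oct slope
    return np.trapz(pink * h2, freqs)

p_stock = tweeter_power(2800.0)   # illustrative "stock" crossover
p_lower = tweeter_power(1800.0)   # crossover lowered by 1 kHz
print(f"Lowering 2.8 kHz -> 1.8 kHz raises tweeter power by "
      f"{10 * np.log10(p_lower / p_stock):.1f} dB")
```

On this crude model the long-term power difference is only about 1 dB, though excursion demands on the tweeter grow faster as the crossover drops, which is why the designer still has to weigh power handling against off-axis smoothness.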

I’ve probably made a lot of omissions here, but I hope this helps readers think about speaker performance and costs in a more complete manner. The listening tests always matter more than the measurements, so finding reviewers with trustworthy ears matters more than following taste-makers who let the tools, which may not be properly used, judge the experience.

erik_squires

I like to summarize online discussions to try to tease out what the central points and realizations are. Often they amount to the collective desires of the participants to simply participate, which is understandable. I read the Roon community for specific details on new Roon Ready certifications, the fate of MQA, and technical details about using MUSE. I read ASR for reviews of new (and old) products and how they objectively perform, as well as new insights about the science and engineering of audio systems.

Here at Audiogon, however, I just check in after I get a Friday summary list and really don't see much new information at all. We have committed listeners who tell long narratives about how trusted friends told them about a product, or how everyone should try some new cables, and how listening convinced them of this or that. But what we don't get is any real or actionable information beyond "If you liked Nordblost's Mjolgurniator III, just wait until you can trade up to MRT's Fusionator 3000!"

I've yet to discover something novel beyond the brief deep-dive that I participated in above (which was prompted by the prolific and occasionally challenging @mahgister) concerning how exactly listening and measurements might diverge...at least currently.

Now I might be biased slightly towards novel and actionable information that has some depth to it, based on my background and passions, but I am curious what other contributors get out of all this bashing and clashing, promoting and diminishing.

I'm curious why folks argue and contribute here, beyond the obvious commercial interests of dealers cultivating sales (a bit of a sad and fundamentally small market to be captive to, alas). I'm developing a book on the topic, so any insights/confessions/realizations are of interest to me!

@markwd  “Well, fair enough, but you have not demonstrated that human hearing exceeds those measurements for music listening purposes!”

  • I’ve not needed to demonstrate anything, markwd, it was already demonstrated in the test I’ve been trying to bring to your attention for the past three days ; )

@markwd “If we had just one great ABX test that showed me wrong, I would be thrilled because that would pave the way to something new.”

@markwd “All those dynamics would dance again and the mad scientists who brought the systems to life would be celebrated for rightly finding a path towards a new audio Xanadu”

  • those ‘mad’ scientists have been doing it for years already, giving us such an amazing variety of analogue and digital playback equipment, the mind boggles. We do already have an audio Xanadu! If only you’d set your rational side of measurement and signal fidelity aside for a moment to take in all the wonders of high fidelity with your amazing empirical and non-linear ears!

@markwd  “I'm certainly indoctrinated in the epistemic humility to be as careful as possible in assessing ideas, my own and those of others who hope but have not fully honed those hopes with the calm clarity of rationality.”

  • oh, that’s more than clear to me and everyone else here; and I’m glad you referred to it as epistemic, and not scientific, humility. Perhaps you could extend that humility to the other half of science you’ve so neglected - empirical humility is just as, if not more, important than epistemic, or rational, humility.

@markwd  “I’ll note also that I think you may be misinterpreting the Fourier uncertainty principle in this particular context, as I and Amir have mentioned to @mahgister in several contexts. The authors are showing that if you used Fourier analysis as a model for human hearing there are limits to its applicability because there is likely nonlinear bucketing that allows for discrimination of time/frequency in excess of what a linear system is capable of.”

  • in fact, markwd, I would wager you have not understood the Fourier uncertainty principle in its totality. The Fourier uncertainty principle cannot apply as a model for human hearing for the simple reason that it is merely there to explain the limitations of signal measurements, not the limitations of human hearing - i.e., measuring equipment, being linear, cannot exceed the limits of the uncertainty principle; human hearing, being non-linear, constantly does.

@markwd  “The speculation is that the fine acuity is derived from evolutionary pressures and the mechanics of it are due to the shape of the cochlea”

  • yes, the very shape which is believed to be the reason why human hearing surpasses the Fourier uncertainty principle; you’re preaching to the choir, but to promote falsehood, not truth - that is, measurements cannot match the human ability to hear the nuance of both frequency and timing simultaneously, at the levels of resolution music is about.

@markwd  “…but I can change "edge" to "newly found" to remove any stigma the term invokes!”

  • while you’re at it, you will want to remove the ‘little’ as well; there is nothing little about something that puts the measurement-versus-hearing issue to rest. Markwd, clarity of communication is everything 😔

 

Markwd, for more, please refer to my upcoming reply to amir’s question to me.

 

In friendship - kevin

@kevn

Weird misunderstanding going deep!

(1) There is no ABX test that shows that a measurably transparent audio system is distinguishable from another based on additional properties related to nonlinear perception capabilities.

(2) I still think you are misunderstanding the thrust of the paper: they are arguing that one way of modeling human hearing has been to map it isomorphically to a linear frequency breakdown of the signal that is spatially spread within the cochlea (see Kunchur, etc. for additional details). The difficulty is that the tapering of the cochlea (at least) leads to nonlinear phenomena. The FUP is a mathematical idealization that limits the simultaneous resolution of time and frequency for a linear system, and they use it to explain how nonlinear systems can overcome this resolution limit in this case (the bound itself is written out below).
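For reference, this is the Gabor limit: any linear time-frequency analysis of a signal obeys

$$ \Delta t \,\Delta f \;\ge\; \frac{1}{4\pi} \approx 0.0796. $$

Subjects beating this bound constrains linear models of the cochlea; it says nothing, by itself, about the transparency of a playback chain.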

(3) Per my previous points about simulation as the gold standard, here’s the skeleton of your argument:

(a) Human hearing surpasses the ability of linear Fourier systems to resolve micro-phenomena in timing/frequency.

(b) Since FFT measurements use linear Fourier systems, they may present measurements of audio systems that do not include these micro-phenomena.

(c) Therefore, using human hearing to design audio systems may achieve improved results over the FFT results shown by measurements.

(d) And, in conclusion, show me any such realizations via ABX tests that demonstrate that a designer was successfully able to tune a system to, in fact, achieve (c).

(e) But, per my previous posts, there is the opportunity to also improve the measurement apparatus to account for the discrepancy or to develop an impactful theory about how nonlinear cochlear phenomena might add to the music listening experience. These do not exist and therefore there is no path yet.

Well, I think I’ve repeated myself 3-4 times!

@amir_asr “Please point out in the link where it says audio measurements are not able to keep up with the human ear:”

  • nice to hear from you once again, amir. Here we go, highlighted in bold below -

**For the first time, physicists have found that humans can discriminate a sound’s frequency (related to a note’s pitch) and timing (whether a note comes before or after another note) more than 10 times better than the limit imposed by the Fourier uncertainty principle.** Not surprisingly, some of the subjects with the best listening precision were musicians, but even non-musicians could exceed the uncertainty limit. The results rule out the majority of auditory processing brain algorithms that have been proposed, since only a few models can match this impressive human performance.

The researchers, Jacob Oppenheim and Marcelo Magnasco at Rockefeller University in New York, have published their study on the first direct test of the Fourier uncertainty principle in human hearing in a recent issue of Physical Review Letters.

The Fourier uncertainty principle states that a time-frequency tradeoff exists for sound signals, so that the shorter the duration of a sound, the larger the spread of different types of frequencies is required to represent the sound. Conversely, sounds with tight clusters of frequencies must have longer durations. **The uncertainty principle limits the precision of the simultaneous measurement of the duration and frequency of a sound.**

  • I’ve put as much of it into context as possible. The first highlight establishes that human hearing can outperform the limit set by the uncertainty principle by more than ten times. The second highlight simply describes that the accuracy of simultaneous measurement of both frequency and timing is limited by the Fourier uncertainty principle. If you study the first statement in relation to the second, it is clear that measurements currently cannot explain what is heard by the human ear. As with the Heisenberg uncertainty principle at the subatomic scale, the smaller, or more nuanced, particles or sound information get, the more we run into a limit on what we can currently measure, because all our current measuring instruments are linear, meaning they operate sequentially, or in discrete packets.
  • At the subatomic scale, and its equivalent in relation to music in its every nuance, it’s impossible to tie down the location of any particle (or specific frequency) in relation to its speed (or timing) because of the absolutely continuous nature of movement. We can do so for a car, or even a golf ball, because there are so many points in the huge space of a car or golf ball to tie a location to at any one moment in time. But the moment we get to the scale of a single unrelenting point, there is no possible way to rationally tie its location to its movement, because even before the instant we have identified its location, it will have moved. There would be a range of points, a range of uncertainty, as to where that point could be; hence the limit. It is only when we get to broader strokes, bigger items, grander scales, that the limit doesn’t apply, obviously, since the measurement of precise location can be sloppy and the point will still be somewhere in the space of the object. Now, it could be said that Fourier uncovered the principle of uncertainty before Heisenberg, who then formulated it in relation to quantum mechanics and popularised it. But the vital matter is that any kind of measurement currently known to us is still limited by the uncertainty principle.
  • The uncertainty principle applies to acoustics and music, not merely audio signals, in the deepest complexity and greatest nuance that music is. As such, when someone says they hear something that measurements do not indicate, they may not be blindly led by confirmation bias - because human hearing is non-linear, meaning we hear in a continuum and not by way of sequential little jumps, we are able to detect nuance that no instrument, limited as it is by linearity, can. At the scales we are discussing, of the tiniest moments of transition in relation to singular frequencies, the human ear still understands frequency simultaneously with timing in ways no instrument can measure or record.
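  • To put rough numbers on “more than 10 times better” (these values are illustrative, not the paper’s measured data): linear time-frequency analysis obeys the Gabor bound Δt·Δf ≥ 1/(4π) ≈ 0.0796, so a listener who simultaneously resolves, say, Δt = 2 ms and Δf = 3 Hz reaches a product of 0.006, roughly thirteen times below the limit.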

In friendship - kevin.