Some thoughts on ASR and the reviews


I’ve briefly taken a look at some online reviews of budget Tekton speakers from ASR and YouTube. Both rely on Klippel quasi-anechoic measurements to produce "in-room" simulations.

As an amateur speaker designer, and lover of graphs and data, I have some thoughts. I mostly hope this helps the entire A’gon community get a little more perspective on how a speaker builder would think about the data.

Of course, I’ve only skimmed the data I’ve seen, I’m no expert, and have no eyes or ears on actual Tekton speakers. Please take this as purely an academic exercise based on limited and incomplete knowledge.

1. Speaker pricing.

One ASR review spends an amazing amount of time and effort analyzing the ~$800 US Tekton M-Lore. That price compares very favorably with a full Seas A26 kit from Madisound, around $1,700. I mean, I’m not sure these inexpensive speakers deserve quite the nit-picking done here.

2. Measuring mid-woofers is hard.

The standard practice for analyzing speakers is called "quasi-anechoic." That is, we pretend to measure in a room free of reflections or boundaries. You do this by taking very close measurements (within 1/2") of each driver and blending them together. There are a couple of ways this can be incomplete, though.

a - Mid-woofers measure much worse this way than in a truly anechoic room. The 7" Scanspeak Revelators are good examples of this. The close-mic response is deceptively bad, but the 1 m in-room measurements smooth out a lot of problems. If you took the close-mic measurements (as seen in the spec sheet) as correct, you’d design the wrong crossover.

b - Baffle step - As popularized and researched by the late, great Jeff Bagby, the effects of the baffle on the output need to be included in any whole speaker/room simulation, which of course also means the speaker should have this compensation built in when it is not a near-wall speaker. I don’t know enough about the Klippel simulation, but if this is not included you’ll get a bass-light experience compared to real life. The effect of baffle step compensation is more bass, but a lower overall sensitivity rating.

For both of those reasons, an actual in-room measurement is critical to assessing actual speaker behavior. We may not all have the same room, but this is a great way to see the actual mid-woofer response as well as the effects of any baffle step compensation.
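To make the baffle step concrete, here is a toy first-order shelf model of the effect (a numpy sketch only; the 115/width rule of thumb is a common approximation, and the ~10" baffle width is illustrative, not any Tekton dimension):

```python
import numpy as np

# Toy baffle step model: a speaker radiates into full space at low
# frequencies (sound wraps around the cabinet) and into half space at
# high frequencies, a ~6 dB shelf. Common rule of thumb for the
# transition: f3 ~ 115 / baffle_width_in_meters. Numbers illustrative.

def baffle_step_db(f_hz, baffle_width_m):
    f3 = 115.0 / baffle_width_m                      # transition frequency, Hz
    # first-order shelf: -6 dB at DC, 0 dB well above f3
    h = (0.5 + 1j * f_hz / f3) / (1.0 + 1j * f_hz / f3)
    return 20 * np.log10(np.abs(h))

freqs = np.array([20.0, 100.0, 500.0, 5000.0, 20000.0])
for f, db in zip(freqs, baffle_step_db(freqs, 0.25)):   # ~10" wide baffle
    print(f"{f:7.0f} Hz: {db:+5.1f} dB")
```

A crossover that compensates this shelf buys back the low-frequency balance at the cost of up to ~6 dB of rated sensitivity, which is exactly the trade-off described above.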

Looking at the quasi-anechoic measurements done by ASR and Erin, it _seems_ that these speakers are not compensated, which may be OK if close-wall placement is expected.

In either event, you really want to see the actual in-room response, not just the simulated response, before passing judgment. If I had to critique based strictly on the measurements and simulations, I’d 100% wonder whether a better design wouldn’t trade sensitivity for more bass, and the in-room response would tell me that.

3. Crossover point and dispersion

One of the most important choices a speaker designer has is picking the -3 or -6 dB point for the high and low pass filters. A lot of things have to be balanced and traded off, including cost of crossover parts.
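To see what those -3/-6 dB points mean in practice, here is a numpy sketch of a textbook 4th-order Linkwitz-Riley crossover; the 2 kHz corner is arbitrary for illustration, not the M-Lore’s actual filter:

```python
import numpy as np

# 4th-order Linkwitz-Riley (LR4) = two cascaded 2nd-order Butterworth
# sections. Each leg is -6 dB at the crossover frequency, and the two
# legs sum to a flat magnitude response. The 2 kHz corner is arbitrary.

def lr4_legs(f, fc):
    s = 1j * f / fc                            # normalized complex frequency
    den = s**2 + np.sqrt(2) * s + 1            # 2nd-order Butterworth denominator
    return (1 / den) ** 2, (s**2 / den) ** 2   # LR4 lowpass, highpass

fc = 2000.0
f = np.array([250.0, 2000.0, 16000.0])
lp, hp = lr4_legs(f, fc)
for fi, l, h in zip(f, lp, hp):
    print(f"{fi:6.0f} Hz  LP {20*np.log10(abs(l)):+7.1f} dB  "
          f"HP {20*np.log10(abs(h)):+7.1f} dB  "
          f"sum {20*np.log10(abs(l + h)):+5.2f} dB")
```

The flat sum is why LR4 is such a popular target; steeper passive slopes protect the tweeter better but need more crossover parts, which is part of the cost trade-off mentioned above.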

Both of the reviews above seem to imply a crossover point that is too high for a smooth transition from the woofer to the tweeter. No speaker can avoid rolling off the treble as you go off-axis, but the best designs do so very evenly. This gives the best off-axis performance and offers up great imaging and wide sweet spots. You’d think this was a budget-speaker problem, but it is not. Look at reviews of B&W’s D series speakers and many Focal models for examples of expensive, well-received speakers that don’t excel at this.

Speakers which DO typically excel here include Revel and Magico. This is by no means a story that you should buy Revel because B&W sucks. Buy what you like. I’m just pointing out that this limited-dispersion problem is not at all unique to Tekton. In fact, many other Tekton speakers don’t suffer this particular set of challenges.

In the case of the M-Lore, the tweeter has really amazingly good dynamic range. If I were the designer, I’d definitely want to ask if I could lower the crossover by 1 kHz, which would give up a little power handling but improve the off-axis response. One big reason not to is crossover cost: I might have to add more parts to flatten the tweeter’s response well enough to extend its useful range. In other words, a higher crossover point may hide tweeter deficiencies. Again, Tekton is NOT alone if they did this calculus.
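The dispersion argument can be made concrete with the textbook rigid-piston model of a cone. The 7 cm effective radius (ballpark for a 7" mid-woofer) and the two candidate crossover frequencies below are illustrative guesses, not measurements of any Tekton driver:

```python
import numpy as np

# Rigid-piston directivity: off-axis level relative to on-axis is
# 2*J1(k*a*sin(theta)) / (k*a*sin(theta)), so a cone narrows its beam
# as frequency rises. Radius and frequencies are illustrative guesses.

def bessel_j1(x):
    # integral form: J1(x) = (1/pi) * integral_0^pi cos(t - x*sin(t)) dt
    t = np.linspace(0.0, np.pi, 4001)
    return np.sum(np.cos(t - x * np.sin(t))) * (t[1] - t[0]) / np.pi

def piston_offaxis_db(f_hz, radius_m, theta_deg, c=343.0):
    ka = 2 * np.pi * f_hz / c * radius_m
    x = ka * np.sin(np.radians(theta_deg))
    if x < 1e-9:
        return 0.0                       # on-axis reference
    return 20 * np.log10(abs(2 * bessel_j1(x) / x))

for fc in (1800.0, 2800.0):              # two candidate crossover points
    print(f"{fc:5.0f} Hz, 30 deg off-axis: "
          f"{piston_offaxis_db(fc, 0.07, 30.0):+5.1f} dB")
```

At the lower candidate crossover the woofer has lost noticeably less output at 30 degrees, so the handoff to a wide-dispersion tweeter is smoother off-axis; that is the whole argument for crossing lower.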

I’ve probably made a lot of omissions here, but I hope this helps readers think about speaker performance and cost in a more complete manner. The listening tests always matter more than the measurements, so finding reviewers with trustworthy ears matters more than following taste-makers who let the tools, which may not be properly used, judge the experience.

erik_squires

@amir_asr Please point out in the link where it says audio measurements are not able to keep up with the human ear:

  • Nice to hear from you once again, amir. Here we go, highlighted in bold below -

For the first time, physicists have found that humans can discriminate a sound’s frequency (related to a note’s pitch) and timing (whether a note comes before or after another note) more than 10 times better than the limit imposed by the Fourier uncertainty principle. Not surprisingly, some of the subjects with the best listening precision were musicians, but even non-musicians could exceed the uncertainty limit. The results rule out the majority of auditory processing brain algorithms that have been proposed, since only a few models can match this impressive human performance.

The researchers, Jacob Oppenheim and Marcelo Magnasco at Rockefeller University in New York, have published their study on the first direct test of the Fourier uncertainty principle in human hearing in a recent issue of Physical Review Letters.

The Fourier uncertainty principle states that a time-frequency tradeoff exists for sound signals, so that the shorter the duration of a sound, the larger the spread of different types of frequencies is required to represent the sound. Conversely, sounds with tight clusters of frequencies must have longer durations. The uncertainty principle limits the precision of the simultaneous measurement of the duration and frequency of a sound.

  • I’ve put as much of it into context as possible. The first highlight determines that human hearing can outperform, by up to ten times, the limit set by the uncertainty principle. The second highlight in bold simply describes that the accuracy of simultaneous measurement of both frequency and timing is limited by the Fourier uncertainty principle. If you study the context of the first statement in relation to the second, it is clear that measurements currently cannot explain what is heard by the human ear. As with the Heisenberg principle of uncertainty at the subatomic scale, the smaller, or more nuanced, particles or sound information get, there is a limit to what we can currently measure, because all our current measuring instruments are linear, meaning they operate sequentially, or in discrete packets.
  • At the subatomic scale, and its equivalent in relation to music in its every nuance, it’s impossible to tie down the location of any particle (or specific frequency) in relation to its speed (or timing) because of the absolutely continuous nature of movement. We can do so in relation to a car, or even a golf ball, because there are so many points in the huge space of a car or that golf ball to tie a location to at any one moment in time. But the moment we get into scales of that single unrelenting point, there is no possible way to rationally address its location with its movement, because even in the instant we have identified its location, it will have moved. There would be a range of points, a range of uncertainty, as to where that point could be, hence the limit. It is only when we get to broader strokes, bigger items, grander scales, that the limit doesn’t apply, obviously, since the measurement of precise location can be sloppy; it will still be somewhere in the space of the object. Now, it could be said that Fourier uncovered the principle of uncertainty before Heisenberg, who then formulated it in relation to quantum mechanics and popularised it. But the vital matter is that any kind of measurement currently known to us is still limited by the uncertainty principle.
  • The uncertainty principle applies in acoustics and music, and not merely audio signals, in the deepest complexity and greatest nuance that music is. As such, when someone says they hear something that measurements do not indicate, they may not be blindly led by confirmation bias - because human hearing is non-linear, meaning we hear in a continuum and not by way of sequential little jumps, we are able to detect nuance that no instrument can, being limited by linearity. At the scales of what we are discussing, of the tiniest moments of transition in relation to singular frequencies, the human ear still understands frequency simultaneously with timing in ways no instrument can measure or record.

In friendship - kevin.

@amir_asr 

Now amir, I trust there still is enough of the rationalist in you to back down in the face of a freshly discovered truth. From previous exchanges, I know well how cleverly you hunker down to prevaricate, conflate and, as nonoise puts it so well, use sophistry of language, graphs, readings, and…well, measurements, to deflect, twist and palter your way out of critical discourse, so I now put it to you plainly: belief forms such a vital part of our inner system of existence that its collapse can sometimes create drastic change which a mind may not be able to accept. In the face of an unacceptable new truth, one of three things can happen. The first results in such a blow to the original belief that the individual in question is unable to reconcile the fresh truth with a way forward, and decides to end it all in suicide. In the second outcome, the individual succumbs to denial, hunkering down in aggressive statement after restatement of his/her belief and the false processes that validate that belief. But there is a third.

Now, it is beyond argument that linear testing equipment cannot accurately measure frequency simultaneously with time past a certain limit. 

It is also beyond argument that human hearing can exceed the Fourier limit of uncertainty, at times by a factor of ten.

This puts everything you have arrogantly stood for since you started asr into the bin of falsehood. 

Your measurements will always have their place, to inform and educate, but every imperious and contemptuous statement you ever made of those who have depended on their basic hearing, never mind developed listening skills; everything you did with your measurements to support your beliefs; every single listening test you have ever done; every statement you ever made in conclusion of your tests; everything….has been false.

I hope I know you well enough that you will not succumb to the first outcome of collapsed belief, given the immensity of this new truth. 

And you may well, all history considered, hunker down and irrelevantly once again refer to linear measurements to argue against logical concepts of uncertainty that involve non-linearity.

Or you could choose the third outcome, which is one of acceptance - to graciously admit there are some things you may not have considered, in your religious fervour to be always proven right; that you are human, just like the rest of us, and might have made a rare but egregious mistake; and that you ask to take some time out to consider the weight of true science and the logic you are fighting against. This third outcome will be welcomed, even if there will be many aggrieved audiophiles who bore the weight of your contempt or indoctrination all these years. The third outcome will be welcomed for the simple fact that you are intelligent, and you do fight for your beliefs (even if too arrogantly), but mostly for the fact that you do make a good contribution to all audiophiles with your measurements as a good, if not brilliant, technician. And my deeper hope is that you could actually grow to serve science, in all its amazing duality of rationalism and empiricism.

I, for one, would love to read one day about your having invented a measuring machine that exceeds the Fourier uncertainty principle.

 

In friendship - kevin

@kevn Well, very expansive but we actually can develop systems to do such measurements using precisely the same approach that nature uses: nonlinear systems. It's not terrifically mysterious; we do it all the time in optical systems that shift frequencies just like the heterodyning that is described in the paper. I will admit that the mathematics is quite challenging based on experience. Being nonlinear we sometimes have to use things like spectral analysis (wow, strangely familiar) to look at solution families.

But the problem is that no one has actually successfully applied any of this to designing audio equipment! Or at least no one has demonstrated that to be the case!

@amir_asr “Please point out in the link where it says audio measurements are not able to keep up with the human ear:

  • nice to hear from you once again, amir. Here we go, highlighted in bold below -”

@kevn 

There is absolutely nothing in there about audio measurements in general, or about them being worse than human hearing. You have cut and pasted unrelated things.

As I have explained, many audio measurements are done without any Fourier analysis. SINAD, SNR, THD+N vs. frequency, frequency response, etc. are all done simply by measuring voltages and levels. No transform of any kind.

When we do use the Fourier transform in measurements, we can choose any length and arrive at frequency resolution far better than human hearing. When doing so, we are not at all interested in timing, as the input is constant.
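That point about FFT length is easy to demonstrate: bin spacing is fs/N, so the capture length alone sets how finely frequency is resolved. A quick numpy sketch with illustrative numbers:

```python
import numpy as np

# FFT bin spacing is fs/N. Two tones only 2 Hz apart blur into one peak
# in a 0.1 s capture but separate cleanly in a 4 s capture.
# All numbers here are illustrative.

fs = 48000
f1, f2 = 1000.0, 1002.0

def count_peaks(n):
    """FFT an n-sample capture of the two tones; count distinct peaks
    near 1 kHz (local maxima above half the region's maximum)."""
    t = np.arange(n) / fs
    x = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)
    spec = np.abs(np.fft.rfft(x * np.hanning(n)))
    lo, hi = int(980 * n / fs), int(1020 * n / fs) + 1
    seg = spec[lo:hi]
    return sum(
        1 for i in range(1, len(seg) - 1)
        if seg[i] > seg[i - 1] and seg[i] >= seg[i + 1]
        and seg[i] > 0.5 * seg.max()
    )

for seconds in (0.1, 4):
    n = int(fs * seconds)
    print(f"{seconds} s capture ({fs / n:.2f} Hz/bin): "
          f"{count_peaks(n)} peak(s) resolved")
```

With 4 seconds of data the bins are 0.25 Hz apart and the two tones are trivially separated, which is the sense in which measurement resolution outruns ear resolution for steady signals.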

As @markwd has properly explained, the main usefulness of this study is in developing models of human hearing and how they need to take into account its non-linearity in this regard. It is not in any way, shape or form about the usefulness of audio equipment measurements.

I explained all of this in detail before. Please don't keep repeating the same thing by copying stuff from the article, which by the way, is NOT the paper itself.

I, for one, would love to read one day about your having invented a measuring machine that exceeds the Fourier uncertainty principle.

A "measuring machine" can be built to mimic human hearing and produce the same results as that study.  But this has nothing to do with measuring audio equipment.  There, we are not trying to analyze human hearing but the transparency of a piece of equipment. 

The audio equipment is NOT attempting to analyze what it is hearing. Nor are its measurements. As such, none of this study applies to the analysis of audio gear or its measurements.

There is only one specific case in audio where we want to show both timing and frequency.  That is the waterfall/CSD plot.  I include that in every one of my speaker measurements.  Take this review of Genelec 8361A (a superb studio monitor):

 

Notice how the frequency and time are presented at the same time. Depending on the number of points used, we can make either the X axis higher resolution, or the Y, or balance the two. Depending on what phenomenon we are interested in, we optimize one or the other. As a general rule, I highly recommend people not to read too much into this specific measurement, as it can vary that way.
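For readers unfamiliar with how a waterfall/CSD is built: you slide the start of the analysis window forward through the impulse response and re-run the FFT for each slice. A minimal numpy sketch on a synthetic impulse response (the 3 kHz ringing is invented for illustration, nothing to do with the Genelec’s actual data):

```python
import numpy as np

# How a waterfall/CSD trades time against frequency: slide the start of
# the analysis window through the impulse response and re-run the FFT.
# The impulse response below is synthetic: a click plus an invented,
# slowly decaying 3 kHz resonance (illustration only).

fs = 48000
n = 4096
t = np.arange(n) / fs
ir = np.zeros(n)
ir[0] = 1.0                                                      # the "click"
ir += 0.3 * np.exp(-t / 0.004) * np.sin(2 * np.pi * 3000 * t)    # ringing tail

def slice_db_at(ir, start, f_hz):
    """Level at f_hz for an FFT slice starting 'start' samples in."""
    seg = ir[start:] * np.hanning(len(ir) - start)
    spec = np.abs(np.fft.rfft(seg, n=n))        # zero-pad to a fixed length
    return 20 * np.log10(spec[int(round(f_hz * n / fs))] + 1e-12)

for start in (0, 48, 96, 192):                  # 0, 1, 2, 4 ms into the decay
    print(f"t = {start / fs * 1e3:3.0f} ms   3 kHz: "
          f"{slice_db_at(ir, start, 3000):+6.1f} dB")
```

The resonance loses only a couple of dB per slice while a clean driver would drop like a stone; that slowly decaying ridge is exactly what the waterfall plot displays.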

Outside of this one example (whose information you can extract from others), there are no other audio measurements where we are trying to simultaneously look at time and frequency resolution. It is always the latter that we care about, meaning we can highly optimize for frequency resolution, blowing away human acuity by a mile. Look at the review of this Schiit Vidar 2 amplifier:

And this multitone test that uses FFT:

Your ear has no prayer of hearing those tiny spikes. It simply hears them as background noise, reducing practical dynamic range. This is because I have used a whopping 256,000 points to make that measurement, allowing incredible resolution that is able to show those spikes. The test signal keeps repeating, so we don't care about its timing.
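The 256k-point idea scales down to a small demonstration of FFT processing gain; the spur level, frequencies, and random seed below are illustrative, not taken from the Vidar 2 measurement:

```python
import numpy as np

# FFT processing gain in miniature: broadband noise spreads over N/2
# bins, so a steady tone far below the total noise power still pokes up
# clearly in a long FFT. Spur level and seed are illustrative only.

rng = np.random.default_rng(0)
fs = 48000
n_long, n_short = 1 << 18, 4096           # 262,144-point vs 4,096-point FFT
k_spur = 6000                             # exact bin of the long FFT
f_spur = fs * k_spur / n_long             # ~1.1 kHz test spur

# steady spur ~29 dB below the total noise power
t = np.arange(n_long) / fs
x = 0.05 * np.sin(2 * np.pi * f_spur * t) + rng.normal(0.0, 1.0, n_long)

long_spec = np.abs(np.fft.rfft(x))
short_spec = np.abs(np.fft.rfft(x[:n_short]))
k_short = round(f_spur * n_short / fs)    # nearest bin in the short FFT

# Long capture: noise spreads over 131,072 bins, so the spur should
# stand well proud of the per-bin noise floor as the tallest bin.
print("long FFT: spur is tallest bin: ", np.argmax(long_spec[1:]) + 1 == k_spur)
# Short capture: 64x less processing gain; the same spur should be
# buried in the per-bin noise.
print("short FFT: spur is tallest bin:", np.argmax(short_spec[1:]) + 1 == k_short)
```

This is the mechanism behind the multitone plot: the long FFT drops the per-bin noise floor far below where any ear integrates noise, so spikes inaudible in practice are still plainly visible on the graph.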