IM Distortion, Speakers and the Death of Science


One topic that often comes up is perception vs. measurements.

"If you can't measure it with common, existing measurements it isn't real."

This idea is, and always will be, flawed. Mind you, maybe what you perceive is not worth $1, but this is not how science works. I'm reminded of how many doctors and scientists fought against modernizing polio interventions, and how only recently did the treatment for stomach ulcers change radically thanks to the curiosity of a pair of Australian researchers, a physician and a pathologist.

Perception precedes measurement. In between perception and measurement there is (always) a translation into visual data. Let's take an example.

You are working on telephone technology shortly after Bell invents the telephone. You hear that one type of transducer sounds better than another. Why is that? Well, you have to figure out some way to see it (literally), via a scope, a charting pen, something that tells you in an objective way why they are different and allows you to set a standard or goal and move towards it.

This person probably did not set out to measure all possible things. Maybe the first thing they decided to measure was distortion, or perhaps frequency response. After visualizing the raw data, the scientist then has to decide what the units are and how to express differences. Let's say it is distortion. In theory, there could have been many different ways to measure distortion, such as (Vrms − expected Vrms) per Hz. Depending on the engineer's needs at the time, that might have been a perfectly valid way to characterize the output.
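To make that concrete, here is a minimal sketch of what such an ad-hoc metric might look like, assuming swept-sine measurements of output voltage. The frequencies, voltages, and the metric itself are purely illustrative; they are not an actual published standard.

```python
# Purely illustrative sketch of the hypothetical "(Vrms - expected Vrms) / Hz"
# metric described above. All values and names here are assumptions.

test_frequencies_hz = [100.0, 1_000.0, 10_000.0]  # swept test tones (assumed)
measured_vrms       = [1.02, 0.98, 0.91]          # measured output, volts RMS (assumed)
expected_vrms       = [1.00, 1.00, 1.00]          # ideal output, volts RMS (assumed)

for f, vm, ve in zip(test_frequencies_hz, measured_vrms, expected_vrms):
    deviation_per_hz = (vm - ve) / f              # one of many possible ways to express error
    print(f"{f:>8.0f} Hz: deviation = {deviation_per_hz:+.2e} V/Hz")
```

The point is not that this particular ratio is useful, only that an engineer with a specific problem could reasonably have settled on it.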

But here's the issue. This may work for this engineer solving this particular problem, and we may even add it to the canon of common measurements, but we are by no means done.

So, when exactly are we done? At 1 measurement? 2? 5? 30? The answer is that we are not. There are, for instance, several common measurements for speakers which I believe reviewers should perform more often:

- Compression
- Intermodulation (IM) Distortion
- Distortion

and yet we do not. IM distortion is kind of interesting because I had heard about it before from M&K's literature, but it reappeared for me on the blog of Roger Russell (http://www.roger-russell.com), formerly of McIntosh. I can't find the blog post, but apparently they used IM distortion measurements to compare the audibility of woofer changes quite successfully.

Here's a great example of a new measurement being put to use and tied to a sonic characteristic. Imagine the before and after. Before using IM, maybe only harmonic distortion would have been used. They were of course measuring impedance, frequency response, and simple harmonic distortion, but Roger and his partner could hear something different that was not expressed in those measurements, so they invented a new use for IM here. That invention is, in my mind, actual audio science.
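For readers curious about what an IM measurement involves, here is a minimal sketch of a SMPTE-style two-tone test. The tone pair, the toy nonlinearity standing in for the device under test, and the simplified sideband sum are my own assumptions for illustration, not Roger Russell's or McIntosh's actual procedure.

```python
# Minimal sketch of a simplified SMPTE-style two-tone IM distortion measurement.
# The nonlinearity below is a made-up stand-in for a real woofer; in practice
# the output would come from a microphone capture of the speaker instead.
import numpy as np

fs = 96_000                      # sample rate, Hz
t = np.arange(fs) / fs           # one second of samples
f1, f2 = 60.0, 7_000.0           # two-tone pair, low tone at 4x the level of the high tone
stimulus = 0.8 * np.sin(2 * np.pi * f1 * t) + 0.2 * np.sin(2 * np.pi * f2 * t)

# Toy nonlinear "device under test": a little 2nd- and 3rd-order curvature.
output = stimulus + 0.05 * stimulus**2 + 0.02 * stimulus**3

spectrum = np.abs(np.fft.rfft(output * np.hanning(len(output))))
freqs = np.fft.rfftfreq(len(output), d=1 / fs)

def level_at(f_target, half_width=5.0):
    """Peak spectral magnitude within a small window around f_target."""
    window = (freqs > f_target - half_width) & (freqs < f_target + half_width)
    return spectrum[window].max()

carrier = level_at(f2)
# Sum the first two sideband pairs f2 +/- n*f1 (the intermodulation products).
sidebands = sum(level_at(f2 + n * f1) + level_at(f2 - n * f1) for n in (1, 2))
print(f"IM distortion (simplified SMPTE-style): {100 * sidebands / carrier:.2f} %")
```

The interesting part is that the sidebands around the high tone are exactly the kind of by-product a woofer change can alter even when simple harmonic distortion figures barely move.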

The opposite of science would have been to say that "frequency response, impedance, and distortion" are the three audible characteristics, forever. Nelson Pass working with the distortion profile, comparing the audible results, and saying "this is an important feature" is also science. He's throwing out the normal distortion ratings and creating a whole new set of target behaviors based on his experiments. Given the market acceptance of his very expensive products, I'd say he's been damn good at this.

What is my point to all of this? Measurements in the consumer literature have become complacent. We've become far too willing to accept the limits of measurements from the 1980s and have failed to develop new standard ways of testing. As a result, we have devolved into camps: those who say the 1980s measurements are all we need, those who eschew measurements altogether, and very little being done to show us new ways of looking at complex behaviors. Some areas where I believe measurements should be improved:

- The effects of vibration on solid-state equipment
- Capacitor technology
- Interaction of linear amps with cables and speaker impedance

We have become far too happy with this stale condition, and, for the consumers, science is dead.
erik_squires
Many years ago, more than fifteen, I read an article in Stereophile. It was about the analysis of songs that became hits, or something like that. Music that became successful. If I remember correctly, and I really do not remember the details, it was some computer program that analyzed music and found certain patterns in successful works. It turned out Norah Jones’ Come Away With Me album, out around that time, checked those boxes too, despite not sounding anything like the others when you actually listened to it.

I am sure such things have developed much more since then, and I am not 100% sure I got it all right, but that was the gist of it.

EDIT: https://www.stereophile.com/news/013105HSS/index.html?qt-related_posts=0

https://www.theguardian.com/music/2005/jan/17/popandrock
But its inherent problem is similar to what medical researchers call "evidence-based medicine": it results in a backward-looking bias that precludes real innovation.

It's refreshing to realize that even with the advent of massive data analysis tools like HSS, the randomness of the marketplace still can't be controlled.

I think AI will complement human beings, not replace them. As in my brain-scan example, AI will help doctors better diagnose early signs of cancer, but the doctor will have the final say in the matter.
*smirk* Regarding TP, has any one of you heard of a washlet?

As long as that remains ’outside’ of being Bluetoothed, we’re vaguely safe in that ’environment’.....

What one ought to be concerned about is an AI’d dildo.

In that scenario....we become obsolete.

No worms, either....*L*

Fear the Future....;)....pleasant dreams, goodnight....
As I understand it, AI is all about recognizing patterns and being able to learn from mistakes when provided with new information. Right away the advantages over humans should be obvious.
"able to learn from mistakes when provided with new information"
That's not the same as "creativity". Even if AI can learn, it can only learn within the confines of the given "algorithm". An AI that was programmed to read brain scans can't learn how to cook, since cooking was not part of the original program.

That's the main difference between AI and humans. AI learning is not the same as human learning.