The invention of measurements and perception


This is going to be pretty airy-fairy. Sorry.

Let’s talk about how measurements get invented, and how this limits us.

One of the great works of engineering, science, and data is finding signals in the noise. What matters? Why? How much?

My background is in computer science, and a little in electrical engineering. So the question of what to measure to make systems (audio and computer) "better" is always on my mind.

What’s often missing in measurements is "pleasure" or "satisfaction."

I believe in math. I believe in statistics, but I also understand the limitations. That is, we can measure an attribute like "interrupts per second," "inflammatory markers," or Total Harmonic Distortion plus noise (THD+N).

However, measuring an attribute and understanding its outcome and desirability are VERY different things. Companies that can bridge that gap excel at creating business value. Like it or not, Bose and Harman excel (in their own ways) at figuring this out. What someone will pay for and how low a distortion figure measures are VERY different questions.
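To make "measuring an attribute" concrete, here is a minimal sketch of how a THD+N figure is typically derived: excite with a pure tone, then compare the power at the fundamental against everything else in the spectrum. The 1% second harmonic and the noise level are made-up numbers for illustration, not any real device's output.

```python
import numpy as np

fs = 48_000
f0 = 1_000                 # fundamental; chosen to land exactly on an FFT bin
N = fs                     # analyze one second
t = np.arange(N) / fs

rng = np.random.default_rng(1)
# hypothetical device output: fundamental + 1% second harmonic + a little noise
x = (np.sin(2 * np.pi * f0 * t)
     + 0.01 * np.sin(2 * np.pi * 2 * f0 * t)
     + 1e-4 * rng.normal(size=N))

spec = np.fft.rfft(x) / N
power = np.abs(spec) ** 2

k0 = f0 * N // fs                         # index of the fundamental bin
p_fund = power[k0]
p_rest = power.sum() - p_fund - power[0]  # everything else, excluding DC

thdn = np.sqrt(p_rest / p_fund)
print(f"THD+N ≈ {100 * thdn:.2f}%")       # ≈ 1%, dominated by the 2nd harmonic
```

Note what the number does and does not say: it summarizes deviation from a pure sine, but nothing in it tells you whether a listener would prefer this device over one that measures ten times worse.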

What is my point?

Specs are good. I like specs, I like measurements, and they keep makers from cheating (more or less). But there must be a link between measurements and listener preferences before we can attribute desirability, listener preference, or economic viability to them.

What is that link? That link is you. That link is you listening in a chair, free of ideas like price, reviews or buzz. That link is you listening for no one but yourself and buying what you want to listen to the most.

erik_squires
Two things. What value are measurements of anything if it sounds different in every room? And how can we measure audiophile goals like soundstage, air, and musicality?
A good example of this is the Red Book standard, set in the late ’70s/early ’80s for the then-emerging CD format. The standard was fine, but it took until the late ’90s to figure out that distortion in the time domain (jitter) was a major factor holding back the (at the time unmeasurable) enjoyment of CD playback. Once it was identified and measured, designers solved, or at least found ways to manage, much of this "new" type of distortion.

And to Geoff’s point above, IMO, the room accounts for at least 50% of the sound we hear from our systems.
erik_squires wrote: "A good example of this is the redbook standard set in the late 70s early 80s for the then emerging CD format. The standard was fine, but it took until the late 90s to figure out that distortion in the time domain (jitter) was a major factor"

Jitter is interesting. Yes, we can certainly point to it as one measurement that has improved over time, and Red Book playback has made an audibly big jump in performance in the last 10 years.

Is that enough? We never really proved it, and we don't actually know what is audible, or whether there are other parameters around jitter that matter. As far as I know, there isn't even agreement from manufacturer to manufacturer on exactly how jitter is measured.

So if jitter IS the problem ... what is inaudible?
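For intuition on why clock jitter shows up as distortion at all, here is a rough sketch with an arbitrary, hypothetical 1 ns RMS of random clock jitter (not a claim about any real transport). A timing error on a changing signal becomes an amplitude error proportional to the signal's slope, so for a sine the error RMS works out to roughly 2πf·σ_jitter/√2. None of this settles the real question here, which is how much of it is audible.

```python
import numpy as np

fs = 44_100            # Red Book sample rate
f = 1_000.0            # test tone, Hz
jitter_rms = 1e-9      # hypothetical 1 ns RMS clock jitter (illustrative only)

n = np.arange(fs)      # one second of samples
rng = np.random.default_rng(0)

t_ideal = n / fs
t_jittered = t_ideal + rng.normal(0.0, jitter_rms, size=n.size)

clean = np.sin(2 * np.pi * f * t_ideal)        # what should have been sampled
jittered = np.sin(2 * np.pi * f * t_jittered)  # what a jittery clock captures

err = jittered - clean
err_rms = np.sqrt(np.mean(err ** 2))

# small-error approximation: error ≈ slope × timing error,
# so for a full-scale sine, error RMS ≈ 2πf · σ_jitter / √2
theory = 2 * np.pi * f * jitter_rms / np.sqrt(2)
print(f"measured error RMS: {err_rms:.2e}, predicted: {theory:.2e}")
```

Note the scaling: the error grows with frequency, which is one reason jitter is often characterized with high-frequency test tones.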


Actually, jitter is a problem, just not the only problem. The industry knew it was a problem in the very early days; there was a lot of discussion about how much was too much. The fact is, the best jitter removal back then was not enough.
There are numerous ways of taking measurements in a room when one is putting something into production. But for the audiophile, you only need your ears. :-)
Jitter is not the root cause. Jitter is the result/manifestation of several independent issues/causes. Bit stuffing presumably occurs when errors during the optical read process can't be corrected by the Reed-Solomon error correction (CIRC).

What’s curious, though (as far as I know, and please, someone correct me if I’m wrong): when the input bit stream and the output bit stream are compared *under normal, uncorrected conditions*, there are very few errors. If that’s true, then why is it so audible?
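One possible resolution of that puzzle (my framing, not established in this thread): jitter is a timing error, not a data error, so a bit-for-bit comparison of input and output streams can come back nearly perfect even when sample timing at the converter is wrong. A sketch of such a comparison, with a made-up residual error rate purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_bits = 1_000_000
sent = rng.integers(0, 2, n_bits, dtype=np.uint8)

# hypothetical residual error rate after error correction: 1 bit in 100,000
received = sent.copy()
flips = rng.random(n_bits) < 1e-5
received[flips] ^= 1

ber = np.count_nonzero(sent != received) / n_bits
print(f"bit error rate: {ber:.1e}")
```

A test like this only catches data errors; it is blind to when each sample actually arrives at the DAC, which is exactly the dimension jitter lives in.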