The invention of measurements and perception


This is going to be pretty airy-fairy. Sorry.

Let’s talk about how measurements get invented, and how this limits us.

One of the great works of engineering, science, and data is finding signals in the noise. What matters? Why? How much?

My background is in computer science, and a little in electrical engineering. So the question of what to measure to make systems (audio and computer) "better" is always on my mind.

What’s often missing in measurements is "pleasure" or "satisfaction."

I believe in math. I believe in statistics, but I also understand the limitations. That is, we can measure an attribute, like "interrupts per second" or "inflammatory markers" or Total Harmonic Distortion plus noise (THD+N).

However, measuring an attribute and understanding outcome and desirability are VERY different things. Companies that can bridge the two excel at creating business value. For instance, like it or not, Bose and Harman excel (in their own ways) at finding this out. What someone will pay for vs. how low a distortion figure measures are VERY different things.

What is my point?

Specs are good. I like specs, I like measurements, and they keep makers from cheating (more or less), but there must be a link between measurements and listener preferences before we can attribute desirability, listener preference, or economic viability.

What is that link? That link is you. That link is you listening in a chair, free of ideas like price, reviews or buzz. That link is you listening for no one but yourself and buying what you want to listen to the most.

erik_squires
@teo: "I like to remind people that math is an excellent tool, but to remember that math exists no where in the known universe except as that - in a human’s head." 

If it weren't for math, you wouldn't have a head.
stevecham

"If it weren't for math, you wouldn't have a head."

You appear to be worshipping at the wrong altar. Even the best math is not God or the Creator of Life. You are a confused, disoriented, misinformed person.
Whoa! What is this - a convention of English majors?

In audio the most logical approach is to assume everything is true and nothing is true.

“Because it’s what I choose to believe.” Dr. Elizabeth Shaw, Prometheus
How about a little philosophy?

There are a variety of philosophical approaches to decide whether an observation may be considered evidence; many of these focus on the relationship between the evidence and the hypothesis. Carnap recommends distinguishing such approaches into three categories: classificatory (whether the evidence confirms the hypothesis), comparative (whether the evidence supports a first hypothesis more than an alternative hypothesis) or quantitative (the degree to which the evidence supports a hypothesis).[10] Achinstein provides a concise presentation by prominent philosophers on evidence, including Carl Hempel (Confirmation), Nelson Goodman (of grue fame), R. B. Braithwaite, Norwood Russell Hanson, Wesley C. Salmon, Clark Glymour and Rudolf Carnap.[11]

Based on the philosophical assumption of the Strong Church-Turing Universe Thesis, a mathematical criterion for evaluation of evidence has been conjectured, with the criterion having a resemblance to the idea of Occam’s Razor that the simplest comprehensive description of the evidence is most likely correct. It states formally, "The ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized."[12]

According to the posted curriculum for an "Understanding Science 101" course taught at University of California - Berkeley: "Testing hypotheses and theories is at the core of the process of science." This philosophical belief in "hypothesis testing" as the essence of science is prevalent among both scientists and philosophers. It is important to note that this view does not take into account all of the activities or scientific objectives of all scientists. When Geiger and Marsden scattered alpha particles through thin gold foil, for example, the resulting data enabled their experimental adviser, Ernest Rutherford, to very accurately calculate the mass and size of an atomic nucleus for the first time. No hypothesis was required. It may be that a more general view of science is offered by physicist Lawrence Krauss, who consistently writes in the media about scientists answering questions by measuring physical properties and processes.

Concept of scientific proof

While the phrase "scientific proof" is often used in the popular media,[13] many scientists have argued that there is really no such thing. For example, Karl Popper once wrote that "In the empirical sciences, which alone can furnish us with information about the world we live in, proofs do not occur, if we mean by ’proof’ an argument which establishes once and for ever the truth of a theory".[14][15] 

Albert Einstein said: The scientific theorist is not to be envied. For Nature, or more precisely experiment, is an inexorable and not very friendly judge of his work. It never says "Yes" to a theory. In the most favorable cases it says "Maybe," and in the great majority of cases simply "No." If an experiment agrees with a theory it means for the latter "Maybe," and if it does not agree it means "No." Probably every theory will someday experience its "No" - most theories, soon after conception.[16]

I studied a formula for jitter and how it relates to human perception some years back. I'd have to go look it up as I don't recall it exactly; the limiting number is related to the number of bits and the sample rate. Increasing the number of bits and/or the sample rate makes it more critical. The reason it is so audible is that it affects the zero crossings of music, something to which the ear is especially sensitive. The limit for the Redbook standard (44.1 kHz, 16 bits) is less than 50 picoseconds. Clearly, we have a ways to go to make jitter a nonissue.
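Since the exact formula isn't recalled above, here is a sketch of one commonly cited rule of thumb (an assumption on my part, not necessarily the formula meant): keep the timing error below one LSB for a full-scale sine at the highest audio frequency, which gives dt < 1 / (pi * f * 2^N). It shows the dependence described above: more bits or a higher top frequency tightens the limit. Note this particular criterion lands at a few hundred picoseconds for 16 bits; stricter criteria in the literature push it lower.

```python
import math

def jitter_limit(bits, f_max_hz):
    """Rule-of-thumb max jitter (seconds) that keeps the sampling
    error of a full-scale sine at f_max_hz below one LSB:
    dt < 1 / (pi * f * 2^bits)."""
    return 1.0 / (math.pi * f_max_hz * 2 ** bits)

# Redbook-ish numbers: 16 bits, ~20 kHz top end
print(jitter_limit(16, 20_000))   # on the order of 2.4e-10 s (a few hundred ps)

# More bits tightens the requirement, as the post says
print(jitter_limit(24, 20_000))   # roughly 256x smaller
```

The function names and the choice of 20 kHz as the top frequency are illustrative assumptions; the point is only the scaling with bit depth and frequency.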

I can tell you the reason we have jitter problems, besides the fact that basic CD clocks are not all that accurate: the sample clock is encoded in the data stream. The clock is not a separate signal path from the data, which makes jitter an inherent problem in the system. Whether this was known or considered an issue when the CD system was originally conceived is a good question.

When Sony designed the CD, a number of weaknesses were built into the design due to the size limitations of the disc. Sony's president, whose name I have forgotten, wanted it to fit easily into a car stereo and also wanted Vivaldi's Four Seasons to fit on a single disc without flipping it over or inserting another. This limited both the sample rate and the number of bits, neither to any advantage, since they had to fit all the music onto a small platter. The original concept was to have a CD the same size as an LP, since the stores were already shelved and geared for that size.

To be fair though, at the time the CD was designed, our technology and semiconductor processes were really pushed to develop a good-quality, low-distortion, inexpensive DAC at 16 bits and 44.1 kHz. I believe 18 bits and 50 kHz was about the limit, given the cost constraints. I sure wish we had that in a CD, though!

As for measuring jitter and tuning-fork accuracy, we have time base standards that can easily resolve better than 1x10^-14 seconds - way beyond what a human can perceive. They are pricey, but they can do it. Gosh, the digital time base standard I have on my bench, which I bought for RIAA measurements, measures to less than 1x10^-6 seconds and is still in calibration, and that was surplus at $50!