How Science Got Sound Wrong


I don't believe this has been posted before, but I found it quite interesting despite its technical aspect. I didn't post it to start a digital vs. analog discussion - we've beaten that horse to death several times. I play 90% vinyl, but I can still enjoy my CDs.

https://www.fairobserver.com/more/science/neil-young-vinyl-lp-records-digital-audio-science-news-wil...
artemus_5
Humans can generally hear a 'one inch' shift in the position of the phantom 'ping' sound between the speakers.

This equates to a perfected, zero-jitter timing change of 1/100,000th of a second, which in Nyquist terms means a clock and signal rate of at least 225 kHz, with zero jitter.
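A quick sanity check on that arithmetic (a sketch; it takes the quoted 1/100,000 s at face value as the smallest audible timing step and applies the Nyquist criterion naively):

```python
# Turn the quoted timing threshold into a Nyquist-style minimum sample rate.
dt = 1 / 100_000        # quoted just-audible timing change, in seconds (10 us)
f = 1 / dt              # frequency whose period equals that timing step: 100 kHz
nyquist_rate = 2 * f    # Nyquist: sample at >= twice the highest frequency
print(f"{dt * 1e6:.0f} us -> {f / 1e3:.0f} kHz -> sample at >= {nyquist_rate / 1e3:.0f} kHz")
```

This naive version lands at 200 kHz, the same order as the 225 kHz figure in the quote, though the exact mapping from a timing-difference threshold to a required sample rate is itself debatable.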


Yeah, and this is probably being really, really conservative.

Over the years I have learned that the most reliable speaker setup, in my room anyway, is to measure from the corners of each speaker to the side and front walls. It's all set up and fine-tuned first by ear, of course, but once that is done, out comes the tape measure. Real handy: if they get jostled vacuuming, laying down to clean connections, or whatever, it's real easy to put them exactly back where they were - no guessing, no doubt.

So anyway, what I have learned over the years: move even one speaker as little as 1/8" and the imaging starts to go. Sad to say how many so-called audiophiles roll their eyes at this. Well, too bad. It's their loss. Whatever you think you have, unless you are dead on, just that one (free!) tweak alone and it will be better.

So one inch to me is a gross error. One inch is so far off I would hear it in an instant. Something a smart-a-- co-worker unintentionally proved one night when he tried to prank me by moving things. By about one inch. I heard it - and figured out what it was and fixed it - so fast (under a minute!) he could not believe it.

So do the math on that one - it works out to mere microseconds of arrival-time difference. Whatever. The claim that people can hear a billionth of a second of jitter starts to make a lot more sense when you look at it this way.
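Doing that math is straightforward (a sketch; the only assumption is a speed of sound of about 343 m/s in room-temperature air):

```python
# Extra path length from moving a speaker -> extra sound travel time.
SPEED_OF_SOUND = 343.0          # m/s, in air at roughly 20 degrees C
INCH = 0.0254                   # meters per inch

for shift_in in (1 / 8, 1.0):   # speaker moved by 1/8 inch and by 1 inch
    dt = shift_in * INCH / SPEED_OF_SOUND
    print(f'{shift_in:>5}" shift -> {dt * 1e6:.1f} microseconds')
```

A 1/8-inch move is on the order of ten microseconds of path-time difference, and a full inch is roughly 74 microseconds - small, but in the microsecond rather than nanosecond range.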
Great article, and it makes sense to me. While I enjoy both formats, I enjoy vinyl more...
This year is coming to an end.

Is it time to start submitting "Post of the year" nominations?

This has to be one of the strong contenders.

"Do yourself a favor. Skim right past the loser wannabes - above and to follow, as night follows day- and appreciate those like me who thank you for posting this brilliant article."
This is not even a post. This is literature.
Microtime, as the article envisions it, is not a thing.

Interferometry and head/ear-related comb filtering (i.e., HRTF) are.

1/44,100 of a second is the sampling interval, not the timing precision of CD playback.
Neurons are not single gates. They integrate multiple inputs over time.
Again, you can like vinyl, but the article quoted by the OP won't stand up to much scrutiny.
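That distinction matters: the timing resolution of a properly reconstructed band-limited signal is far finer than the sample interval. A minimal sketch (assuming a single noiseless pure tone) encodes a 1-microsecond delay - about 1/23rd of a 44.1 kHz sample period - into the samples and recovers it from the samples alone:

```python
import math

FS = 44_100            # CD sampling rate, Hz
F = 1_000.0            # test tone frequency, Hz
N = FS                 # one second of samples: an integer number of tone periods
TAU = 1e-6             # delay to encode: 1 us, well under the ~22.7 us sample interval

# Sample the delayed tone at the CD rate
x = [math.sin(2 * math.pi * F * (n / FS - TAU)) for n in range(N)]

# Estimate the tone's phase by correlating against sin/cos references
i = sum(xn * math.sin(2 * math.pi * F * n / FS) for n, xn in enumerate(x))
q = sum(xn * math.cos(2 * math.pi * F * n / FS) for n, xn in enumerate(x))
phase = math.atan2(q, i)              # equals -2*pi*F*TAU for a delayed sine
tau_est = -phase / (2 * math.pi * F)  # recovered delay, seconds

print(f"encoded delay {TAU * 1e6:.3f} us, recovered {tau_est * 1e6:.3f} us")
```

The delay comes back as a tiny fraction of a sample period: the sample rate limits bandwidth, not timing precision.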
I think we should be asking the question, "How many samples per waveform are required to reduce the RMS error to below, say, 5%, which is the sort of error achieved during the heyday of the vinyl years?"

Some types of error may be more or less objectionable, but let's start simple. Let's just find out how much RMS error there is for a given sampling scheme.

Surprisingly enough, it's not that hard to calculate. But shockingly, nobody seems to bother.

To calculate, begin by observing that the Fourier theorem shows that every periodic function is built up as a sum of sine waves, so to consider music, all we have to consider are sine waves (a.k.a. pure tones). Further, it is not hard to compute the difference between a sine wave and its sampled-and-held value at any point, for any fixed number N of samples per waveform. You can approximate by slicing the waveform into N intervals and then calculating the difference at the midpoint of each interval.

It is also easy to square these differences and add them up. You could use calculus, but the above is an adequate approximation.

That is the essence of a computation yielding the RMS error of the sampling scheme per waveform.
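The midpoint recipe above takes only a few lines to sketch (a sketch, not a definitive implementation; the resulting sample count is quite sensitive to exactly how the error and the decoder are defined, so a simple version need not reproduce any particular published figure):

```python
import math

def rel_rms_error(n):
    """Relative RMS error of step-function (sample-and-hold) decoding of a
    pure tone with n samples per waveform, evaluated at interval midpoints."""
    err2 = sig2 = 0.0
    for k in range(n):
        held = math.sin(2 * math.pi * k / n)          # value held over interval k
        true = math.sin(2 * math.pi * (k + 0.5) / n)  # true value at the midpoint
        err2 += (true - held) ** 2
        sig2 += true ** 2
    return math.sqrt(err2 / sig2)

# Smallest n whose relative RMS error drops below 5%
n = 3
while rel_rms_error(n) > 0.05:
    n += 1
print(n, rel_rms_error(n))
```

Different error measures (worst-case instead of RMS, left-edge instead of midpoint evaluation, error relative to peak instead of RMS amplitude) push the threshold up or down considerably, which is worth keeping in mind when comparing figures.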

Returning to our question, the answer I get is 250 samples per waveform for step-function decoding. At 20 kHz, that means sampling at 5 MHz - with infinite precision, of course.

Exotic decoding algorithms can improve on this for pure tones, but how well do they work for actual music? I doubt anyone knows - certainly I've never seen even the first question, about samples per waveform, discussed. I think we should.