speakers for 24/96 audio


Is it correct to assume that 24/96 audio would be indistinguishable from CD quality when listened to through speakers with a -3dB point at 20kHz and a rapid high-frequency roll-off?

Or more precisely, that the only benefit comes from the shift from 16 to 24 bits, not the increased sample rate, as the higher-frequency content is filtered out anyhow?
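If it helps, here is a rough numerical sketch of what I mean, in Python/scipy. The 8th-order low-pass is just a made-up stand-in for a speaker that rolls off steeply above 20kHz (a real roll-off will differ), and the test signal is simply wideband noise:

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, resample_poly, welch

    rng = np.random.default_rng(0)
    fs = 96_000
    x = rng.standard_normal(8 * fs)              # 8 s of wideband noise at 96 kHz

    # Stand-in for the speaker: steep 8th-order low-pass near 20 kHz
    sos = butter(8, 20_000, fs=fs, output="sos")
    hi_path = sosfiltfilt(sos, x)                # 96 kHz fed straight to the "speaker"

    cd = resample_poly(x, 147, 320)              # 96k -> 44.1k (44100/96000 = 147/320)
    cd_path = sosfiltfilt(sos, resample_poly(cd, 320, 147))  # back to 96k, then "speaker"

    f, P_hi = welch(hi_path, fs, nperseg=1 << 14)
    _, P_cd = welch(cd_path, fs, nperseg=1 << 14)

    for lo, hi in [(100, 18_000), (24_000, 47_000)]:
        b = (f >= lo) & (f < hi)
        print(f"{lo / 1000:g}-{hi / 1000:g} kHz band: "
              f"96k path {10 * np.log10(P_hi[b].sum()):6.1f} dB, "
              f"44.1k path {10 * np.log10(P_cd[b].sum() + 1e-300):6.1f} dB")

With this stand-in, the two paths measure essentially identical below ~18kHz; whatever differs between them sits above the speaker's roll-off.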

Related to this, what advice would you have for a sub-$5k speaker set with good high-frequency capabilities for 24/96 audio?

thanks!
mizuno

Hi Al and Shadorne.

Thanks for your thoughtful responses. Everything you guys said makes sense to me, but I do have some additional thoughts...

07-04-11: Almarg
1)He has apparently established that listeners can reliably detect the difference between a single arrival of a specific waveform, and two arrivals of that waveform that are separated by a very small number of microseconds. I have difficulty envisioning a logical connection between that finding, though, and the need for hi rez sample rates. There may very well be one, but I don’t see it.

I believe Kunchur addresses this in this document, in which he says:

For CD, the sampling period is 1/44100 ~ 23 microseconds and the Nyquist frequency fN for this is 22.05 kHz. Frequencies above fN must be removed by anti-alias/low-pass filtering to avoid aliasing. While oversampling and other techniques may be used at one stage or another, the final 44.1 kHz sampled digital data should have no content above fN. If there are two sharp peaks in sound pressure separated by 5 microseconds (which was the threshold upper bound determined in our experiments), they will merge together and the essential feature (the presence of two distinct peaks rather than one blurry blob) is destroyed. There is no ambiguity about this and no number of vertical bits or DSP can fix this. Hence the temporal resolution of the CD is inadequate for delivering the essence of the acoustic signal (2 distinct peaks).

In essence, I understand him to be saying that the temporal resolution of human hearing is around 6μs, but the temporal resolution of the 44.1kHz sampling rate is around 11μs (half of the ~23μs sampling period). Since the temporal resolution of human hearing is better than the temporal resolution of 44.1 recordings, those recordings fail to accurately represent very brief signals that are both audible and musically significant. For example, Kunchur says:

In the time domain, it has been demonstrated that several instruments (xylophone, trumpet, snare drum, and cymbals) have extremely steep onsets such that their full signal levels, exceeding 120 dB SPL, are attained in under 10 μs…

He also suggests that the temporal resolution of 44.1 recordings might be inadequate to fully represent the reverberation of the live event:

A transient sound produces a cascade of reflections whose frequency of incidence upon a listener grows with the square of time; the rate of arrival of these reflections dN/dt ≈ 4πc³t²/V (where V is the room volume) approaches once every 5 μs after one second for a 2500 m³ room [2]. Hence an accuracy of reproduction in the microsecond range is necessary to preserve the original acoustic environment’s reverberation.
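Incidentally, the 5μs figure does follow from the quoted formula. A one-line check (my own, assuming c = 343 m/s for the speed of sound):

    import math

    c, V, t = 343.0, 2500.0, 1.0   # speed of sound (m/s), room volume (m^3), time (s)
    rate = 4 * math.pi * c**3 * t**2 / V
    print(f"{rate:.3g} reflections/s -> one every {1e6 / rate:.1f} us")

This gives about 2×10⁵ reflections per second one second after the transient, i.e. one every ~4.9μs, matching his claim.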

I’m not saying that these claims are true. I’m just trying to give you my understanding of Kunchur’s claims about the connection between human temporal resolution and the need for sampling rates higher than 44.1.
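For what it's worth, the peak-merging effect he describes is easy to reproduce numerically. Here is a minimal sketch (my own construction, not Kunchur's code): two unit impulses 5μs apart on a 1MHz grid, band-limited to 22.05kHz with an ideal brick-wall filter.

    import numpy as np

    fs = 1_000_000                        # 1 MHz analysis grid: 1 sample = 1 us
    N = 1 << 15
    x = np.zeros(N)
    x[N // 2] = 1.0                       # first pressure peak
    x[N // 2 + 5] = 1.0                   # second peak, 5 us later

    def count_peaks(v):
        # local maxima that rise above half the global maximum
        m = (v[1:-1] > v[:-2]) & (v[1:-1] > v[2:]) & (v[1:-1] > 0.5 * v.max())
        return int(m.sum())

    # Ideal brick-wall low-pass at 22.05 kHz -- the most that 44.1 kHz data can carry
    X = np.fft.rfft(x)
    X[np.fft.rfftfreq(N, 1 / fs) > 22_050] = 0.0
    y = np.fft.irfft(X, N)

    core = slice(N // 2 - 100, N // 2 + 100)
    print("peaks before band-limiting:", count_peaks(x[core]))   # 2
    print("peaks after band-limiting: ", count_peaks(y[core]))   # 1

The two impulses survive on the fine grid but merge into a single lobe roughly 45μs wide once everything above 22.05kHz is removed, which is just his point restated in code.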

07-04-11: Almarg
2)By his logic a large electrostatic or other planar speaker should hardly be able to work in a reasonable manner, much less be able to provide good reproduction of high speed transients, due to the widely differing path lengths from different parts of the panel to the listener’s ears. Yet clean, accurate, subjectively "fast" transient response, as well as overall coherence, are major strengths of electrostatic speakers. The reasons are fairly obvious – very light moving mass, that can start and stop quickly and follow the input waveform accurately; no crossover, or at most a crossover at low frequencies in the case of electrostatic/dynamic hybrids; freedom from cone breakup, resonances, cabinet effects, etc. So it would seem that the multiple arrival time issue he appears to have established as being detectable under certain idealized conditions can’t be said on the basis of his paper to have much if any audible significance in typical listening situations.

I think perhaps Kunchur does his own view a disservice by emphasizing the deleterious time-domain effects of speaker drivers with large surface areas, e.g. electrostatic speakers. It seems to me that those deleterious effects might be offset to a large extent by the very characteristics you mention, viz., light mass, minimalistic crossover, etc. But your objection does seem to cast doubt on the significance of the very brief time scales that Kunchur contends are audibly significant.

Having said that, the putative facts about jitter bear on this point in a somewhat paradoxical way. According to some authorities, such as Steve Nugent, jitter is audible at a time scale of PICOseconds. For example, Steve writes:

In my own reference system I have made improvements that I know for a fact did not reduce the jitter more than one or two nanoseconds, and yet the improvement was clearly audible. There is a growing set of anecdotal evidence that indicates that some jitter spectra may be audible well below 1 nanosecond.

That passage is from an article in PFO, which I know you are familiar with. I bring it up, not to defend Kunchur’s claims, but to raise another question that puzzles me:

If jitter really is audible on the order of PICOseconds, does that increase the plausibility of Kunchur's claim that alterations in a signal on the order of a few MICROseconds are audible?

Again, I don’t quite know how to make sense of all this. I’d be interested to hear your thoughts.

Bryon

Well, Bryon, that was a very interesting article. I'm not sure what to think after reading it... is this yet another investigation into a micro-problem that doesn't really affect music reproduction, or is it a significant factor? I certainly don't know. I can't even venture a guess.

Anyway, Kunchur admits to listening to cassettes. I haven't heard cassettes for many years, but 16/44 CDs must sound like a revelation by comparison. ;-)

Hi Bryon,

Your question about the audibility of jitter that is on a time scale far shorter than the temporal resolution of our hearing is a good one. The answer is that we are not hearing the nanoseconds or picoseconds of timing error itself. What we are hearing are the spectral components corresponding to the FLUCTUATION in timing among different clock periods (actually, among different clock half-periods, since both the positive-going and negative-going edges of S/PDIF and AES/EBU signals are utilized), and their interaction with the spectral components of the audio.

For example, assume that the worst case jitter for a particular setup amounts to +/- 1 ns. The amount of mistiming for any given clock period will fluctuate within that maximum possible 1 ns of error, with the fluctuations occurring at frequencies that range throughout the audible spectrum (and higher). That is all referred to as the "jitter spectrum," which will consist of very low level broadband noise (corresponding to random fluctuation) plus larger discrete spectral components corresponding to specific contributors to the jitter.

Think of it as timing that varies within that +/- 1 ns or so range of error, but which varies SLOWLY, at audible rates.

All of those constituents of the jitter spectrum will in turn intermodulate with the audio data, resulting in spurious spectral components at frequencies equal to the sums of and the differences between the frequencies of the spectral components of the audio and the jitter.
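If a numerical illustration helps, here is a quick sketch of that mechanism (my own toy example, with made-up numbers: a 10kHz tone sampled by an otherwise perfect converter whose sampling instants wander sinusoidally by +/-1ns at a 2kHz rate):

    import numpy as np

    fs, f0, fj = 96_000, 10_000, 2_000    # sample rate, audio tone, jitter rate (Hz)
    A = 1e-9                              # peak timing error: 1 ns
    N = 1 << 16

    t = np.arange(N) / fs
    # The converter samples the analog tone at slightly wrong instants:
    x = np.sin(2 * np.pi * f0 * (t + A * np.sin(2 * np.pi * fj * t)))

    spec = np.abs(np.fft.rfft(x * np.blackman(N)))
    f = np.fft.rfftfreq(N, 1 / fs)
    db = 20 * np.log10(spec / spec.max() + 1e-300)

    # Expect sidebands at f0 +/- fj, roughly 20*log10(pi*f0*A) ~ -90 dB down
    for target in (f0 - fj, f0, f0 + fj):
        i = int(np.abs(f - target).argmin())
        print(f"{f[i]:8.0f} Hz  {db[i]:7.1f} dB")

The spectrum shows the tone at 10kHz plus sidebands at 8kHz and 12kHz roughly 90dB down. Scale the jitter amplitude and the sidebands follow; change the jitter spectrum and the sideband structure changes with it. Note that nothing in the audible result depends on "hearing" the nanosecond error directly.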

If you haven't seen it, you'll find a lot of the material in this paper to be of interest (interspersed with some really heavy-going theoretical stuff, which can be skimmed over without missing out on the basic points):

http://www.scalatech.co.uk/papers/aes93.pdf

Malcolm Hawksford, btw, is a distinguished British academician who has researched and written extensively on audiophile-related matters.

One interesting point he makes is that the jitter spectrum itself, apart from the intermodulation that will occur between it and the audio, will typically include spectral components that are not only at audible frequencies, but that are highly correlated with the audio! He also addresses at some length the question of how much jitter may be audible.

So to answer your last question first, no, I don't think that the audibility of jitter on a nanosecond or picosecond scale has a relation to the plausibility of Kunchur's claim.

As far as point no. 1 in my previous post is concerned, yes I think that the quote you provided about closely spaced peaks being merged together does seem to provide a logical connection between his experimental results and a rationale for hi rez sample rates. It hadn't occurred to me to look at it that way. So that point would seem to be answered.

Best regards,
-- Al

Bryon,

I appreciate your questions. You are definitely curious enough to look into this and I commend you on your interest.

However, poor Kunchur seems a very confused individual.

His test simply shows how two pure tones can interfere with each other in a way that becomes audible. However, his conclusions are completely bogus. The listener is NOT hearing microsecond-scale time-domain effects. The listener is actually hearing changes in the combined resultant waveform, which has been altered by offsetting one source relative to the other (combined meaning both waves, including all room reflections).

As I explained, this will lead to TOTAL destructive interference of the primary direct signal as heard by the listener at an offset of 2.5 cm. This is like a signal that is TOTALLY out of phase. The direct sound will be inaudible and all the listener hears is the sound around the room (reflected sounds). Since we detect the direction of a sound from the relative timing of the wave front (or nerve bundle triggers) across each ear, we lose that ability when a signal is out of phase.

Poor Kunchur is conflating things in a bad way - this is bad science.

However, his remarks about speaker alignment and panels are partly valid. It is almost certain that large radiating surfaces can cause, at certain frequencies, the kind of interference he achieved in this experiment. This manifests itself in a speaker response with many suckouts across the frequency spectrum. In fact, the anechoic response of a large panel will look like a comb, with many total suckouts across the frequency range. Since most sounds are made up of many harmonics, this effect will not be complete, but on the whole it will lead to a larger, more diffuse soundstage, with some sounds imaging precisely and others more diffusely than with a point-source speaker. There is an audio tool called a flanger, used for electric guitar, that achieves a similar but even stronger effect.
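To put numbers on that comb: summing a signal with a single delayed copy of itself nulls every frequency whose half-period matches the delay. A quick sketch using the 2.5 cm offset from above (my own illustration; c = 343 m/s assumed):

    import numpy as np

    c = 343.0                  # speed of sound in air, m/s (assumed)
    d = 0.025                  # the 2.5 cm path-length offset discussed above
    tau = d / c                # ~73 us arrival-time difference

    # Equal direct signal plus one delayed copy: H(f) = 1 + exp(-j*2*pi*f*tau)
    f = np.linspace(100, 20_000, 200_000)
    H = np.abs(1 + np.exp(-2j * np.pi * f * tau))

    i = int(H.argmin())
    print(f"delay {tau * 1e6:.0f} us; deepest notch at {f[i]:.0f} Hz, |H| = {H[i]:.4f}")
    print(f"predicted first null c/(2d) = {c / (2 * d):.0f} Hz")

The first null lands near 6.9kHz, exactly where the 2.5 cm offset is half a wavelength. A large panel presents a continuum of such offsets, hence the comb of notches across the band.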

Also, jitter is not audible in the sense you describe. It is audible when non-random jitter, acting over many thousands or hundreds of thousands of samples, combines in a way that introduces new frequencies. We hear those new frequencies, created by the non-random modulation of the clock (random jitter is just white noise at very low, inaudible levels).

We are totally UNABLE to hear jitter effects on a few samples.

07-05-11: Almarg
...we are not hearing the nanoseconds or picoseconds of timing error itself. What we are hearing are the spectral components corresponding to the FLUCTUATION in timing among different clock periods...

That's what I suspected, Al, but I wasn't sure.

And thanks for your explanation of jitter. I was aware that jitter resulted in frequency modulation, but I didn't know that it was a kind of intermodulation distortion. Your explanation is much appreciated.

Shadorne - You may be right that Kunchur's methodology is flawed. I've read about a few other experiments on human temporal resolution with similar methodologies, but my memory of them is a little vague. In any case, I have a question about your observation that "Some sample rates are noted for being better than others for reducing audible jitter." I'd be interested to hear a technical explanation of why that is the case.

Finally, I have a general question about high resolution audio that anyone might be able to answer:

My understanding is that the principal advantage of larger bit depth is greater dynamic range. What is the principal advantage of higher sampling rates, if it is not better temporal resolution?
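(For what it's worth, the bit-depth half is easy to quantify with the standard rule of thumb of roughly 6dB of dynamic range per bit, DR ≈ 6.02·N + 1.76 dB:

    for bits in (16, 24):
        print(f"{bits}-bit: about {6.02 * bits + 1.76:.0f} dB theoretical dynamic range")

That gives ~98dB for 16-bit and ~146dB for 24-bit. It's the sample-rate half of the equation that I can't pin down.)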

Bryon