It isn't the bits, it's the hardware


I have been completely vindicated!

Well, at least there is an AES paper that leaves the door open to my observations. As some of you who follow me know (and some of you follow me far too closely), I've said for a while that the performance of DACs over the last ~15 years has gotten remarkably better. Specifically, Redbook (CD) playback is so much better than it was in the past that high-resolution music and playback no longer make the economic sense that they used to.

My belief about why high-resolution music sounded better has been completely altered. I used to believe we needed the data. Over the past couple of decades my thinking has changed radically: now I believe WE don't need the data, the DACs needed it. That is, the problem was never that we needed 30 kHz performance; the problem was always that the DAC chips themselves performed differently at different resolutions. Here is at least some proof supporting this possibility.

Stereophile published a link to a meta-analysis of high-resolution playback, and while its authors propose a number of issues and solutions, two things stood out to me: the section on hardware improvements and the one on new filters (which are, in my mind, the same topic):



4.2
The question of whether hardware performance factors, possibly unidentified, as a function of sample rate selectively contribute to greater transparency at higher resolutions cannot be entirely eliminated.

Numerous advances of the last 15 years in the design of hardware and processing improve quality at all resolutions. A few, of many, examples: improvements to the modulators used in data conversion affecting timing jitter, bit depths (for headroom), dither availability, noise shaping and noise floors; improved asynchronous sample rate conversion (which involves separate clocks and conversion of rates that are not integer multiples); and improved digital interfaces and networks that isolate computer noise from sensitive DAC clocks, enabling better workstation monitoring as well as computer-based players. Converters currently list dynamic ranges up to ~122 dB (A/D) and 126–130 dB (D/A), which can benefit 24b signals.

Now if I hear "DAC X performs so much better with 192/24 signals!" I don't get excited. I think the DAC is flawed.
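
To put those converter numbers in perspective, here is a quick back-of-the-envelope calculation (my own arithmetic in Python, not something from the paper), comparing the theoretical dynamic range of 16- and 24-bit audio against the 126-130 dB the best listed D/A hardware achieves:

# Theoretical dynamic range of an ideal N-bit quantizer: ~6.02*N + 1.76 dB
def ideal_dynamic_range_db(bits):
    return 6.02 * bits + 1.76

print(f"16 bit: {ideal_dynamic_range_db(16):.0f} dB")   # ~98 dB
print(f"24 bit: {ideal_dynamic_range_db(24):.0f} dB")   # ~146 dB, well beyond the 126-130 dB listed for real D/A hardware

In other words, even the best silicon falls well short of what 24 bits could theoretically carry; the hardware, not the word length, is the ceiling.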
erik_squires
Not really curve-fitting, but okay to think about it that way. In a digital representation, the spectrum is mirrored around 0 Hz and around the sampling rate. Oversampling shifts the effective sample rate so that the images of the base spectrum (which itself does not change) move from being centered around 44.1 kHz to being centered around 384 kHz. Being, say, only 20 kHz wide, a digital filter can easily remove most artifacts above 20 kHz, with a simple analog filter taking out the rest.
The other benefit is that spreading out the quantization noise lowers the in-band noise floor.
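
A rough numpy sketch of both effects (purely illustrative; not how any particular DAC chip implements them):

import numpy as np
from scipy import signal

fs = 44_100        # original sample rate
ratio = 8          # oversample 8x, to 352.8 kHz (same idea as the 384 kHz example above)

t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 1_000 * t)        # 1 kHz test tone

# Zero-stuffing raises the sample rate but leaves spectral images of the
# baseband signal sitting around multiples of the old 44.1 kHz rate.
stuffed = np.zeros(len(x) * ratio)
stuffed[::ratio] = x

# A digital low-pass at ~20 kHz removes those images; only a gentle analog
# filter is then needed, since the remaining images now sit hundreds of kHz up.
lp = signal.firwin(511, 20_000, fs=fs * ratio)
y = signal.lfilter(lp, 1.0, stuffed) * ratio     # y is the 352.8 kHz signal; gain of 'ratio' restores amplitude lost to zero-stuffing

# Noise-floor side benefit: the same quantization noise power is spread over
# an 8x wider bandwidth, so the part left in the audible band drops by
# roughly 10*log10(ratio) dB.
print(f"In-band quantization noise improvement at {ratio}x: {10 * np.log10(ratio):.1f} dB")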
@heaudio123

You said:

Not really curve-fitting, but okay to think about it that way.

Actually that's exactly how it works for upsampling, but different upsampling algorithms work differently. With the advent of cheap compute, Bezier curves are cheap and easy to do. 
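
For example, here is a toy Python sketch of the curve-fit idea, using a cubic spline instead of Beziers (any smooth interpolator illustrates the point; this is not how any particular player actually implements it):

import numpy as np
from scipy.interpolate import CubicSpline

fs_in, fs_out = 44_100, 176_400            # hypothetical 4x upsample
t_in = np.arange(64) / fs_in
x = np.sin(2 * np.pi * 1_000 * t_in)       # a short stretch of a 1 kHz tone

# Fit a smooth curve through the original samples, then read it back out on
# the denser output grid. No new information is created, just more points on
# the same curve.
curve = CubicSpline(t_in, x)
t_out = np.arange(0, t_in[-1], 1 / fs_out)
y = curve(t_out)

print(f"{len(x)} samples in -> {len(y)} samples out")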


Oversampling shifts the effective sample rate so that the images of the base spectrum (which itself does not change) move from being centered around 44.1 kHz to being centered around 384 kHz. Being, say, only 20 kHz wide, a digital filter can easily remove most artifacts above 20 kHz, with a simple analog filter taking out the rest.


I didn’t say "oversampling."

I said "upsampling" and they are not the same thing, which is why your post is arguing against something that was not actually argued.

Please see this primer:

https://www.audioholics.com/audio-technologies/upsampling-vs-oversampling-for-digital-audio


Best,

E
From a purely technical standpoint, oversampling can apply to D/A conversion, not just A/D conversion, so in that sense you can use upsampling, oversampling, or sample rate conversion to a higher frequency interchangeably. Feel free to validate that with DAC data sheets that discuss oversampling.


But looking more closely at the (poorly written) paper linked, which tries to contrast a commonly used term, oversampling, with one practically made up, at least in this case (upsampling), and then never really defines upsampling except as, in effect, asynchronous sample rate conversion, another well-understood term, I am not surprised by the confusion.

erik_squires: "Actually that’s exactly how it works for upsampling, but different upsampling algorithms work differently. With the advent of cheap compute, Bezier curves are cheap and easy to do. "

I think you are missing a key element of how a typical asynchronous sample rate converter with inherent oversampling works, namely that the first step is an oversampling stage (typically fractional delay filters), which gives the curve-fit a smoother curve to work with over a smaller number of samples. Doing this keeps the spurious frequency components higher up in frequency, allowing for easier final filtering.
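
A crude two-stage sketch of that flow in Python (toy code with made-up rates; real ASRC silicon uses long polyphase/fractional-delay filter banks, not np.interp):

import numpy as np
from scipy import signal

fs_in, fs_mid, fs_out = 44_100, 705_600, 110_000   # 16x internal oversampling, hypothetical asynchronous output clock

t_in = np.arange(4096) / fs_in
x = np.sin(2 * np.pi * 1_000 * t_in)

# Stage 1: integer oversampling with a polyphase FIR. The samples are now so
# densely spaced that the waveform between them is very smooth, and the
# spectral images sit way up near 705.6 kHz where they are easy to filter.
up = signal.resample_poly(x, fs_mid // fs_in, 1)

# Stage 2: the asynchronous step. Because the intermediate signal is so dense,
# even a simple local interpolation onto the unrelated output clock introduces
# very little error.
t_mid = np.arange(len(up)) / fs_mid
t_out = np.arange(0, t_mid[-1], 1 / fs_out)
y = np.interp(t_out, t_mid, up)

print(f"{len(x)} samples at {fs_in} Hz -> {len(y)} samples at {fs_out} Hz")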
Here’s an interesting article I ran across at Benchmark Media; I quoteth the relevant part for this conversation:

An examination of converter IC data sheets will reveal that virtually all audio converter ICs deliver their peak performance near 96 kHz. The 4x (176.4 kHz and 192 kHz) mode delivers poorer performance in many respects.


The full article:

https://benchmarkmedia.com/blogs/application_notes/13127453-asynchronous-upsampling-to-110-khz

This again supports my hypothesis that the converters themselves perform differently; it’s not just the data.
erik_squires
... with upsampling, you are not generating more data ...
Correct.
There’s no more clarity or resolution, or harmonics ...
Not necessarily, although if present, it would not be a consequence of more data, but more likely attributable to filtering, as others have noted.

Your mistake here is confusing correlation with causation, a common audiophile logical error.