Upsampling: Truth vs. Marketing


Has anyone done a blind A/B test of the upsampling capabilities of a player? If so, what was the result?

The reason I ask is that all the players and converters that do support upsampling go from 44.1 to 192, and that is just plain wrong.

This would add a huge amount of interpolation error to the conversion and should sound like crap by comparison.
I understand why manufacturers don't go to the logical 176.4 kHz: once again, they would have to write more software.

All in all, I would like to hear from users who think their player sounds better playing Redbook (44.1) upsampled to 192. I have never come across a sample rate converter chip that does this well sonically, and if one exists, it is truly a silver bullet. Then again... 44.1 should only be upsampled to 88.2 or 176.4, unless you can first go to many GHz and then downsample to 192; even then you will have interpolation errors.
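For what it's worth, the ratio arithmetic behind that complaint is easy to check. Here is a small sketch in Python using nothing player-specific, just exact fractions: 44.1 to 88.2 or 176.4 is a clean integer multiple, while 44.1 to 96 or 192 reduces to an awkward ratio, so every output sample has to be interpolated.

```python
from fractions import Fraction

# Exact resampling ratios from Redbook 44.1 kHz -- plain arithmetic,
# not any particular chip or player.
for target_hz in (88_200, 176_400, 96_000, 192_000):
    ratio = Fraction(target_hz, 44_100)
    kind = "integer multiple" if ratio.denominator == 1 else "non-integer ratio"
    print(f"44.1 kHz -> {target_hz / 1000:.1f} kHz: {ratio} ({kind})")

# 88.2 kHz is exactly 2x and 176.4 kHz exactly 4x, while 96 kHz reduces to
# 320/147 and 192 kHz to 640/147 -- the non-integer cases are where the
# interpolation worry above comes from.
```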
izsakmixer
Bombay: Your own description answers the questions that you raised, i.e. "On a historical note, Philips is the co. that is to be credited or discredited with the concept of upsampling. The original idea at Philips Research Labs was to somehow get that analog filter order lower & that transition band less steep. In the original redbook spec, the transition band is 20 kHz-22.05 kHz. Upsampling was the answer from an engineering perspective & from a cost perspective. They really didn't care about the sonic effects back then."
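To put a rough number on the filter-steepness point in that quote, here is a small sketch of my own (not anything from Philips) that uses SciPy's Kaiser-window estimate as a stand-in for "how complex the filter has to be." The 96 dB attenuation target is just an assumed figure for 16-bit audio; the exact numbers matter less than how sharply the required order falls once the images are pushed out to 88.2 kHz.

```python
from scipy.signal import kaiserord

def taps_needed(fs_hz, passband_hz, stopband_hz, atten_db=96):
    """Kaiser-window FIR length estimate for a given transition band.

    Used purely as a proxy for how steep (complex) the reconstruction
    filter must be; the original argument is about analog filters."""
    width = (stopband_hz - passband_hz) / (fs_hz / 2)  # normalized to Nyquist
    numtaps, _beta = kaiserord(atten_db, width)
    return numtaps

# Plain Redbook: images start at 22.05 kHz, so the filter must die between
# 20 kHz and 22.05 kHz -- a brutally steep transition band.
print(taps_needed(44_100, 20_000, 22_050))

# After 4x oversampling the first image sits near 88.2 kHz, so the filter
# can roll off gently between 20 kHz and 88.2 kHz.
print(taps_needed(176_400, 20_000, 88_200))
```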

By playing games with the actual cut-off frequency and Q of the filtering, OR by removing the majority of the filtering, you reduce the amount of roll-off, phase shift and distortion in the treble region. As far as oversampling and error correction go, that simply equates to more tampering that the machine itself is doing with the signal, and/or noise that it is generating within the power supply and support circuitry.

In effect, error correction is "somewhat" like negative feedback. As such, Audio Note feels that small errors aren't as much of a negative as the problems that result from trying to correct them. Between the lack of oversampling and their approach to filtering, many people seem to agree with the sonic results that they've achieved. As a side note, Moncrieff covered error correction in IAR many years ago. Sean

Mathematically, there are no differences between upsampling and oversampling. Upsampling is basically a marketing term, and it is NOT coincidental that it was conjured up during the redbook lull prior to the DVD-A format agreements. Really, what is so special about 96kHz or 192kHz? Why not 88.2kHz or 176.4kHz? For that matter, why not 352.8kHz or 705.6kHz? The choice of resampling a 44.1kHz signal to 96kHz or 192kHz is entirely about piggy-backing on the new high-rez formats for marketing purposes. In fact, there is potential for loss of information by resampling asymmetrically rather than by integer multiples.
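To make the integer-vs-non-integer point concrete, here is a minimal sketch using SciPy's polyphase resampler (my own illustration, not how any particular player or DAC chip does it). Going to 176.4 kHz is a simple 4:1 job; going to 192 kHz needs a 640:147 rational filter. Whether the difference is audible is exactly the debate above; the sketch only shows where the asymmetry lives.

```python
import numpy as np
from scipy.signal import resample_poly
from fractions import Fraction

fs_in = 44_100
t = np.arange(fs_in) / fs_in                  # one second of samples
x = np.sin(2 * np.pi * 1_000 * t)             # 1 kHz test tone at 44.1 kHz

for fs_out in (176_400, 192_000):
    r = Fraction(fs_out, fs_in)               # 4/1 vs 640/147
    y = resample_poly(x, r.numerator, r.denominator)
    print(f"{fs_out} Hz: up {r.numerator}, down {r.denominator}, {len(y)} samples")
```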

Please refer to Charles Hansen (Ayre), Madrigal, Jeff Kalt (Resolution Audio), Wadia, or Theta. All have made multiple statements that upsampling is nothing more than a marketing tool. Maybe it's good for high end in this sense... certainly high-end redbook CD sales jumped after the "upsampling" boom. Magazine reviewers seemed eager to turn a blind eye, since their livelihood depended on a healthy high-end market. Waiting 2-5 years for decent universal players certainly wasn't attractive, nor was reviewing the latest $20k redbook CD player, when the general consensus at the time was that even bad high rez would blow away great redbook.
Philips used 4-times oversampling in their first CD players so that they could achieve 16-bit accuracy from a 14-bit D/A. At that time, the 16-bit D/As, as used by Sony, were lousy, but the 14-bit units that Philips used were good. The really cool part of the story is that Philips didn't tell Sony what they were up to until it was too late for Sony to respond, and the Philips players ran circles around the Sony ones.

In Sean's explanation, the second set of 20 dots in set B should not be random; those dots should lie somewhere between the two dots adjacent to them.

Here is my explanation.

Assume there is a smoothly varying analog waveform with values at uniform time spacing, as follows. (Actually there are an infinite number of in-between points).

..0.. 1.. 2.. 3.. 4.. 5.. 6.. 7.. 8.. 9.. etc

If the waveform is sampled at a frequency 1/4 that of the example (44.1 kHz, perhaps), the data will look like the following:

..0.......... 3.......... 6...........9..... THIS IS ALL THERE IS ON THE DISC.

A D/A reading this data, at however high a frequency, will output an analog "staircase" voltage as follows:

..000000000000333333333333666666666666999999999

But suppose we read the digital data four times faster than it is really changing, add up the four most recent values, and divide by 4:

First point:   (0+0+0+3)/4 = 0.75
Second point:  (0+0+3+3)/4 = 1.5
Third point:   (0+3+3+3)/4 = 2.25
Fourth point:  (3+3+3+3)/4 = 3.0
Fifth point:   (3+3+3+6)/4 = 3.75
Sixth point:   (3+3+6+6)/4 = 4.5
Seventh point: (3+6+6+6)/4 = 5.25
Eighth point:  (6+6+6+6)/4 = 6.0
...and so on.

Again we have a staircase that only approximates the instantaneous analog voltage generated by the microphone when the music was recorded and digitized, but the steps of this staircase are much smaller than those of the staircase obtained when the digital data stream from the disc is processed only at the same rate at which it was digitized. The smaller steps mean that the staircase stays closer to the original analog ramping signal.

Note also that we are now quantized at 0.25 instead of 1, which is the quantization of the data stream obtained from the disc. A factor of 4. That's like 2 bits of additional resolution. That's how Philips got 16-bit performance from a 14-bit D/A.
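Here is the same arithmetic in a few lines of Python, for anyone who wants to play with it. It is only the running-average example above, not a real oversampling filter (real digital filters use longer, weighted FIR kernels), and the values 0, 3, 6, 9 are just the disc samples from the example.

```python
# Disc samples from the example, held for 4 periods of the faster clock.
disc = [0, 3, 6, 9]
held = [v for v in disc for _ in range(4)]     # the coarse "staircase"

# 4-point running average at the fast rate (the finer staircase).
avg = [sum(held[max(0, i - 3):i + 1]) / 4 for i in range(len(held))]
print(avg)
# [0.0, 0.0, 0.0, 0.0, 0.75, 1.5, 2.25, 3.0, 3.75, 4.5, 5.25, 6.0,
#  6.75, 7.5, 8.25, 9.0]

# The step size has dropped from 1 to 0.25: a factor of 4, i.e. log2(4) = 2
# extra bits -- the 14-bit-to-"16-bit" trick described above.
```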
The term "Error Correction" applies to a scheme where redundant data is combined with the information in such a way that a decoding algorithm can recover the original information WITHOUT ANY LOSS, provided that the number of transmission errors, and their distribution in time, does not exceed what the encoding algorithm is designed to deal with. This is not a "bandaid" for poor transmission. It is a way to make it possible to run the hardware at much higher bandwidth because errors can be alowed to occur.

"Interpolation" is not "Error Correction". Interpolation is what you can do if the errors do exceed what your algorithm is designed to deal with. Depending on what the signal is doing at the time that transmission glitches occur interpolation may or may not result in significant error in recovery of the information.
Thanks for the feedback Sean. Putting your orig. & 2nd post together, I see what you were trying to say.