Upsampling: Truth vs. Marketing


Has anyone done a blind A/B test of the upsampling capabilities of a player? If so, what was the result?

The reason I ask is that all the players and converters that do support upsampling go to 192 from 44.1, and that is just plain wrong.

This would add a huge amount of interpolation error to the conversion, and it should sound like crap by comparison.
I understand why manufacturers don't go the logical route of 176.4 kHz: once again, they would have to write more software.

All in all, I would like to hear from users who think their player sounds better playing Redbook (44.1) upsampled to 192. I have never come across a sample-rate converter chip that does this well sonically, and if one exists it is truly a silver bullet. Then again... 44.1 should only be upsampled to 88.2 or 176.4, unless you can first go to many GHz and then downsample to 192, and even then you will have interpolation errors.
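For the curious, the ratios work out like this. A quick sketch in Python (purely illustrative, not a claim about any particular converter chip) showing which target rates are integer multiples of 44.1 kHz and which are not:

from fractions import Fraction

source = Fraction(44100)  # Redbook sample rate in Hz

for target in (88200, 176400, 96000, 192000):
    ratio = Fraction(target) / source
    kind = "integer multiple" if ratio.denominator == 1 else "fractional ratio"
    print(f"44.1 kHz -> {target / 1000:g} kHz: ratio = {ratio} ({kind})")

# Prints ratios of 2, 4, 320/147, and 640/147 respectively.

An integer ratio (2x or 4x) lets the converter place every new sample on the original sample grid; a ratio like 640/147 forces some form of fractional interpolation, which is where the extra interpolation error described above comes in.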
izsakmixer
Since Sean has confessed his error, I will do the same. My explanation actually showed a ramping signal of 3 units in four samples. While this was not incorrect, it is not consistent with the analog signal that I assumed at the beginning. The following is an updated version of my explanation, for posterity.

Philips used 4x oversampling in their first CD players so that they could achieve 16-bit accuracy from a 14-bit D/A. At that time, 16-bit D/As, like the ones Sony used, were lousy, but the 14-bit units that Philips used were good. The really cool part of the story is that Philips didn't tell Sony what they were up to until it was too late for Sony to respond, and the Philips players ran circles around the Sony ones.

In Sean's explanation the second set of 20 dots in set B should not be random. Those dots should lie somewhere between the two dots adjacent to them.

Here is my explanation.

Assume there is a smoothly varying analog waveform with values at uniform time spacing, as follows. (Actually there are an infinite number of in-between points).

..0.. 1.. 2.. 3.. 4.. 5.. 6.. 7.. 8.. 9.. 10. 11. 12 etc.

If the waveform is sampled at a frequency 1/4 that of the uniform time spacing of the example (44.1 kHz, perhaps), the data will look like the following:

..0............... 4.............. 8...............12..
THIS IS ALL THERE IS ON THE DISC.

A D/A reading this data, at however high a frequency, will output an analog "staircase" voltage as follows:

..000000000000000004444444444444444488888888888888812

But suppose we read the digital data just four times faster than it is really changing, add up four consecutive values, and divide by 4:

First point.....(0+0+0+4)/4 = 1
Second point....(0+0+4+4)/4 = 2
Third point.....(0+4+4+4)/4 = 3
Fourth point....(4+4+4+4)/4 = 4
Fifth point.....(4+4+4+8)/4 = 5
Sixth point.....(4+4+8+8)/4 = 6
Seventh point...(4+8+8+8)/4 = 7
Eighth point....(8+8+8+8)/4 = 8
....And so on

Again we have a staircase that only approximates the instantaneous analog voltage generated by the microphone when the music was recorded and digitized, but the steps of this staircase are much smaller than those of the staircase obtained when the digital data stream from the disc is processed only at the rate at which it was digitized. The smaller steps mean that the staircase stays closer to the original, continuously ramping analog signal.

Note also that we are now quantized at 1, instead of 4, which is the quantization of the raw data stream obtained from the disc. A factor of 4. That’s like 2 bits of additional resolution. That’s how Philips got 16-bit performance from a 14-bit D/A.
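For anyone who wants to play with these numbers, here is a small sketch in Python of the same walkthrough. The sample values and the four-wide window are taken straight from the example above; it is not meant to model a real DAC's digital filter:

# Disc samples from the example: one value every four output periods.
disc = [0, 4, 8, 12]

# Step 1: zero-order hold -- repeat each disc value four times.
# This is the coarse staircase a D/A produces with no interpolation.
held = [v for v in disc for _ in range(4)]
# held == [0, 0, 0, 0, 4, 4, 4, 4, 8, 8, 8, 8, 12, 12, 12, 12]

# Step 2: slide a four-sample average over the held data, reproducing
# the "First point ... Eighth point" arithmetic above.
smoothed = [sum(held[i - 3:i + 1]) / 4 for i in range(4, len(held))]
# smoothed == [1.0, 2.0, 3.0, ..., 12.0]

# The coarse staircase moves in steps of 4; the averaged one moves in
# steps of 1 -- the factor-of-4 reduction in step size described above.
coarse_step = max(b - a for a, b in zip(held, held[1:]))        # 4
fine_step = max(b - a for a, b in zip(smoothed, smoothed[1:]))  # 1.0
print(coarse_step, fine_step)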
The Esoteric DV-50 is another player that does not use 96K, 192K, or 384K. The first upsampling point on the DV-50 is 352.8K, and the higher selections continue to follow that pattern.
Hmmm... I'm surprised that nobody jumped all over me for stating the obvious. That is, digital is a poor replication of what is originally an analogue source.

I'm also glad to see that nobody contradicts the fact that having more sampling points can only improve the linearity of a system that is less than linear to begin with. After all, if digital were linear, we could linearly reproduce standardized test tones. The fact that we can't do that, at least not yet with current standards, would only lead one to believe that analogue is still a more accurate means of reproducing even more complex waveforms.

Converting analogue to digital and back to analogue again only lends itself to potential signal degradation and a loss of information. One would think that by sampling as much of the data as possible (via upsampling above the normal sampling rate), one would have the greatest chance of better performance, with a reduction in the amount of non-linearities that already exist in the format. Evidently, there are those who see things differently. Sean
Sean, thank you for all the diagrams, and patient tutoring. It all makes logical sense, sure.

If I had never heard the Audio Note, I would be looking at one of the top upsampling players on the market. Oh well.
More corrections! They don't affect the basic idea, but could easily confuse people. Sorry about that. Hopefully this is it.

If the waveform is sampled at a time interval four times the uniform spacing of the example (44.1 kHz, perhaps), the data will look like the following:

Note also that we are now quantized at 1/4, i.e. (0+0+0+1)/4, instead of 1, which is the quantization of the raw data stream obtained from the disc. A factor of 4. That’s like 2 bits of additional resolution. That’s how Philips got 16-bit performance from a 14-bit D/A.

OK Sean...Sorry you felt left out because no one jumped all over you. The following is my modification of your statement.

Some digital representations of analog (analogue in England) waveforms are a poor replication of the analog source because they lack the resolution (bits) and sampling rate appropriate for the bandwidth of the signal. Inaccuracy is not inherent to the digital format, but represents a design decision regarding what level of error is acceptable.
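As a rough way to put numbers on "resolution (bits) and sampling rate appropriate for the bandwidth," here is a sketch using the standard textbook limits for an ideal PCM channel (Nyquist bandwidth and the 6.02N + 1.76 dB figure for a full-scale sine). It says nothing about how any particular player actually performs:

def pcm_limits(bits, sample_rate_hz):
    # Ideal-case bandwidth (Hz) and peak SNR (dB) for a PCM channel.
    bandwidth_hz = sample_rate_hz / 2        # Nyquist limit
    peak_snr_db = 6.02 * bits + 1.76         # ideal quantizer, full-scale sine
    return bandwidth_hz, peak_snr_db

for bits, rate in [(14, 44100), (16, 44100), (16, 176400), (24, 192000)]:
    bw, snr = pcm_limits(bits, rate)
    print(f"{bits} bits @ {rate / 1000:g} kHz: ~{bw / 1000:g} kHz bandwidth, ~{snr:.0f} dB peak SNR")

# e.g. 14 bits @ 44.1 kHz gives ~22.05 kHz and ~86 dB; 16 bits gives ~98 dB.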