Eldartford's sentence: "In Sean's explanation the second set of 20 dots in set B should not be random. Those dots should lie somewhere between the two dots adjacent to them".
is exactly correct. One legitimate location for "somewhere between" could be the midpoint. There is no problem with that at all. If the waveform looks smooth, what's the issue with that? How in the world do you know that the waveform at this point on the CD is not supposed to be smooth? There could be a consistently low-volume passage, or a consistently loud passage of one particular instrument, that creates a smooth area. Entirely possible.
Anyway, the thing to remember in your 2nd example is that when you placed that "random" set of points, you were looking at the output of the digital estimation filter. The output of the digital estimation filter is completely deterministic & it is designer created. The output simply cannot be random - no way!! It lies "somewhere between" the actual sampled data points off the CD, along a line determined by the algorithm of the digital estimation filter. This is the (digital) filter that creates all those signature sounds (like Wadia's house sound, Sim Audio's, dCS's, etc, etc) that many love & equally many hate.
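To make the point concrete, here is a minimal sketch of what an estimation filter does, assuming plain linear interpolation as a stand-in for whatever algorithm a given designer actually chose (Wadia, dCS, etc. each use their own):

```python
# Minimal sketch: the points a 4X estimation filter inserts are completely
# determined by its algorithm - never random. Linear interpolation is used
# here purely as a stand-in; a real DAC's digital filter uses its own
# designer-chosen algorithm.

def interpolate_4x(samples):
    """Insert 3 deterministic points between every pair of CD samples."""
    out = []
    for a, b in zip(samples, samples[1:]):
        for k in range(4):                      # k = 0, 1, 2, 3
            out.append(a + (b - a) * k / 4)     # always lies between a and b
    out.append(samples[-1])
    return out

cd_samples = [0.0, 0.8, 0.5, -0.3]              # made-up 44.1 kHz samples
print(interpolate_4x(cd_samples))
# Run it twice & you get the identical output: deterministic, not random.
```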
In Eldartford's example, I think he used a smooth waveform only to illustrate the point. This is the way it is usually introduced in DSP 101 classes. His particular example pertains to oversampling. When he shows the repeating of numbers, he is considering 12X oversampling, & when he does the div-by-4, he is considering 4X oversampling. The div-by-4 most probably represents the digital FIR that follows any over (or up) sampling operation. See the sketch below.
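Here is a rough sketch of how I read that example, assuming the "repeating of numbers" step is a zero-order hold (each sample repeated) and the div-by-4 is a short 4-tap averaging FIR. Those two assumptions are mine for illustration; a real oversampling DAC uses a much longer, carefully designed FIR:

```python
# Rough sketch: repeat each CD sample (zero-order hold) to raise the rate,
# then run a short averaging FIR over the result. The repeat factor of 4 and
# the 4-tap boxcar are assumptions for illustration only.

def zero_order_hold(samples, factor=4):
    """Repeat each sample 'factor' times (the 'repeating of numbers' step)."""
    return [s for s in samples for _ in range(factor)]

def boxcar_fir(samples, taps=4):
    """4-tap moving average: sum the last 4 values & divide by 4."""
    out = []
    for n in range(len(samples)):
        window = samples[max(0, n - taps + 1):n + 1]
        out.append(sum(window) / taps)          # the 'div-by-4'
    return out

cd_samples = [0.0, 0.8, 0.5, -0.3]
held = zero_order_hold(cd_samples)              # stair-step at 4X the rate
smoothed = boxcar_fir(held)                     # FIR smooths the stair-steps
print(smoothed)
```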
My only question here is why the example considered 12X oversampling and then later decimated to 4X. It should have just started off with a 4X DAC. Anyway.....
You mentioned "error correction" for the 2nd time. Error correction in redbook CD playback has nothing to do w/ upsampling or oversampling. Error correction is NOT designed to correct the music written on the CD. It is designed to compensate for high-speed read & transmission of the bits where read errors will occur (owing to the high speed read operation). I think Eldartford's succinct explanation is exactly what error correction is all about. Any other idea of it is a mistaken impression.
I have read the recent verbose upsampling text by Moncrieff on IAR. IMHO, I have not read more bull**** anywhere that filled up so many pages. Very little of what he has written is correct. AFAIK, Moncrieff is very lost when it comes to up & oversampling. If you are taking your lessons from him, then I can see why you are mistaken too. Get hold of a DSP text (like Oppenheim & Schafer or Rabiner & Gold) & read that. You'll get the correct explanation of upsampling & oversampling.