Ritteri writes:
What follows is a simplification, but you'll get the idea. Members, feel free to correct me as I'm only an enthusiastic amateur and am keen to learn.
In order to avoid "aliases" (a byproduct of sampling) when converting the original analogue signal to digital, no signal content may be present at or above half the sampling frequency (the Nyquist frequency). Since the Redbook sampling frequency is 44.1kHz, this means no signal may be present at or above 22.05kHz.
Let's say there was a signal at 32.1kHz. Sampling would fold it down to an "alias" - an artifact - at 44.1 - 32.1 = 12kHz, which you can hear. Clearly we don't want this to happen. So the signal must be way down in level by 22.05kHz.
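If anyone wants to see that folding happen, here's a quick numpy sketch (the 32.1kHz tone is just an illustrative choice):

```python
import numpy as np

fs = 44100.0            # Redbook sampling rate
f_in = 32100.0          # tone above the 22.05kHz Nyquist limit
t = np.arange(441) / fs # 10ms worth of sample instants

# Sampling the 32.1kHz sine at 44.1kHz...
sampled = np.sin(2 * np.pi * f_in * t)

# ...produces exactly the sample values of a 12kHz sine (folded down:
# 44100 - 32100 = 12000), apart from a sign flip. Once sampled, the two
# are indistinguishable - that's the alias.
alias = np.sin(2 * np.pi * (fs - f_in) * t)
print(np.allclose(sampled, -alias))   # True
```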
Yet, to have accurate reproduction to 20kHz (the nominal limit of human hearing), we want normal signal strength (whatever there is in the performance) at 20kHz.
So the signal must be passed through a very steep filter that leaves the signal at 20kHz untouched yet is 90dB down at 22.05kHz. The famous "brick wall". Such filters are hard to build in analogue without ripple and phase problems near the top of the audio band.
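As a rough illustration of how brutal that spec is, scipy can estimate the minimum order an analogue elliptic filter would need to meet it (the 0.1dB ripple tolerance is my assumption, not a standard):

```python
import numpy as np
import scipy.signal as signal

# Minimum order of an analogue elliptic filter meeting the spec:
# at most 0.1dB ripple up to 20kHz (assumed tolerance),
# at least 90dB attenuation at 22.05kHz. Frequencies in rad/s.
order, _ = signal.ellipord(wp=2 * np.pi * 20000, ws=2 * np.pi * 22050,
                           gpass=0.1, gstop=90, analog=True)
print(order)   # roughly twelfth order, just for a ~2kHz transition band
```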
Oversampling attempts to overcome this problem. If the sampling frequency is (say) 88.2kHz, then we only have to pass the signal through an analogue filter that is flat at 20kHz and 90dB down at 44.1kHz. Still pretty steep and difficult to make without nonlinearities, but doable. To get down to 44.1kHz, a digital low-pass (easy to make steep and precise in the digital domain) first removes everything above 22.05kHz, and then we keep (essentially) every second sample point, saving those amplitudes as the digital recording.
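A sketch of that decimation step with scipy (the signal content here is made up for illustration):

```python
import numpy as np
import scipy.signal as signal

fs_high = 88200
t = np.arange(0, 0.1, 1 / fs_high)

# Made-up capture: an audible 1kHz tone plus ultrasonic junk at 30kHz
# that must not be allowed to fold into the final 44.1kHz stream.
x = np.sin(2 * np.pi * 1000 * t) + 0.5 * np.sin(2 * np.pi * 30000 * t)

# decimate() runs a digital low-pass first, then keeps every second
# sample - the "choose every second sample point" step, done safely.
y = signal.decimate(x, 2)
print(len(x), len(y))   # 8820 4410
```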
Let's say the studio is recording digitally at 96kHz with 24-bit words. They make the recording, mix it in the digital domain, and now they have to prepare it for the Redbook format. Lots of very funky mathematics to convert down to 16/44.1: the two rates are in a ratio of 147:320, and the word length has to be dithered down from 24 to 16 bits.
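The rate part of that funky mathematics is rational-ratio resampling, which scipy exposes directly. A sketch (the input here is just stand-in noise):

```python
import numpy as np
import scipy.signal as signal

fs_studio, fs_cd = 96000, 44100     # 44100/96000 = 147/320 in lowest terms
x = np.random.randn(fs_studio)      # one second of stand-in 96kHz audio

# Polyphase resampling: conceptually upsample by 147, low-pass filter,
# then downsample by 320 (scipy does this efficiently in one pass).
y = signal.resample_poly(x, up=147, down=320)
print(len(y))                       # 44100 samples - one second at CD rate
```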
Consider now a studio recording in DSD. They make the recording, mix it (DSD mixers are more available now) and put that on the disc.
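DSD is a 1-bit, very-high-rate format (2.8224MHz, i.e. 64 x 44.1kHz, for SACD) built on sigma-delta modulation. A toy first-order modulator shows the principle; real DSD uses much higher-order noise shaping, so this is only a sketch of the idea:

```python
import numpy as np

def sigma_delta_1bit(x):
    """Toy first-order sigma-delta modulator: the local density of +1
    bits in the output tracks the input waveform."""
    out = np.empty_like(x)
    acc, prev = 0.0, 0.0
    for i, s in enumerate(x):
        acc += s - prev                   # integrate error vs. last output bit
        prev = 1.0 if acc >= 0 else -1.0  # 1-bit quantiser
        out[i] = prev
    return out

fs = 64 * 44100                           # DSD64 rate
t = np.arange(5000) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)    # a 1kHz test tone
bits = sigma_delta_1bit(x)

# A crude low-pass (moving average) pulls the audio back out of the bitstream.
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
print(np.corrcoef(x[100:-100], recovered[100:-100])[0, 1])   # close to 1
```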
Similarly with DVD-Audio. The studio could record stereo at 192kHz, probably mix digitally at that resolution, and save this on the DVD using lossless compression.
In principle, both are superior to Redbook.
Regards,
"I believe it can reproduce a perfect signal up to 22kHz."
That's incorrect.