Not broken; it's a design decision to go NOS without a filter.
It's broken in the sense that such a design decision breaks the theory of digital audio. Just as a design decision not to band-limit the input breaks the theory.
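For anyone who wants to see what skipping the input band-limiting actually does, here is a small NumPy sketch of my own (the 30 kHz tone and 44.1 kHz rate are just illustrative numbers, not anything from this thread): a component above Nyquist doesn't disappear, it folds back into the audible band as an alias.

# Sketch: sampling a 30 kHz tone at 44.1 kHz without band-limiting it first.
import numpy as np

fs = 44_100                    # sample rate, Hz
f_in = 30_000                  # input tone above the 22.05 kHz Nyquist limit
n = np.arange(fs)              # one second of samples
x = np.sin(2 * np.pi * f_in * n / fs)

# Locate the dominant component of the sampled signal.
spectrum = np.abs(np.fft.rfft(x))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
print(f"strongest component: {freqs[np.argmax(spectrum)]:.0f} Hz")
# Prints 14100 Hz: the 30 kHz tone has aliased to fs - f_in, right into the audio band.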
"such a design decision breaks the theory of digital audio."

Not really, it presumes that subsequent filters (mechanical limits in your speakers and your ear, which respond per F=MA; inherent limits in subsequent components) achieve the filtering. There will be no aliasing after the DAC. Yes, there could be HF noise residue, but hearing is highly attenuated above 22 kHz (if present at all) anyway.

"Just as a design decision not to band-limit the input breaks the theory."

That would truly violate Nyquist's paper. Without band-limiting, various forms of aliasing and their effects can occur. Really quite different.
"Not really, it presumes that subsequent filters (mechanical limits in your speakers and your ear, which respond per F=MA; inherent limits in subsequent components) achieve the filtering. There will be no aliasing after the DAC. Yes, there could be HF noise residue; but hearing is highly attenuated above 22 kHz (if present at all) anyway."

Any presumptive filters won't be ones that do it by the book. HF noise residue most certainly will be present in the absence of an anti-imaging filter. I would agree that subjectively, having images present is far, far preferable to having aliases.
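To put rough numbers on that HF residue, here is a quick sketch of my own: a simple zero-order-hold model of a NOS DAC playing a 1 kHz tone at 44.1 kHz. The figures are illustrative, not measurements of any particular DAC, but they show that without an anti-imaging filter the images around multiples of the sample rate are attenuated only by the hold's own rolloff.

import numpy as np

fs = 44_100                      # DAC sample rate, Hz
f_tone = 1_000                   # test tone
oversample = 8                   # finer grid standing in for the analog output
n = np.arange(fs)                # one second at the native rate
samples = np.sin(2 * np.pi * f_tone * n / fs)

# NOS behaviour: each sample is simply held; no interpolation/anti-imaging filter.
held = np.repeat(samples, oversample)
spectrum = np.abs(np.fft.rfft(held))

# One-second record -> 1 Hz bins, so the bin index equals the frequency in Hz.
for f_img in (f_tone, fs - f_tone, fs + f_tone, 2 * fs - f_tone):
    level = 20 * np.log10(spectrum[f_img] / spectrum[f_tone])
    print(f"{f_img:>6} Hz: {level:6.1f} dB relative to the 1 kHz tone")
# The images at 43.1, 45.1 and 87.2 kHz come out only about 30-40 dB down:
# definitely present, but entirely above the band where hearing responds.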
itsjustme wrote (11-16-2020, 9:13am):
"such a design decision breaks the theory of digital audio." Not really, it presumes that subsequent filters (mechanical limits in your speakers and your ear, which respond per F=MA; inherent limits in subsequent components) achieve the filtering. There will be no aliasing after the DAC. Yes, there could be HF noise residue; but hearing is highly attenuated above 22 kHz (if present at all) anyway.

I think there would have to be a "theory of digital audio", at least on reconstruction, to break it. Of course, it also presumes that the subsequent filters are linear.
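On that linearity point, a small sketch of my own (the numbers are purely illustrative): if the ultrasonic images are left for a downstream stage that is not perfectly linear, their difference products can land back inside the audible band.

import numpy as np

fs_sim = 352_800                       # fine grid standing in for the analog signal
t = np.arange(fs_sim) / fs_sim         # one second
# The two images a NOS DAC leaves around 44.1 kHz when playing a 1 kHz tone.
images = 0.05 * (np.sin(2 * np.pi * 43_100 * t) + np.sin(2 * np.pi * 45_100 * t))

def mildly_nonlinear(x, k2=0.1):
    # Hypothetical downstream stage (amp, tweeter) with a small second-order term.
    return x + k2 * x * x

spectrum = np.abs(np.fft.rfft(mildly_nonlinear(images)))
# One-second record -> 1 Hz bins, so the bin index is the frequency in Hz.
rel = 20 * np.log10(spectrum[2_000] / spectrum[43_100])
print(f"in-band 2 kHz product: {rel:.1f} dB relative to the 43.1 kHz image")
# The 45.1 kHz - 43.1 kHz difference term lands at 2 kHz; a perfectly linear
# stage would produce nothing there.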
Read about Nyquist's theorem. Today's sample rates are even higher. Philips (and others) made a study of this during the 70s, before the first CDs were defined and introduced, and used thousands of people in listening experiments to reach a conclusion: no human can tell the difference between a properly sampled and then smoothed signal and the analog original. When the first CDs were introduced, neither the recording nor the playback equipment was as accurate as today. But in the past 10 years or so, the accuracy of ADCs and DACs has improved to an incredible point where no one will be able to tell the difference from the analog signal. Extremely accurate instrumentation MAY be able to show a difference, at a level hundreds of times more sensitive than the human ear, but that is a moot point since 99.9% of humans will not be able to detect even one tenth of that difference. Some on this thread will not accept what I am saying. They fall into that 0.1% category or are from another planet :-)
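For the "properly sampled and then smoothed" part, here is a small numerical sketch of my own (a 10 kHz tone at 44.1 kHz, rebuilt with the textbook Whittaker-Shannon sinc interpolation; nothing to do with the Philips experiments themselves): the samples contain everything needed to reconstruct the in-band waveform.

import numpy as np

fs = 44_100
f_tone = 10_000                          # well below the 22.05 kHz Nyquist limit
duration = 0.01                          # 10 ms block
n = np.arange(int(fs * duration))
samples = np.sin(2 * np.pi * f_tone * n / fs)

# Reconstruct ("smooth") on a 16x finer time grid with sinc interpolation.
t_fine = np.arange(int(fs * duration * 16)) / (fs * 16)
recon = np.array([np.sum(samples * np.sinc(fs * t - n)) for t in t_fine])
ideal = np.sin(2 * np.pi * f_tone * t_fine)

# Compare away from the block edges, where the finite sum truncates the sinc tails.
mid = slice(len(t_fine) // 4, 3 * len(t_fine) // 4)
err = np.max(np.abs(recon[mid] - ideal[mid]))
print(f"max reconstruction error mid-block: {err:.1e}")
# The error is small and comes only from truncating the sum at the edges;
# the ideal (infinite) reconstruction of a band-limited signal is exact.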