speakers for 24/96 audio


Is it correct to assume that 24/96 audio would be indistinguishable from CD quality when listened to on speakers that are -3dB at 20kHz with a rapid high-frequency roll-off?

Or more precisely, that the only benefit comes from the shift from 16 to 24 bit, not the increased sample rate, since the higher-frequency content is filtered out anyhow?

Related to this, what advice would you have for a sub-$5k speaker set with good high-frequency capabilities for 24/96 audio?

thanks!
mizuno
Kijanki - 20kHz reproduction with a 44.1kHz sampling rate is perfect for sine waves, not "coarse". A higher sampling rate doesn't improve accuracy within the frequency response of the lower rate; it just extends the frequency response. That doesn't mean I think digital recording and reproduction are perfect overall, it just means that in terms of capturing the frequency-domain information at 20kHz, 44.1kHz sampling is completely sufficient to perfectly capture the sine waves. I think people confuse digital sampling with analog interpolation, and they aren't the same thing.
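For anyone who wants to poke at that claim numerically, here is a minimal sketch in Python/NumPy (the tone frequency, window size and time grid are only illustrative choices, not specs from any real DAC). It samples a 20kHz sine at 44.1kHz, rebuilds the waveform between the samples with Whittaker-Shannon (sinc) interpolation, and reports the worst-case error in the middle of the window:

import numpy as np

fs = 44100.0          # CD sampling rate (Hz)
f0 = 20000.0          # test tone (Hz), just below Nyquist (22.05kHz)

# Finite window of samples around t = 0 (an ideal reconstruction uses infinitely many)
n = np.arange(-2000, 2000)
samples = np.sin(2 * np.pi * f0 * n / fs)

# Reconstruct on a fine time grid near the middle of the window:
# x(t) = sum over n of x[n] * sinc((t - n/fs) * fs)
t = np.linspace(-0.001, 0.001, 2001)
recon = np.array([np.sum(samples * np.sinc((ti - n / fs) * fs)) for ti in t])
exact = np.sin(2 * np.pi * f0 * t)

print("max reconstruction error:", np.max(np.abs(recon - exact)))

The residual error comes from truncating the sinc sum to a finite window, which is essentially the caveat raised in the reply below; lengthening the window shrinks it further for a steady tone.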
"capturing the frequency domain information at 20KHz, 44.1KHz sampling is completely sufficient to perfectly capture the sine waves"

Maybe sufficient for sine waves, but not for music, because it would call for brick-wall filters that have very uneven group delay (non-linear phase if you prefer) and will cause wrong summing of harmonics. Such a setup will be OK for single-frequency reproduction but will be very unpleasant with music (a dynamic signal).
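If anyone wants to put rough numbers on that group-delay argument, here is a sketch using SciPy (the filter order, cutoff and tap count are arbitrary illustrative choices, not the filters in any actual player). It compares the group delay of a steep elliptic IIR low-pass near the top of the audio band against a linear-phase FIR with a similar cutoff; it only shows the trade-off, not how audible it is:

import numpy as np
from scipy import signal

fs = 44100.0

# Steep ("brick wall"-ish) elliptic low-pass: passband edge 20kHz, 90dB stopband spec
b_iir, a_iir = signal.ellip(N=8, rp=0.1, rs=90, Wn=20000, btype='low', fs=fs)
w, gd_iir = signal.group_delay((b_iir, a_iir), w=4096, fs=fs)

# Linear-phase FIR with a similar cutoff for comparison (constant but long delay)
taps = signal.firwin(numtaps=511, cutoff=20000, fs=fs)
_, gd_fir = signal.group_delay((taps, [1.0]), w=4096, fs=fs)

band = (w > 15000) & (w < 20000)   # top of the audio band, below the cutoff
print("IIR group delay, 15-20kHz: %.1f to %.1f samples"
      % (gd_iir[band].min(), gd_iir[band].max()))
print("FIR group delay, 15-20kHz: constant at about %.1f samples"
      % gd_fir[band].mean())

Whether that variation is audible is exactly what the rest of the thread argues about.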

Yes, it is coarse, because the Nyquist-Shannon theorem requires an infinite number of terms (samples). Fixing it with sin(x)/x works poorly for short bursts around 1/2 of the sampling frequency. The sound of instruments that produce continuous tones (like a flute) might not be affected, but anything with transients will sound wrong (piano, percussion instruments, etc.). Notice that when people compare analog to 16/44, the first thing they notice is the different sound of the cymbals.

On the other hand, if you still think it is a perfect system - enjoy.
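For the short-burst point above, here is a minimal sketch one can experiment with (Python/NumPy; the burst frequency, length and kernel width are arbitrary illustrative values). It reconstructs a gated 21kHz tone burst from its 44.1kHz samples with a truncated sin(x)/x kernel and measures the error against the original burst. Note the caveat in the comments: a gated burst is not band-limited, so part of the error is simply energy above Nyquist that no 44.1kHz chain could carry in the first place:

import numpy as np

fs = 44100.0
f0 = 21000.0                       # burst frequency, close to Nyquist (22.05kHz)

# A roughly 1.5ms gated sine burst. Gating spreads its spectrum above Nyquist,
# so an exact band-limited reconstruction of the *original* burst is impossible;
# the printout mixes kernel-truncation error with that unavoidable loss.
n = np.arange(4096)
t_n = n / fs
gate_n = (t_n > 0.020) & (t_n < 0.0215)
burst = np.sin(2 * np.pi * f0 * t_n) * gate_n

def sinc_reconstruct(x, t_eval, fs, half_width=64):
    # Truncated Whittaker-Shannon (sin(x)/x) reconstruction around each time point
    out = np.zeros_like(t_eval)
    for i, t in enumerate(t_eval):
        k0 = int(round(t * fs))
        k = np.arange(max(0, k0 - half_width), min(len(x), k0 + half_width))
        out[i] = np.sum(x[k] * np.sinc((t - k / fs) * fs))
    return out

t_fine = np.arange(0.019, 0.023, 1.0 / (8 * fs))   # 8x oversampled time grid
recon = sinc_reconstruct(burst, t_fine, fs)
gate_fine = (t_fine > 0.020) & (t_fine < 0.0215)
ideal = np.sin(2 * np.pi * f0 * t_fine) * gate_fine

print("peak |error| vs. the original (non-band-limited) burst:",
      np.max(np.abs(recon - ideal)))

Whether errors like that are audible on cymbals or piano is, again, the disagreement here; the sketch only gives something concrete to measure.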
"Maybe sufficient for sinewaves but not for the music because it would call for brick wall filters that have very uneven group delays (non-linear phase if you prefer) and will cause wrong summing of harmonics. Such setup will be OK for single frequency reproduction but will be very unpleasant with music (dynamic signal)."

I have no idea what you're talking about. The wrong summing of harmonics, and it'll be very unpleasant? I don't know about that, Kijanki. You talk like a technically competent person, but then you make these outlandish claims. If these summed harmonics are so screwed up, why is it that those of us with very good high-frequency hearing and high-quality speakers can't hear anything very unpleasant? And if they do sound so unpleasant, why doesn't higher-res material sound noticeably better when I listen to it through a Benchmark DAC?

I think you're exaggerating the issues and wrapping the arguments in technical-sounding reasoning about effects that don't audibly alter the music.
I was trying to show that 16/44 recording isn't a perfect process, and that's why recording is done at 24/192, but downsampling to 16/44 also takes away quality.

Digital reproduction (as well as analog) has limitations. Filtering screws up transient response, and 16-bit resolution is less than perfect.

Why is it difficult to hear a difference through the Benchmark? Possibly because available hi-res material is often poorly made (there are many complaints about that), while our systems and rooms have their own shortcomings.

The power amp might be a limiting factor, but it isn't as bad as Irvrobinson calculated. First of all, the S/N or THD+N of an amp is usually specified at 1W, and many amps are better than 96dB. In addition, we don't listen at 1W. For instance, if we take the Rowland 625 amp's S/N specification of 95dB at 1W into 8 ohms, it will be higher at the output power of 300W. We might also look at the residual output noise specified by Rowland, which is 55uV over 20Hz-20kHz unweighted. Since the output voltage at the nominal power of 300W is around 50V, that works out to an S/N of about 119dB. SACD reproduction is roughly equivalent to 20/96, requiring a dynamic range of 120dB. D/A converters are also limited to about 20 bits of performance.
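Just to make the arithmetic above easy to check, here are a few lines of Python using the figures quoted in the post (300W into 8 ohms, 55uV unweighted residual noise), plus the textbook 6.02*N + 1.76 dB quantization figure for comparison:

import math

power_w = 300.0        # rated power quoted for the Rowland 625
load_ohm = 8.0
noise_v = 55e-6        # residual output noise, 20Hz-20kHz unweighted

v_out = math.sqrt(power_w * load_ohm)        # about 49V RMS at rated power
snr_db = 20 * math.log10(v_out / noise_v)    # about 119dB
print("output: %.1f V rms, S/N at rated power: %.1f dB" % (v_out, snr_db))

# Ideal quantization dynamic range, 6.02*N + 1.76 dB
for bits in (16, 20, 24):
    print("%2d bits -> %.1f dB" % (bits, 6.02 * bits + 1.76))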

So to answer the original question: increasing resolution might be beneficial up to about 20 bits, assuming a good recording/file, system, and room. Increasing the sample rate will always be beneficial, to avoid the serious shortcomings I mentioned before.

I settled on standard Redbook reproduction not only for practical reasons but also because I cannot stand the hiss and pops of analog playback, which never let me forget that I'm not sitting "there" at the concert.
And if they do sound so unpleasant, why doesn't higher-res material sound noticeably better when I listen to it through a Benchmark DAC?

You can buy Tom Petty's Mojo on CD or in HD and compare. There is a difference, but most of it is due to the dynamic-range compression applied to the CD master to make it "hot" - see the CD loudness wars and what artists and producers do to the music to try to make it sell.

Basically they compress everything - especially drums - so that the dynamic range of peaks above RMS is usually no more than 6 to 10 dB, whereas a good pop/rock recording may have 20 dB peaks and a classical recording may have 30 dB peaks above RMS.

The HD files - such as those on HDtracks - are usually much less compressed than their 16/44.1 equivalents.
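The "peaks above RMS" figure described above is the crest factor, and it's easy to measure yourself. Here's a minimal sketch (Python/NumPy; the helper name and the synthetic test signal are just for illustration). A plain sine comes out at about 3dB, while heavily limited loudness-war masters sit around the 6-10dB mentioned above and good recordings measure much higher:

import numpy as np

def crest_factor_db(x):
    # Peak level above RMS level, in dB
    x = np.asarray(x, dtype=float)
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(peak / rms)

# Synthetic example: a 1kHz sine has a crest factor of about 3dB.
# Feed in any decoded track (mono float samples in -1..1) to see where it falls.
t = np.linspace(0, 1, 44100, endpoint=False)
print("sine crest factor: %.1f dB" % crest_factor_db(np.sin(2 * np.pi * 1000 * t)))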