48kHz vs 96kHz: audible?


As a so-called audiophile, I find it easy to lose my bearings in these discussions and end up doubting, or at least questioning, whether a subtlety I hear is real or imagined.
 
Today, while occupied with a pastime, I had Holst’s "The Planets" playing in the background, though not at a low volume. Something didn’t sound right: the strings in particular were a little abrasive. I noticed this on "Mars," the first movement, so it didn’t take me long to perk up. On closer examination, the DAC front panel was reporting a 48kHz sample rate, and I knew this version of The Planets is 96kHz. Sure enough, JRiver Media Center (MC) was converting all PCM data (whether higher or lower) to 48kHz on playback. I reset the MC settings so that all PCM rates play back at their native rates (up to the capability of my DAC), and all is well now.
 
Sometime in the recent past, whether due to an application or OS upgrade (of which there was one a few days ago), the MC sample rate conversion table got corrupted or reverted to a default configuration.
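For anyone who wants to rule out the source files themselves before blaming the player or the DAC, here is a small sketch (my own illustration, assuming a Python environment with the soundfile package and a hypothetical file path) that reports a file's native sample rate and bit depth:

```python
import soundfile as sf

# Hypothetical path to one track of the 24/96 release.
info = sf.info("The Planets/01 Mars.flac")

print(info.samplerate)  # expect 96000 for a 24/96 file
print(info.subtype)     # e.g. "PCM_24"
```

If the file reports 96000 here but the DAC panel shows 48kHz, the conversion is happening somewhere in the playback chain, as it was in my case.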
 
It would seem that I am able to hear the difference between 48kHz and 96kHz, at least under these circumstances. The difference was enough that I noticed it while passively listening (I was focused on drawing; the music was “background”) before I suspected a technical issue.

I wonder whether I could have heard this difference in a formal ABX test session. In my past experience with ABX testing, when the differences between the test objects are subtle, observations are easily obscured by mental noise: test anxiety, listening fatigue (hearing the same passage over and over), and tedium. In my case above, by contrast, I noticed the difference when I was relaxed and focused on something else entirely.

I am interested in thoughtful replies.

mcdonalk

Very interested in the topic, having spoken to a recording engineer friend of mine and having read what I can comprehend. I sort of understand now why hi-res matters in the studio and in mastering. But I'm having trouble understanding how, if it's done properly in the studio at hi res, there could be any audible difference between a release at 16/44.1 and a release at, say, 24/96 or even higher, again assuming the same master was used. For bit depth, wouldn't it only matter if there were a dynamic range that doesn't practically exist, and for sample rate, wouldn't it only affect ranges we can't hear?

I have read that the difference we think we hear is either (a) a different master was used for the hi-res release, so we're not really listening to the same recording when we compare CD to HR, or (b) it's all in our heads. My general observation from my reading is that people on the recording/studio side think hi-res in the playback environment is not really a thing, while people in the playback business think it's massively important. I don't know the answers but am willing to learn, especially about the science/math behind it (I'm less interested in people's listening experiences, since I know they cover the full spectrum of opinion and we're all subject to bias whether we like it or not).
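For the science/math side of that question, the usual back-of-the-envelope numbers come from two textbook formulas: roughly 6dB of dynamic range per bit, and a highest representable frequency of half the sample rate. A quick sketch (my own illustration, not a measurement of any release):

```python
def dynamic_range_db(bits: int) -> float:
    # Theoretical SNR of an ideally dithered quantizer: ~6.02 dB per bit.
    return 6.02 * bits + 1.76

def nyquist_khz(sample_rate_hz: int) -> float:
    # Highest frequency a given sample rate can represent.
    return sample_rate_hz / 2 / 1000

for bits in (16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB dynamic range")

for fs in (44_100, 96_000):
    print(f"{fs} Hz sampling: content up to {nyquist_khz(fs):.2f} kHz")
```

That works out to roughly 98dB vs 146dB of theoretical dynamic range, and 22.05kHz vs 48kHz of bandwidth.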

I had some songs in both 16/44.1 and 24/96, and there was a definite difference. The 24/96 versions sounded smoother and fuller/richer. Anyway, that was my experience.

…like I said… check out 2L… a recording engineer of renown runs the place…

But I’m having trouble understanding how, if it’s done properly in the studio at hi res, there could be any audible difference between a release at 16/44.1 and a release at, say, 24/96 or even higher, again assuming the same master was used.

My understanding is that filters are used to remove content and digital-processing noise above half the sampling rate (the Nyquist frequency). For a 44.1kHz sampling rate that puts the filter at 22.05kHz, right on top of the audible range, so it has to be very steep and can cause audible side effects near the top of that range. 96kHz puts the filter at 48kHz, well above the audible range. Of course digital filtering keeps improving, so whatever harm it does is probably lessening as time goes on.

I’m not a technical person so maybe someone who is can give a more accurate explanation of what is going on between 44.1 and hi-res.
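To put rough numbers on the filter point above, the sketch below (my own illustration, using SciPy's Kaiser-window design estimate rather than any particular DAC's actual filter) asks how long a low-pass filter has to be if it must pass 20kHz but be about 100dB down by the Nyquist frequency of each rate:

```python
from scipy import signal

# Pass everything up to 20 kHz; be ~100 dB down by the Nyquist frequency.
# The narrower the transition band, the longer (steeper) the filter.
for fs, stop_hz in ((44_100, 22_050), (96_000, 48_000)):
    transition_hz = stop_hz - 20_000
    numtaps, _beta = signal.kaiserord(ripple=100, width=transition_hz / (fs / 2))
    print(f"{fs} Hz: {transition_hz} Hz transition band -> about {numtaps} taps")
```

The 44.1kHz case has only about 2kHz of room between 20kHz and 22.05kHz, so its filter comes out several times longer and steeper than the 96kHz case, which has 28kHz of room. That steepness right next to the audible band is what the concern above is about.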

 

From my experience, I'd suggest any differences being heard come from the conversion process of altering sample rates rather than from the rates themselves.

In other words, do not convert sample rates.

Perhaps there is some equipment that can do it without downgrading the sound quality.
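For what it's worth, software sample-rate conversion is easy to try for yourself. A minimal sketch (my own, using SciPy's polyphase resampler; not a claim about what any particular player or DAC does internally) converting 96kHz material down to 48kHz:

```python
import numpy as np
from scipy import signal

fs_in = 96_000
t = np.arange(fs_in) / fs_in            # one second of samples
x = np.sin(2 * np.pi * 10_000 * t)      # 10 kHz test tone at 96 kHz

# Polyphase resampling from 96 kHz down to 48 kHz (factor of 2).
# The quality of the internal anti-alias low-pass filter is exactly
# where one resampler differs from another.
y = signal.resample_poly(x, up=1, down=2)

print(len(x), len(y))                   # 96000 -> 48000 samples
```

Whether a given converter, in software or in hardware, performs that step transparently is the open question in this thread.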