@nonoise
16Bit has its noise floor limit at -96dB (audio can still be embedded below that level). Almost all music is mastered with peaks at 0dBFS (clipping), which at reference playback level is 105dBC, so your room would need a noise floor lower than 9dBC to hear the 16Bit noise floor. Even the most optimistic orchestral/classical recordings aren't mastered to peak higher than 120dBC, and there the room noise floor would have to be lower than 24dBC.
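Here's that math as a quick Python sketch if anyone wants to check it (I'm using the usual ~96dB approximation for 16Bit, values rounded):

```python
# Where the 16Bit noise floor lands acoustically, given where 0dBFS plays back.
def noise_floor_dbc(peak_dbc, bit_depth=16):
    return peak_dbc - 6.02 * bit_depth   # ~96.3 dB of range for 16Bit

print(noise_floor_dbc(105))  # typical loud master at reference -> ~8.7 dBC (~9)
print(noise_floor_dbc(120))  # very dynamic orchestral peak     -> ~23.7 dBC (~24)
```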
My living room is open-concept, so a bit noisier than normal, but its noise floor is about 46dBC. In terms of speaker wattage, a difference of 22dB (46-24) is the same as feeding a speaker 1W vs 160W. It's a staggering amount of difference, and I don't know of any residential rooms that quiet.
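For anyone who wants the wattage comparison spelled out, power ratios go as 10^(dB/10):

```python
# 22 dB gap between a 46 dBC room and a 24 dBC target, as a power (wattage) ratio.
gap_db = 46 - 24
print(round(10 ** (gap_db / 10)))   # ~158, i.e. roughly 1W vs 160W into the same speaker
```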
Also keep in mind those mastering values only apply to that genre of content (and not even the whole genre, only a portion of it), and most people don't listen at reference levels anyway (for movies I'm usually 8dB or 12dB below reference). It also doesn't account for the fact that no meaningful data sits that low, especially when music >70dB louder is playing over it and masking it.
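And turning down below reference drops the embedded noise floor by the same amount. Using my movie trims on the 105dBC example from above:

```python
# The ~9 dBC noise floor from the 105 dBC case, after typical listening trims.
floor_at_reference = 105 - 96
for trim_db in (-8, -12):
    print(f"{trim_db} dB from reference -> noise floor at ~{floor_at_reference + trim_db} dBC")
```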
So no, you won’t hear any benefit going from 16Bit to 24Bit.
Oh, and I haven't even mentioned noise-shaped dither, which can push a 16Bit signal's audible noise floor down to something like -105dB to -120dB in the bands where hearing is most sensitive. So again, 16Bit is enough. It also shows that things like jitter have been a non-issue for many years; even Apple's USB-C dongle DAC measures better than -110dB on a jitter test. Not saying every modern DAC is immune, but any competent one (even the $9 Apple dongle) has no audible issues with jitter.
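If anyone's curious what noise-shaped dither actually does, here's a toy sketch. The plain first-order error-feedback shaper and the 4 kHz "sensitive band" cutoff are my own simplifications, not what commercial mastering dithers use; it just quantizes a quiet tone to 16Bit two ways and compares how much of the noise power lands where hearing is most sensitive:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44_100
n = fs                                        # 1 second of a -20 dBFS 1 kHz tone
x = 0.1 * np.sin(2 * np.pi * 1000 * np.arange(n) / fs)

def quantize_16bit(signal, shape_noise=False):
    """Quantize to 16Bit steps with +/-1 LSB TPDF dither; optional 1st-order shaping."""
    q = 1.0 / 32768.0                         # one 16Bit step for a +/-1.0 full scale
    out = np.empty_like(signal)
    err = 0.0
    for i, s in enumerate(signal):
        s_in = s - err if shape_noise else s  # feed back last error -> highpass the noise
        dither = (rng.random() - rng.random()) * q
        y = np.round((s_in + dither) / q) * q
        err = y - s_in                        # total quantization error of this sample
        out[i] = y
    return out

freqs = np.fft.rfftfreq(n, 1 / fs)
for label, shaped in (("flat TPDF dither ", False), ("1st-order shaped ", True)):
    noise = quantize_16bit(x, shape_noise=shaped) - x
    spec = np.abs(np.fft.rfft(noise)) ** 2
    inband_db = 10 * np.log10(spec[freqs < 4000].sum() / spec.sum())
    print(f"{label}: in-band noise share = {inband_db:5.1f} dB (more negative = quieter where ears are sensitive)")
```

The shaped run puts noticeably less of the noise power below 4 kHz; real noise-shaping curves push it even harder toward the top of the band, which is where those effective -105dB to -120dB figures come from.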