psnyder149,
I need more time to consider all your numbers, but just for starters, I question the use of upsampling, which doesn't directly relate to my question. If you record something at 16 bits and also at 24 bits, then the latter has more gradations of amplitude. (There is more to the psychoacoustics of increased resolution than amplitude gradations alone. The sampling rate, 88 kHz versus 44 kHz, may be a greater factor than the word length, i.e. the number of bits, and jitter, the variation in the timing of those samples, may be the most important of all. For simplicity, we can just discuss the number of bits for now.)
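Just to put rough numbers on "more gradations of amplitude", here is a little Python sketch (my own back-of-the-envelope illustration, not a measurement of anything):

import math

# How many amplitude codes each word length allows, and how small the
# smallest step is relative to full scale (roughly 6 dB per bit).
for bits in (16, 24):
    levels = 2 ** bits                              # distinct amplitude codes
    step_db = 20 * math.log10(1 / (levels - 1))     # smallest step vs. full scale
    print(f"{bits}-bit: {levels:,} levels, smallest step about {step_db:.0f} dB below full scale")

That prints about -96 dB for 16 bits and about -145 dB for 24 bits, which is all I mean by finer gradations.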
The problem with upsampling is that it doesn't reveal more information. If something is recorded at 16 bits and upsampled to 24 bits, you don't get the information content of a recording that was originally made at 24 bits. Mere multiplication doesn't give you the finer gradations the original 24-bit recording would have captured. You can't invent information (artificial resolution) just by magnifying the numbers. By the same reasoning, sample-rate conversion from 44 to 88 kHz (also upsampling; "oversampling" usually refers to what the converter itself does internally) doesn't produce the information content of a recording originally made at 88 kHz.
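Here is a small sketch of what I mean, assuming the 16-to-24-bit conversion is done the usual way, by shifting the 16-bit value into a 24-bit container (padding with zeros, i.e. multiplying by 256):

import random

# Hypothetical 16-bit samples, "upsampled" to 24 bits by shifting in 8 zero bits.
samples_16 = [random.randint(-32768, 32767) for _ in range(5)]
padded_24 = [s << 8 for s in samples_16]            # same as multiplying by 256

# Every padded value is a multiple of 256, so of the 16,777,216 possible
# 24-bit codes only the original 65,536 ever occur: no new gradations appear.
print(all(v % 256 == 0 for v in padded_24))         # True

The 24-bit container is bigger, but the information in it is exactly what the 16-bit recording already had.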
If you start with a 24-bit recording, there are various ways to reduce it to 16 bits (truncation, rounding, dithering). I agree that the extra low-order digits in a 24-bit word mean the rounded-down 16-bit result can be more accurate than an original 16-bit recording. But I believe that upsampling from 16 to 24 bits won't reproduce the information content of the original 24-bit word, or even of that carefully rounded-down 16-bit word.
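For the 24-to-16 reduction, here is a sketch of plain truncation versus rounding on a single made-up 24-bit sample value (in practice dither would be added as well, and I've left out clipping protection):

def to_16_truncate(s24):
    return s24 >> 8                 # just drop the low 8 bits

def to_16_round(s24):
    return (s24 + 128) >> 8         # round to the nearest 16-bit code

s24 = 1_000_400                     # arbitrary 24-bit sample value
print(to_16_truncate(s24), to_16_round(s24))    # 3907 vs. 3908

The extra 8 bits are what let the rounding decision be made correctly; an original 16-bit capture never had that information to begin with.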
I stand by my previous post, which doesn't directly relate to your discussion of upsampling. To me, the issue is whether a low-level digital signal, with the profound loss of information that comes from being left with only 3 or 7 bits in the examples I cited, is better or worse for resolution than a low-level analog signal with its noise. The low-level analog signal has essentially continuous gradations of amplitude, maybe the equivalent of 30 bits in digital terms. If the analog hiss is reasonably low, well below the signal level, the analog signal can have greater resolution than the digital one. But if there is a lot of hiss, then analog is worse.
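If you want to put my hand-waving about hiss into numbers, the usual rule of thumb is that roughly every 6 dB of signal-above-noise is worth about one bit of equivalent resolution. A sketch, with purely hypothetical signal-to-hiss figures of my own choosing:

def equivalent_bits(snr_db):
    # Rough rule of thumb: about 6 dB of signal-above-noise per bit.
    return snr_db / 6.02

# Hypothetical cases: a quiet analog source vs. a hissy one.
for label, snr_db in (("low hiss", 60), ("lots of hiss", 20)):
    print(f"{label}: about {equivalent_bits(snr_db):.0f} bits of equivalent resolution")

On those assumed numbers, the quiet analog source is worth roughly 10 bits at that low level, better than the 3- or 7-bit digital cases, while the hissy one is down around 3 bits, i.e. no better.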
We probably agree that the best scenario is a 24-bit original recording, played back through a DAC that accepts 24 bits. The extra 8 bits of word length add roughly 48 dB of resolution below the 16-bit floor, so the 3 bits left in the original 16-bit recording become 11 bits in the 24-bit recording, and the 7 bits become 15 bits, yielding much better resolution. But if you start with a 16-bit CD recording, whether an original 16-bit capture or one reduced from the 24-bit master, you are left with only 3 or 7 bits at those low signal levels, which is pretty bad.
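The same 6-dB-per-bit arithmetic reproduces those numbers. The low-level passages below are my assumption about where your 3- and 7-bit examples would sit (about 78 dB and 54 dB below full scale):

def bits_remaining(word_length, level_dbfs):
    # Bits left to describe a signal this far below full scale, at ~6 dB per bit.
    return max(0.0, word_length - (-level_dbfs) / 6.02)

for level in (-78, -54):    # hypothetical low-level passages
    print(f"{level} dBFS: ~{bits_remaining(16, level):.0f} bits at 16-bit, "
          f"~{bits_remaining(24, level):.0f} bits at 24-bit")

That gives roughly 3 and 7 bits for the 16-bit system, and roughly 11 and 15 bits for the 24-bit system, which is where my figures came from.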