16 bits specifies 65,536 levels, approximately 0.00146 decibel per level. I've read conflicting accounts. What's the smallest level change the human ear is capable of discerning?
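(As a point of reference, the 0.00146 dB figure presumably comes from taking 16-bit PCM's theoretical dynamic range, roughly 20*log10(2^16) ≈ 96.3 dB, and dividing it evenly across the 65,536 levels. The short Python sketch below is purely illustrative and assumes that interpretation.)

```python
import math

# 16-bit PCM offers 2^16 = 65,536 quantization levels
levels = 2 ** 16

# Theoretical dynamic range: ratio of full scale to one LSB, in dB
dynamic_range_db = 20 * math.log10(levels)      # about 96.33 dB

# Spreading that range evenly across the levels gives roughly the figure quoted above
db_per_level = dynamic_range_db / levels        # about 0.00147 dB per level
print(f"{dynamic_range_db:.2f} dB / {levels} levels = {db_per_level:.5f} dB per level")
```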
"16 bits specifies 65,536 levels, approximately 0.00146 decibel per level."

Actually it shouldn't be looked at that way, because the CD medium uses linear encoding, while the dB scale is logarithmic. For example, the difference in dB between the maximum possible digital value (65,535 in decimal) and one "level" less than that (perhaps more properly expressed as one LSB increment less than that, referring to the voltage increment corresponding to the Least Significant Bit) is:

20*log10(65535/65534) = 0.000132 dB

In contrast, the difference in dB between a value of 1 LSB increment above zero and a value of 2 LSB increments above zero is:

20*log10(2/1) = 6.02 dB

So the number of "dB per level" varies very widely depending on the specific levels that are being considered.

"What's the smallest level change the human ear is capable of discerning?"

What matters in this context is not the perception of changes in level, but our ability to perceive, among other things, differences in the RELATIVE amplitudes of the harmonics and other spectral components that collectively constitute a note. Our hearing mechanisms are far more sensitive to those kinds of differences, which affect timbre for one thing, than they are to simple volume changes.

Speaking more generally, many of the recent posts in this thread have been excellent, IMO -- too many to cite individually. Ralph's point about analog hiss being much less objectionable than its digital counterpart (quantization noise resulting from the limited number of bits per sample) is a very good one, of course. It should be mentioned, though, that careful application of dither in the digital recording process can go a long way toward minimizing that issue. And along the lines of Ralph's comment, careful "normalization" of volume levels during the recording process can minimize or eliminate the extent to which bits are sacrificed as a result of, for instance, overly conservative headroom allowances.

It should also be pointed out that the 110 dB or so of dynamic range that a high-quality analog tape machine may be able to provide is considerably greater than what can be put onto and retrieved from vinyl, as well as being more than what can be supported by most listening environments, and more than what is required by most music.

A bottom-line point, IMO: Although I haven't yet gotten into hi rez, my suspicion, given how good SOME Redbook CDs can sound, is that 24/192, if well implemented in both the recording and playback parts of the chain, should be good enough to deliver digital's full potential in terms of perceived accuracy (which is the subject of the thread). On the other hand, the subjective preferability of that potential vs. vinyl at its best is another question altogether, about which opinions will obviously differ.

Best regards,
-- Al
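(To put numbers on Al's point that the dB width of each quantization step varies with level in linear PCM, here is a small illustrative Python sketch; it simply reproduces the two calculations above for adjacent 16-bit code values near full scale and near the bottom of the range.)

```python
import math

def db_step(n: int) -> float:
    """dB difference between adjacent linear PCM code values n and n+1."""
    return 20 * math.log10((n + 1) / n)

# Near full scale, one LSB is a tiny fraction of the signal
print(f"65534 -> 65535: {db_step(65534):.6f} dB")   # about 0.00013 dB

# Near the bottom of the range, one LSB doubles the amplitude
print(f"    1 ->     2: {db_step(1):.2f} dB")       # about 6.02 dB
```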
24/192 would be a safe bet, I would say; an insurance policy, perhaps, more than anything, with a price. 16/44 is pretty good: good enough for most, but it does cut it somewhat close, at least for younger, better ears, and technology today is capable of better. My DAC cannot do 24/192 but can handle some lesser hi-rez formats. I need to give HDtracks a try.
Al, Ralph,

How about DSD? In a direct transfer --no remixing or remastering-- of an AAA, 1/2", half-track, 15/30 ips reel master, which product/process, Redbook CD or DSD64, would yield the nearest fidelity to the source/master tape? Would the answer change if the evaluation were made in a 64-bit DAC and DSD128 environment?

Thanks for your consideration,
Sam