The way I understand ALAC, it is 100% lossless but runs a compression algorithm to reduce file size. Think of a self-extracting zip file that automatically unzips itself in RAM every time you play it, while the copy on disk stays compressed. FLAC, I think, is similar, but with a user-selectable compression level (0 through 8), whereas ALAC is the Apple version, which invariably means two things: the compression level is fixed because Apple picks it for you, and FLAC isn't supported by Apple's software. (And by "fixed" I mean not user-adjustable; I think the compression ratio ALAC actually achieves varies quite a bit, it's just worked out behind the scenes by Apple's encoder depending on the nature of the track. Again, think zip files: depending on the nature and density of the file, it will be more or less compressible.)
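If it helps, here's a toy sketch of that "unzips itself on the fly" idea in Python, using plain zlib rather than ALAC (so purely an analogy, not the actual codec): the compressed copy is smaller, how much smaller depends on how "busy" the data is, and decompressing gives back exactly the original bytes.

```python
import os
import zlib

# Toy analogy only -- this is zlib, not ALAC, but the principle is the same:
# smaller on disk, bit-identical once decompressed.
simple = bytes(1_000_000)          # silence-like data, extremely compressible
busy = os.urandom(1_000_000)       # noise-like data, barely compressible at all

for name, original in [("simple", simple), ("busy", busy)]:
    compressed = zlib.compress(original)
    restored = zlib.decompress(compressed)
    assert restored == original    # lossless: every bit comes back
    print(f"{name}: {len(original):,} -> {len(compressed):,} bytes "
          f"({len(compressed) / len(original):.1%} of original)")
```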
The only theoretically possible source of a sonic difference between ALAC and an uncompressed lossless format (AIFF / WAV) is the processor burden of decompressing the ALAC files every time you access them. Once upon a time that may have mattered more, but with current processors I expect it really doesn't. (Although folks do still argue about it.)
Finally, the explanation for different bitrates on lossless files that has made the most sense to me is that the bitrate is just file size divided by the length of the track (kilobits per second, kbps, to be precise). So if you compress a track, making the file smaller, it will read as having a lower bitrate compared to the same track uncompressed. But lossless / bit-perfect is just that, compression or no, so the bitrate stat for a lossless track is really pretty much meaningless. Put differently, track density or complexity may well have an impact on the compression efficiency and ratio, so there's your source of variability, but all the bitrate represents is that result (compressed file size) divided by length. Nothing to lose sleep over.
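To put numbers on that, here's the arithmetic as a few lines of Python (the 60% figure below is just an assumed compression ratio for illustration, not anything ALAC guarantees):

```python
# bitrate (kbps) = file size in kilobits / duration in seconds
def average_bitrate_kbps(size_bytes: int, duration_seconds: float) -> float:
    return size_bytes * 8 / 1000 / duration_seconds

# A 4-minute, 16-bit / 44.1 kHz stereo track:
duration = 4 * 60
wav_bytes = 44_100 * 2 * 2 * duration       # samples/s * bytes/sample * channels * seconds
alac_bytes = int(wav_bytes * 0.60)          # assume the lossless encoder got it down to 60%

print(f"WAV:  {average_bitrate_kbps(wav_bytes, duration):.0f} kbps")
print(f"ALAC: {average_bitrate_kbps(alac_bytes, duration):.0f} kbps (same audio, just a smaller file)")
```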
Flip side: when you pick a bit rate as the controlling factor for a track's compression, rather than anything to do with the track itself, it's kind of the tail wagging the dog. You've imposed an arbitrary target, mashed the track through it, and then discarded everything else so that your predetermined kbps = X equation comes out to equal X. What's left is only what wasn't lost, i.e. deleted, in order to hit that arbitrary number (and hence the opposite of lossless). This, in turn, is I expect why variable bit rate makes sense: it gives the algorithm discretion (flexibility?) to take the nature and complexity of the track into account, adjusting the bit rate from second to second in accordance with what is actually going on, instead of just hacking and slashing to hit an arbitrary number, while still coming out with an average of X to meet the predetermined outcome. (OK, that last one was pure guesswork.) So, for ALAC, bitrate is an arbitrary number, but a meaningless arbitrary number. While, for a lossy format, bit rate is still an arbitrary number, but a fantastically meaningful one, because it is what was chosen to determine how much of the original lossless file survived.
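And here's a toy sketch of that CBR-vs-VBR averaging idea (emphatically not a real encoder, just the arithmetic): either every second gets the same fixed number, or busier seconds get more and quieter seconds get less, scaled so the overall average still lands on the target.

```python
# Made-up per-second "complexity" for an 8-second clip; a real encoder would
# derive something like this from the audio itself.
complexity = [0.2, 0.3, 1.0, 0.9, 0.4, 0.1, 0.8, 1.0]
target_kbps = 256

# CBR: every second gets the target, regardless of what's going on.
cbr = [target_kbps] * len(complexity)

# VBR: allocate in proportion to complexity, rescaled so the average still hits the target.
scale = target_kbps * len(complexity) / sum(c * target_kbps for c in complexity)
vbr = [c * target_kbps * scale for c in complexity]

print("CBR per second:", [round(b) for b in cbr])
print("VBR per second:", [round(b) for b in vbr])
print("Average of VBR:", round(sum(vbr) / len(vbr)), "kbps")
```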