I find your premise interesting, but I need a bit more education about the recording process. Is the recording volume supposed to be calibrated at 83 dB @ 1000 Hz before every recording session? So that when it is played back at 83 dB, it matches exactly what the engineer heard?
What happens if the calibration is off? Say the tape clipped or saturated at 83 dB, so the engineer backed off to 80 dB. Would that recording then be too loud when played back at 83 dB, and no longer true to the source? And are you re-creating the illusion of a live performance, or of the master tape? A solo singer at 83 dB may be true to the source, but a different volume might be closer to what a real person would actually sound like.
It appears from your long post that you have thought this out completely, and it seems to work for you. For others, I found an article by a recording engineer that explains some of the compromises made during the recording process (it also explains why I always had to get up partway through a record to turn it up):
http://www.barrydiamentaudio.com/loudness.htm
"The history of the loudness wars can be traced back to the 1970s when vinyl mastering engineers started elevating the levels at the start of each side. This added to the initial impact of the sound as the record started to play. With vinyl, the amount of playback time available on one side of a record is directly related to how loud the record is cut. The louder the signal, the shorter the side. Since cutting the entire side at the elevated level would result in the available space running out before the music ended, the levels were cheated back down to "normal" after the first 30 seconds or so had elapsed.
The advent of the Compact Disc meant recording time was no longer related to recorded levels, so engineers could turn it up and leave it that way for the duration of the disc. Digital however brought its own limits to how loud the signal could be. Unlike analog tape and disks, which reached their overload (and hence distortion) point gradually as the level increased, digital has a maximum that can't be exceeded without resulting in gross distortion."
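That last point about digital's hard ceiling versus tape's gradual overload is easy to see with a quick sketch. This isn't from the article, just my own rough illustration: the tanh curve below is only a crude stand-in for tape compression, not a real tape model.

```python
# Rough illustration: digital hard clipping vs. a simplified "analog-style"
# gradual saturation. The tanh curve is a crude stand-in for tape, nothing more.
import numpy as np

fs = 48000                                   # sample rate in Hz
t = np.arange(fs) / fs                       # one second of samples
tone = 1.5 * np.sin(2 * np.pi * 1000 * t)    # 1 kHz tone pushed ~3.5 dB over full scale

digital = np.clip(tone, -1.0, 1.0)           # digital: everything above the maximum is flattened
analog = np.tanh(tone)                       # "analog": the peaks round off gradually instead

# Compare how the two behave when the signal exceeds the nominal maximum
print("samples flattened by digital clipping:", np.sum(np.abs(digital) >= 1.0))
print("peak level after gradual saturation: %.3f" % np.max(np.abs(analog)))
```

The hard-clipped version turns the tops of the waveform into flat shelves (the gross distortion the article mentions), while the saturated version just rounds them over, which is why analog overload creeps in rather than slamming into a wall.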