Let's say that full power corresponds to an SPL at the listening position in the vicinity of 110 dB. In that situation, -115 dBV at the amplifier inputs (taking full power to correspond to a 0 dBV input) would result in an SPL at the listening position of 110 - 115 = -5 dB, surely not audible.

It's important to remember that the noise voltage from every noise mechanism in every part of every piece of electronics adds up, in the fashion of the square root of the sum of the squares. For the simplest analysis of a single opamp gain stage, there are seven:
1. Johnson noise from source impedance of non-inverting input
2. Non-inverting input's input noise voltage
3. Non-inverting input's input noise current, times its source impedance
4, 5, 6. Same as 1, 2, & 3 for the inverting input
7. Johnson noise from the output "build-out" resistor.
The noise voltages from 1 through 6 are multiplied by the circuit's noise gain (usually simplified to be the circuit's closed-loop voltage gain), and then added together (square root of the sum of the squares). Any one of these sources by itself is a pretty small number, but every single one of them (from every stage) contributes to the final result.
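Here's a minimal sketch of that bookkeeping in Python. Every component value and noise density below is an illustrative assumption (a quiet opamp, 1k source impedances, 20 dB of gain), not a number from the discussion above; substitute the datasheet figures for your own parts.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # temperature, K
BW = 20_000.0        # audio bandwidth, Hz

def johnson_noise(r_ohms):
    """RMS Johnson noise voltage of a resistance over the bandwidth BW."""
    return math.sqrt(4 * K_B * T * r_ohms * BW)

# Illustrative assumptions: 4 nV/rtHz voltage noise, 0.5 pA/rtHz current
# noise, 1k source impedance at each input, a 100-ohm build-out resistor,
# and a noise gain of 10 (20 dB).
rs_pos = rs_neg = 1_000.0
e_n = 4e-9 * math.sqrt(BW)
i_n = 0.5e-12 * math.sqrt(BW)
r_buildout = 100.0
noise_gain = 10.0

# Sources 1-6 are referred to the input, so each is multiplied by the
# noise gain; source 7 (the build-out resistor) sits at the output.
input_referred = [
    johnson_noise(rs_pos),   # 1. Johnson noise, non-inverting source impedance
    e_n,                     # 2. input noise voltage, non-inverting
    i_n * rs_pos,            # 3. input noise current times source impedance
    johnson_noise(rs_neg),   # 4, 5, 6. same three mechanisms, inverting input
    e_n,
    i_n * rs_neg,
]
at_output = [v * noise_gain for v in input_referred] + [johnson_noise(r_buildout)]

# Square root of the sum of the squares
total = math.sqrt(sum(v * v for v in at_output))
print(f"total output noise: {total * 1e6:.1f} uV RMS")   # ~11.5 uV with these values
```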
In the real world, optimising noise performance usually boils down to choosing the input device(s) and their operating parameters so that their voltage noise/current noise characteristics fit the input source impedance, then going through everything else and shaving it down a few dB at a time . . . and when it's done properly, it's the noise of the preceding device or transducer that dominates.
So with my previous attenuator example, it's not the absolute number that matters . . . it's the fact that we've taken the previously insignificant noise mechanism of amplifier source impedance and turned it into a huge one. If we use the common expression of Noise Figure (NF) . . . (the difference between an ideal amplifier and the real one for a given source impedance), sticking the attenuator in the back can change a good amp's NF (for a 150-ohm source) from, e.g., 6 dB to 24 dB! I think in the majority of cases this will be instantly noticeable in a quiet room with no source playing and one's ear near the speaker. But at the very least it seems awfully ham-handed to instantly nullify all the hard engineering work it takes to build a low-noise power amplifier.
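A sketch of that NF arithmetic, with assumed (not quoted) amplifier noise densities and an assumed 18 dB pad; the simplification here is that a resistive pad holding roughly a 150-ohm output impedance regenerates the full Johnson noise of that impedance while the signal drops by the pad loss:

```python
import math

K_B, T, BW = 1.380649e-23, 300.0, 20_000.0

def johnson_noise(r_ohms):
    return math.sqrt(4 * K_B * T * r_ohms * BW)

def noise_figure_db(r_source, en_density, in_density):
    """NF: how far the amp's total input noise sits above the Johnson
    noise of the source impedance alone (an ideal amp adds nothing)."""
    e_rs = johnson_noise(r_source)
    e_n = en_density * math.sqrt(BW)
    i_n = in_density * math.sqrt(BW)
    total = math.sqrt(e_rs**2 + e_n**2 + (i_n * r_source)**2)
    return 20 * math.log10(total / e_rs)

# Illustrative amp: 4 nV/rtHz and 0.5 pA/rtHz, driven from 150 ohms.
nf_bare = noise_figure_db(150.0, 4e-9, 0.5e-12)

# With the pad ahead of the amp, the signal (and the source's own noise)
# drop by the pad loss while the amp still sees ~150 ohms, so the NF
# referred to the original source rises by roughly the attenuation.
pad_loss_db = 18.0
nf_padded = nf_bare + pad_loss_db

print(f"NF without pad: {nf_bare:.1f} dB, with {pad_loss_db:.0f} dB pad: {nf_padded:.1f} dB")
```

With these assumed values the pad costs about 18 dB of NF, the same order of degradation as the 6 dB to 24 dB example above.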
"But isn't it generally considered to be sonically preferable for the preamp's volume control to be operated at higher points within its range, rather than at lower points, to minimize the sonic effects of the volume control mechanism itself?"

In general, I would say no, and definitely not from a noise perspective. The possible exception is if the volume control is operated in such a low range that channel balance and wiper contact resistance become an issue. But in a well-designed conventional (input pot followed by active stage) line preamp, the Johnson noise from the volume control is the dominant noise source. And its output impedance increases as the volume is turned up, until it reaches its maximum value at -6 dB (plus electronic gain).
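A quick sketch of that wiper-impedance behaviour, assuming a linear model of the pot and a low (zero) driving impedance; the 10k value is arbitrary:

```python
import math

def wiper_impedance(r_pot, x, r_source=0.0):
    """Impedance seen looking back into a pot's wiper; x is the wiper
    position from 0 (full attenuation) to 1 (full volume)."""
    top = (1.0 - x) * r_pot + r_source   # wiper up through the pot to the source
    bottom = x * r_pot                   # wiper down to ground
    return top * bottom / (top + bottom) # the two legs in parallel

R = 10_000.0   # a 10k volume pot, driven from a low-impedance source
for x in (0.1, 0.25, 0.5, 0.75, 1.0):
    print(f"{20 * math.log10(x):6.1f} dB setting -> {wiper_impedance(R, x):6.0f} ohms")
# The wiper impedance peaks at R/4 (2.5k here) at the -6 dB (midpoint) setting.
```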
Also, keep in mind that all the noise we've discussed is "post-fader" . . . that is, it's unattenuated by the volume control. So from a noise standpoint, the best way to reduce a conventional preamp's gain by passive, resistive means (assuming 12 dB of reduction, and keeping input/output impedances the same) is to reduce the value of the volume pot to one-quarter of what it was, and insert a resistor in series with the input to bring the impedance back up. Now all the resistor's noise is attenuated along with the signal, and the active gain stage sees a lower source impedance to boot. But if it's the active stage that's noisy . . . this of course won't help.
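A sketch of that quarter-value-pot trick, again with an arbitrary 10k pot. Note that the full-volume divider is (R/4) / (R/4 + 3R/4) = 1/4, which is the assumed 12 dB of reduction:

```python
def wiper_impedance(r_pot, x, r_source=0.0):
    """Same helper as above: impedance seen at the wiper, x = 0..1."""
    top = (1.0 - x) * r_pot + r_source
    bottom = x * r_pot
    return top * bottom / (top + bottom)

R = 10_000.0            # original pot value (illustrative)
r_pot_new = R / 4       # 2.5k pot in the modified version
r_series = 3 * R / 4    # 7.5k series resistor restores the 10k input impedance

worst_before = max(wiper_impedance(R, x / 100) for x in range(101))
worst_after = max(wiper_impedance(r_pot_new, x / 100, r_series) for x in range(101))
print(f"worst-case source impedance: {worst_before:.0f} -> {worst_after:.0f} ohms")
# ~2500 -> ~1875 ohms, and the series resistor's Johnson noise is now
# attenuated by the divider along with the signal.
```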