For the reasons cited, it is much more difficult to obtain a true balanced signal from an MM cartridge as compared to an MC one. Feeding a signal with an imbalance of noise on one phase vs the other to the balanced gain stage will result in the amplification of that noise, i.e., it will not be rejected because it is not identically present on both phases.
Lewm, this is a fair approximation of most of what I was saying. Maybe add to that . . . if the "balanced" input stage doesn't do a bang-up job of eliminating magnetically-induced hum in the real world, then does it really need to be "balanced"? This is of course a question that each circuit designer will have to answer for him/herself. Right now, I'm personally leaning toward "no" . . . but it wouldn't surprise me if I changed my mind in the future.
Why is the noise to which you refer not similarly amplified by an SE topology? (It's that "3dB" boost of the noise that I don't quite get.) Noise is noise(?) In a good SE phono stage for MM, is the "ground" isolated from chassis or earth ground? That would seem to be a good idea.
Well, in this case "noise is not noise". The noise to be avoided in poor grounding is a result of the amplification of ground currents flowing across ground connections of finite, non-zero resistance (that is, everything except superconductors), and should be completely eliminated in a good design. This is what I am referring to in response to Axel's comment below.
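To put a rough number on why those ground currents matter, here's a back-of-the-envelope sketch in Python. All the values (10mA of return current, a 50 milliohm ground trace, a 5mV MM cartridge output) are hypothetical illustrations I've picked, not figures from the discussion:

```python
import math

def ground_hum_mv(ground_current_ma, trace_resistance_mohm):
    """Voltage (in mV) dropped across a ground connection of finite
    resistance by a current flowing through it -- plain Ohm's law."""
    volts = (ground_current_ma / 1000.0) * (trace_resistance_mohm / 1000.0)
    return volts * 1000.0

# Hypothetical case: 10 mA of power-supply return current sharing a
# 50 milliohm ground trace with the signal reference.
hum_mv = ground_hum_mv(10.0, 50.0)   # 0.5 mV of hum injected into "ground"
mm_signal_mv = 5.0                    # typical MM cartridge output level

print(f"hum across trace: {hum_mv:.2f} mV")
print(f"signal-to-hum ratio: {20.0 * math.log10(mm_signal_mv / hum_mv):.1f} dB")
```

With these made-up but plausible numbers the hum lands only about 20dB below the cartridge signal, which is why ground-current routing, rather than the gain stage itself, dominates this particular noise mechanism.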
But the 3dB minimum noise increase from an actively-realized "balanced" input stage is a different noise source altogether - here I am referring to the (mainly) thermal noise from the input semiconductors/tubes and their associated passive components. In a differential "balanced" input there are double the number of devices, each producing its own uncorrelated noise. Uncorrelated noise sources combine on a power basis, so the pair produces a noise voltage √2 times (3dB higher than) that of a single device, while the differential signal level is unchanged . . . so that's where the net 3dB noise increase comes from. (Each side of the differential pair does look into only half the source impedance, but that helps with the source's own thermal noise, not with the device noise.) This is a big reason why the vast majority of low-noise preamplifiers for ANY kind of low-impedance (hence low-noise-voltage) transducer are unbalanced in architecture. But IMO there can be plenty of logic to the idea of swallowing 3dB of extra thermal noise (especially if the circuit is still very quiet) in exchange for the improved hum rejection of a balanced input stage.
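The noise bookkeeping above can be checked numerically. This is just an illustration with arbitrary unit values, not a measurement; the only real content is that uncorrelated noise voltages add as the square root of the sum of squares:

```python
import math

def rms_sum(*noise_voltages):
    """Combine uncorrelated noise voltages: powers add, so the total
    voltage is the root of the sum of squares."""
    return math.sqrt(sum(v * v for v in noise_voltages))

def db(ratio):
    """Voltage ratio expressed in decibels."""
    return 20.0 * math.log10(ratio)

en = 1.0                       # input-referred noise of one device, arbitrary units
se_noise = en                  # single-ended stage: one device
bal_noise = rms_sum(en, en)    # differential stage: two uncorrelated devices

print(f"balanced/SE noise ratio: {bal_noise / se_noise:.4f}")   # ~1.4142
print(f"noise penalty: {db(bal_noise / se_noise):.2f} dB")       # ~3.01 dB
```

The signal seen differentially is the same in both cases, so the √2 (≈3dB) increase in noise voltage is the whole of the penalty.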
The inevitable and unavoidable 'ground contamination' influence of capacitors etc. is what makes the other argument for differential/balanced vs. unbalanced.
Hi again Axel :) - There should be no such "ground contamination", regardless, in a good circuit design and layout. This is "simply" a matter of the designer carefully analyzing the ground current flow in each part of the circuit, and understanding and considering the subtleties of such things as careful local bypassing, power-supply impedances, and ground-trace routing. But judging by many of the commercial products I see, this seems to be a particular challenge, and differential-balanced circuits can sometimes be more forgiving of these sorts of faults.
Why? Because the differential circuit also cancels even-order harmonics in the process of common-mode rejection, so you wind up with a bias toward odd-order harmonics, which is not so 'natural' to our ear.
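The even-order cancellation is easy to show with a toy model. Here both halves of a differential stage share the same hypothetical transfer curve (the 0.10 and 0.01 distortion coefficients are invented for illustration), the halves are driven in antiphase, and the outputs are subtracted:

```python
def stage(v):
    """Toy nonlinear transfer curve: unity gain plus hypothetical
    2nd-order (0.10) and 3rd-order (0.01) distortion terms."""
    return v + 0.10 * v**2 + 0.01 * v**3

def differential(v):
    """Two matched halves driven in antiphase, outputs subtracted.
    Algebraically: stage(v) - stage(-v) = 2v + 0.02*v**3 -- the
    even-order (v**2) term cancels, the odd-order term survives."""
    return stage(v) - stage(-v)

for v in (0.5, 1.0):
    print(f"input {v}: differential output {differential(v)}")
```

Note what's left over: the second-order term is gone, but the third-order term is doubled along with the signal. That's the residue behind the "bias toward odd-order harmonics" described above, and it only cancels fully when the two halves are perfectly matched.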
I'm of the opinion that low distortion is only one of several required characteristics for a good-sounding circuit . . . but I feel that in a high-quality phono preamplifier, ALL harmonic and IM distortion should be completely and totally buried in the noise floor, which in itself should be very low. Yes, low-order and even-order products are less disconcerting to the ear . . . but who wants any of it at all?