Which Mono Cartridge at around $1,300.00?


I'm in the process of upgrading my well-cared-for Thorens TD145. I started by soldering in WireWorld phono cable and getting a basic tune-up. I want to replace my Grado ME+ mono cartridge with a substantially better mono cartridge. Currently, the tonearm is stock. My records are classical (orchestral, chamber, vocal, etc.) dating from the 1940s and 1950s, so I've been cogitating on the Ortofon SPU Mono GM MKII or a low-output Grado (e.g., the Sonata Reference 1). My phono stage is the ASR Mini Basis Exclusive. Any and all suggestions would be greatly appreciated.
Thanks!
goofyfoot
05-01-12: Lewm
The factors that determine the need for a SUT are (1) cartridge signal voltage output, and (2) phono stage gain. Period. If you have enough of (1) and (2), you don't need a SUT.
Lew, as I'm sure you realize but others may not, phono stage signal-to-noise ratio is also a factor. I doubt that would be an issue with the ASR Mini Basis Exclusive, but it very well could be with some phono stages. All gains of 60 dB, for example, are not created equal, and different 60 dB phono stages will produce differing amounts of background hiss. And a 60 dB gain stage will sometimes be significantly noisier than a 20 dB SUT used with a 40 dB gain stage.
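
To put rough numbers on the gain side of that (leaving noise aside for the moment), here is a minimal sketch; the cartridge output and line-level target are assumed figures for illustration, not specs of any particular unit:

    import math

    def db_gain(v_out, v_in):
        # Voltage ratio expressed in dB
        return 20 * math.log10(v_out / v_in)

    cart_mv = 0.3       # assumed LOMC output in mV
    target_mv = 300.0   # rough line-level target in mV

    needed_db = db_gain(target_mv, cart_mv)   # total gain required
    sut_db = db_gain(10, 1)                   # a 1:10 SUT provides 20 dB
    print(round(needed_db), round(sut_db), round(needed_db - sut_db))
    # -> 60 20 40: a 1:10 SUT ahead of a 40 dB MM stage gives the same total
    #    gain as a 60 dB MC stage; what differs between the two is the noise.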

Further complicating matters is the fact that S/N ratio specifications from different manufacturers often can't be compared directly, because they may be based on different reference levels and different frequency weightings, with those levels and weightings not even being indicated in many cases.
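
As a made-up illustration of the reference-level part of that (weighting differences, by contrast, can't be reconciled by simple arithmetic): the same physical noise floor quoted against a 5 mV MM-level reference looks 20 dB better than when quoted against a 0.5 mV MC-level reference.

    import math

    def renormalize_snr(snr_db, old_ref_mv, new_ref_mv):
        # Re-express an S/N figure against a different reference input level.
        # Only valid if the noise itself and the weighting are unchanged.
        return snr_db + 20 * math.log10(new_ref_mv / old_ref_mv)

    # Hypothetical spec: 80 dB S/N referenced to a 5 mV input
    print(round(renormalize_snr(80, 5.0, 0.5), 1))   # -> 60.0 dB referenced to 0.5 mV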

As I say, that is most likely not an issue for the OP, but the references to having adequate gain that are frequently seen in discussions of LOMC's strike me as only telling part of the story, and as being potentially misleading.

Best regards,
-- Al
Yes, I guess S/N would have an effect on whether one would prefer to use the highest gain available from the phono stage or set it to lower gain and use a SUT. I am SUT-less myself, never owned one. My revised Atma-sphere MP1 phono has, if anything, more gain than I ever need for any cartridge. I am thinking of ways to reduce gain, in fact.

Anyway, Goofy, your quote:
"3.3 Adjusting the gain
The gain can easily be adjusted on the 6 fold DIP switches „Gain Adjust“. The switches can be combined to get higher gain. The minimal gain of +30 dB is obtained with all Dip switches in OFF, the Maximum gain of +72 dB is obtained by putting all DIP switches to „ON“."

72 dB is more than enough gain for anything you might choose. In fact you can probably cut back a bit from that maximum amount of gain, using the DIP switches.
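
For anyone wanting to sanity-check a DIP setting on paper first, the manual's scheme boils down to a base gain plus the sum of whatever switches are ON. The per-switch values in the sketch below are hypothetical placeholders (the real ones are in the ASR manual); only the +30 dB floor and +72 dB ceiling come from the quoted text.

    BASE_GAIN_DB = 30                   # all switches OFF, per the quoted manual text
    SWITCH_DB = [2, 4, 6, 8, 10, 12]    # hypothetical per-switch steps summing to 42 dB

    def total_gain(switches_on):
        # switches_on: list of six booleans, one per DIP switch
        return BASE_GAIN_DB + sum(db for db, on in zip(SWITCH_DB, switches_on) if on)

    print(total_gain([False] * 6))  # -> 30, the minimum
    print(total_gain([True] * 6))   # -> 72, the maximum
    print(total_gain([True, True, False, False, False, False]))  # -> 36 with the first two (hypothetical) switches on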
Right, dead on, Lewm. It's even recommended by ASR engineer Herr Schaefer that one use the least amount of gain possible in order to reduce noise, which to me just seems like practical sense. For my ears, I want clarity, balance, and neutrality, but I don't want sound waves blaring and bouncing around in my flat. Currently, with the Grado ME+, I have the gain set at +12 dB, which I thought would be too high, but it was at that setting that the music came out from under its rock. There is nothing to gain (no pun intended) by increasing it. I'm that way with my QUAD 2905s as well; once I step up to that place, it's fine.
I would be interested in Al's thoughts on this, but my subjective impression is that with an "excess" of gain and the attenuator therefore in action, background noise typically seems to be lower (and dynamics much better) than when the gain setting is closer to the "minimum" necessary such that the attenuator is essentially out of the picture (meaning one has to turn the volume control nearly all the way up for adequate SPLs). I am not saying that my attenuator or any attenuator enhances sound quality; I am saying that the sense of musical ease and background silence seems superior with an excess of gain, or maybe what I am describing could better be thought of as the "correct" amount of gain. There is a lack of strain and better S/N, subjectively. This is a subjective judgement, not based on actual measurements of S/N. I know there are a lot of purists who would like to build equipment with "just enough" gain so as to obviate even the need for an attenuator; I don't hear it that way.
Hi Lew,

I doubt that it’s possible to generalize in a meaningful way, as every design is different, and has its own set of tradeoffs.

One thing that would seem safe to say, though, is that the signal-to-noise ratio of the signals that are ultimately presented to the speakers, and hence the amount of background hiss that is heard, can’t be any better than what it is at the front end of the signal path, which is to say the ratio of the output voltage of the cartridge to the noise that is present in the circuitry at the front end of the phono stage (aside from common mode noise that may be rejected if the phono stage is balanced). Noise that is present at that point will, along with signal, be amplified by every amplification stage that follows, and the signal level at that point will be lower than at every subsequent point in the chain.
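
A rough numerical sketch of that bound, with every value invented purely for illustration (uncorrelated noise sources are assumed to add as root-sum-square):

    import math

    def snr_db(signal, noise):
        return 20 * math.log10(signal / noise)

    cart_signal_uv = 500.0    # 0.5 mV cartridge output
    front_noise_uv = 0.5      # assumed input-referred noise of the phono front end

    print(round(snr_db(cart_signal_uv, front_noise_uv), 1))
    # -> 60.0 dB: the best the rest of the chain can ever do

    # A downstream stage amplifies signal and existing noise equally and adds its own,
    # so S/N can only stay the same or get worse from here.
    gain = 100.0              # 40 dB of downstream gain
    added_noise_uv = 20.0     # noise contributed by that stage, referred to its output
    signal_out = cart_signal_uv * gain
    noise_out = math.hypot(front_noise_uv * gain, added_noise_uv)
    print(round(snr_db(signal_out, noise_out), 1))
    # -> about 59.4 dB, slightly below the 60 dB front-end figure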

A key factor with respect to your question, that I don’t have a specific feel for, is how much variation there will tend to be in the S/N performance of an adjustable gain phono stage as its gain is adjusted. My guess is that in general if the gain setting is increased by X db, while remaining within reasonable bounds relative to the cartridge output, the S/N performance of the phono stage will degrade by considerably less than X db, and perhaps not at all in some cases. That would be consistent with your observations concerning background noise, because if the gain setting can be increased without significant S/N degradation at that point in the signal path, the lessened significance of noise generated by downstream circuit stages, between that point and the volume control (relative to the increased signal level at those points), might result in a net improvement in S/N. It would also lessen the impact of noise that may be picked up at the interface between the phono stage and the preamp, as a result of ground loop or RFI/EMI effects.
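
Here is a toy model of that guess, with every figure assumed rather than measured: the front-end input-referred noise is held constant across gain settings, and everything between the phono stage and the volume control is lumped into one fixed noise floor.

    import math

    def overall_snr_db(cart_uv, front_noise_uv, phono_gain_db, downstream_noise_uv):
        # Signal and front-end noise both see the phono gain; the downstream noise
        # is fixed at the point just ahead of the volume control (RSS summation).
        g = 10 ** (phono_gain_db / 20)
        return 20 * math.log10(cart_uv * g / math.hypot(front_noise_uv * g, downstream_noise_uv))

    cart_uv = 500.0              # 0.5 mV cartridge
    front_noise_uv = 0.5         # assumed constant input-referred noise
    downstream_noise_uv = 300.0  # assumed fixed hiss from the stages that follow

    for gain_db in (54, 60, 66):
        print(gain_db, round(overall_snr_db(cart_uv, front_noise_uv, gain_db, downstream_noise_uv), 1))
    # With these numbers the overall S/N improves as the gain setting rises, because the
    # fixed downstream noise matters less relative to the larger signal. In a real unit
    # the front-end noise may also grow somewhat with the gain setting, eroding part of that.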

But of course it might be a completely different story if what is being compared are DIFFERENT phono stages, whose gains also differ by X db, but whose S/N performances are not similar.

As far as dynamics are concerned, a number of additional unpredictable variables may come into play. One of those is the distortion performance of the various circuit stages in the chain, and how that distortion performance is affected by signal level. You may have seen Ralph (Atmasphere) comment in the past, in a different context (that of SET amplifiers), that since the 5th, 7th, and 9th harmonics of a note's fundamental frequency are significant determinants of our perception of loudness, an increase in those distortion components that occurs primarily on high volume transients will result in a subjective perception of increased dynamics. Since line level and phono level stages almost always operate Class A, and consequently there is no crossover distortion that would assume greater significance as signal level decreases, it seems possible that the effect he described could occur in those stages, as a result of the increase in non-linearity that may occur at high signal levels. So in some cases an increase in perceived dynamics might be the result of low level odd harmonic distortion produced by the circuit stages preceding the volume control, when those stages are asked to handle higher level signals as a result of a gain increase further upstream.
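
That mechanism can be shown with a simple numerical experiment; the tanh soft nonlinearity below is just a stand-in for a Class A stage running out of linearity at high levels, not a model of any actual circuit.

    import numpy as np

    fs, f0, n = 48000, 1000, 48000      # 1 Hz per FFT bin with these values
    t = np.arange(n) / fs

    def odd_harmonic_levels_db(amplitude):
        # Drive a soft nonlinearity with a sine and measure the odd harmonics
        # relative to the fundamental.
        y = np.tanh(amplitude * np.sin(2 * np.pi * f0 * t))
        spectrum = np.abs(np.fft.rfft(y * np.hanning(n)))
        return {h: 20 * np.log10(spectrum[h * f0] / spectrum[f0]) for h in (3, 5, 7, 9)}

    for amp in (0.1, 0.5, 1.0):          # low, medium, high drive (arbitrary units)
        print(amp, {h: round(v, 1) for h, v in odd_harmonic_levels_db(amp).items()})
    # The 5th, 7th, and 9th terms sit far below the fundamental at low drive and climb
    # steeply as the level rises, which is the behavior described above.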

Perhaps Jonathan or Ralph will comment further on your question, as I’m sure they could speak to it more knowledgeably than I can.

Best regards,
-- Al