4yanx,
I certainly agree that the "belt drive zealots" do exist, and I number among them, although I am quite willing to consider any table that sounds better than what I have now.
However, it would have to sound better, and not just have some particularly low "wow and flutter" measurement, to get my attention.
Perhaps I'm calling this incorrectly, but it certainly appears to me that there is an underlying meaning to this measurement activity (and maybe not so "underlying" at that). Generally, the root of it is to make some specification the determining factor in purchasing, so as to "make it easier" to decide what to buy. Such as: "this turntable 'X' has an incredibly low measured 'wow and flutter' figure, which certainly must mean it sounds better than a turntable with a slightly higher measured level." That's what concerns me.
At least, that is what it led to in the past, and to some extent, it still is used by some for that.
Please let me elaborate.
When measurements become the benchmark for purchasing decisions, companies then build their equipment to do well at the measurement protocol, and not necessarily to sound good. This is because, once a "spec race" occurs, appearing very good at that spec is what improves a manufacturer's bottom line and makes the sales.
There is historical proof of this, such as the "spec wars" that occurred in the '70s and '80s over THD specifications in amplifiers.
The "THD spec" became the benchmark for what amplifier would be purchased by a consumer, with the ostensible "reason" being that if the THD was lower, or even virtually non-exisitent, that the amplifier would be the best-sounding one, or even "perfect" because there was virtually no distortion measured, IN THE MEASUREMENT PROTOCOL.
As we all now know, this protocol consisted of comparing signal-in to signal-out, with the difference termed "distortion", WHEN TESTED INTO AN UNCHANGING 8-OHM TEST RESISTOR AS THE OUTPUT LOAD, WITH AN UNCHANGING, STEADY SINE-WAVE SIGNAL AT THE INPUT.
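For anyone who likes to see the arithmetic, here is a rough Python sketch of what that bench figure amounts to: the power in the harmonics relative to the fundamental, measured on a steady test tone. The 1 kHz tone and the harmonic levels here are made-up numbers for illustration, not any real amplifier's measurements.

```python
import numpy as np

fs = 48000                       # sample rate (Hz)
f0 = 1000                        # steady 1 kHz test tone, as in the classic protocol
t = np.arange(fs) / fs           # one second of samples (1 Hz bin spacing)

# Hypothetical "amplifier output" into a fixed resistive load:
# the test tone plus small 2nd and 3rd harmonics standing in for distortion.
v_out = (np.sin(2 * np.pi * f0 * t)
         + 0.001  * np.sin(2 * np.pi * 2 * f0 * t)   # -60 dB 2nd harmonic
         + 0.0005 * np.sin(2 * np.pi * 3 * f0 * t))  # -66 dB 3rd harmonic

spectrum = np.abs(np.fft.rfft(v_out))
fundamental = spectrum[f0]
harmonics = spectrum[[2 * f0, 3 * f0, 4 * f0, 5 * f0]]

# THD: RMS sum of the harmonics divided by the fundamental.
thd = np.sqrt(np.sum(harmonics ** 2)) / fundamental
print(f"THD = {100 * thd:.4f}%")   # about 0.11% for these made-up levels
```

The point being: it is one number, taken under one static condition, and therefore trivially easy to optimize for.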
Please forgive the history lesson, for those who already are aware of this.
The result was that amplifier manufacturers began dumping huge amounts of negative feedback (local and/or global) into the amps, so that the measured distortion became so ridiculously low that it was considered far below anything anyone could ever perceive, and thus the signal output was considered "perfect". Naturally, at no time did sound quality ever intrude into this quest for "the best specs", because whatever came out of a "perfect amplifier" would surely be "perfect", right? As we know now, that was terribly wrong.
The measurement protocols were not designed to measure the amplifier while it was playing music. Into a real loudspeaker, which is a reactive, constantly changing load, and fed the fast transients of actual music, heavily fed-back amplifiers could misbehave in ways the steady-state bench test never revealed. So the feedback ruined the sound quality of the amps, and it became apparent that some amps that "tested terribly" sounded remarkably better than the "perfect" amps.
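To put a number on the feedback part: the textbook result is that negative feedback divides both the gain and the measured steady-state distortion by the loop-gain factor (1 + A*beta). A toy calculation, with invented figures that are not from any real amplifier, shows how easily the spec could be driven down:

```python
# Toy numbers, purely for illustration.
A_open = 100000      # open-loop gain
beta = 0.0099        # feedback fraction, chosen to give roughly 100x closed-loop gain
thd_open = 5.0       # open-loop THD in percent (deliberately "terrible")

loop_factor = 1 + A_open * beta
gain_closed = A_open / loop_factor    # classic closed-loop gain A / (1 + A*beta)
thd_closed = thd_open / loop_factor   # distortion divided by the same factor

print(f"closed-loop gain: {gain_closed:.1f}")   # about 100.9
print(f"steady-state THD: {thd_closed:.4f}%")   # about 0.005%, spec-sheet gold
```

Note that this division only holds for the steady sine wave the protocol feeds in; it says nothing about what happens on a musical transient into a real speaker.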
Trying not to get too verbose, I'll simply say that going back to this kind of mind-set by "leaning" on artificial number specifications is a very dangerous road to embark upon. It leads away from the desired end, musical performance to the ear, and toward the end of maximizing performance on a test procedure.
Those who do not learn from history are destined to re-live it.
That is all.
Twl out.