DIY balanced interconnects


I want to build some balanced interconnects.
1. Has anyone compared Switchcraft, Vampire and Neutrik XLR plugs?
2. Any comments on Mogami Neglex 2534 vs. Vampire CCC-II vs. Oyaide PA-02 cables?
3. Should the ground shield on these twinax cables be connected at both ends, only at the source end, or only at the preamp end?
Thanks for your comments.
oldears
Atmasphere, I like your paper . . . it's interesting and eloquent; a good read. And I understand how it's an alluring perspective, especially as far as speaker impedance is concerned, for the manufacturer of an amplifier with a high output impedance. It's just too bad that the historical data doesn't support it - but we're never going to convince each other otherwise, so I'll drop it.

But maybe you could shed some light on why, if you're advocating a power-transfer approach (as is common on video and RF transmission systems), you're not using something like 110 - 150 ohms (source and load), the intrinsic impedance of a typical balanced interconnect? Because that's how a power-transfer system is supposed to work, no?
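
For reference, the "power transfer" idea is just the maximum-power-transfer theorem: the power delivered into a load peaks when the load resistance equals the source resistance. A quick Python sketch with made-up values, nothing specific to any product:

import numpy as np

# Power into a load: P = V^2 * R_L / (R_S + R_L)^2, which peaks when R_L = R_S.
# All values below are illustrative only.
V = 2.0                                    # source EMF, volts RMS
R_S = 600.0                                # source output resistance, ohms
R_L = np.array([150.0, 300.0, 600.0, 1200.0, 10000.0])   # candidate load resistances

P = V**2 * R_L / (R_S + R_L)**2
for r, p in zip(R_L, P):
    print(f"R_L = {r:7.0f} ohm -> P = {p * 1e3:.3f} mW")
# The matched 600 ohm load receives the most power; a 10k "bridging" load gets far less.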
Kirkus, if you are referring to characteristic impedance, we in fact examined that issue about 20 years ago. The fact of the matter is that if you can determine the actual characteristic impedance of the cable and then terminate it correctly, the result is quite spectacular.

The problem is, you need a Time Domain Reflectometer or the like to make the determination, which is hardly practical for most installations. So there is no standard termination value, as characteristic impedance can vary quite a bit with minor changes in the construction and materials of the cable. This is likely why the industry settled on 600 ohms decades ago. That was clearly no accident.
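
For anyone wanting to see why a single number is hard to pin down: at audio frequencies the lossy-line expression Z0 = sqrt((R + jwL) / (G + jwC)) applies, and the series-resistance term makes Z0 both frequency-dependent and partly reactive. A rough Python sketch with illustrative per-metre constants (not measurements of any particular cable):

import numpy as np

# Lossy-line characteristic impedance: Z0 = sqrt((R + jwL) / (G + jwC)).
# Per-metre constants are illustrative only.
R = 0.08      # ohm/m, series resistance of the pair
L = 0.6e-6    # H/m
G = 1e-9      # S/m, dielectric leakage (essentially negligible)
C = 70e-12    # F/m

for f in (100, 1e3, 10e3, 100e3, 1e6):
    w = 2 * np.pi * f
    Z0 = np.sqrt((R + 1j * w * L) / (G + 1j * w * C))
    print(f"{f:>9.0f} Hz: |Z0| = {abs(Z0):7.1f} ohm")
# |Z0| falls toward sqrt(L/C) (about 93 ohm here) as frequency rises,
# but is several times higher, and partly reactive, through much of the audio band.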

In practice, the 600 ohm standard works quite well and since it was already in place, it seemed to be a matter of picking one's battles. As it is, the fact that our preamp is balanced has been its single biggest marketing problem, so in retrospect I'm glad it's not been any more complicated than it already has been :)

I would have loved to have had more input when we were setting a lot of this up, but at the time we were the only ones who cared about balanced operation.

I can find plenty of historical evidence to support my paper BTW; the Radiotron Designer's Handbook, published by RCA, is a good place to start. If you wish to discuss this further, that would be a topic for another thread or email. In fact I would love to do that; the fact of the matter is I've been looking for someone who can rebut the document ever since it was written (about 3 years ago). No one has been able to do that so far. BTW, it's not about having a tube amp with a high output impedance, it's about whether you use feedback or not - IOW, obeying the rules of human hearing or not.
Well, in order to really get into the specifics of your paper for analysis, there are a couple of issues that plainly need to be separated from each other.

For line-level interconnection, are transmission-line effects a significant factor in domestic hi-fi applications? And what are the major electrical mechanisms that make cables audible?

For the amp-speaker interface, the question is: what are the primary motivations for a speaker designer to choose a particular voice-coil arrangement, cabinet alignment, and crossover network design, the choices that determine a speaker's impedance characteristics?

I feel that these issues can be investigated independently from that unsolvable audio argument - that of negative feedback in amplifier design. But negative feedback is the cornerstone of the perspective you outline in this paper . . . so an effective rebuttal of your paper is impossible without separating this out. How 'bout this . . . can you make the argument work without mentioning feedback?
Well, Kirkus, the issue is actually quite simple. What are the rules of human hearing? Once those are understood, we simply apply physics to create equipment that obeys those rules as closely as possible.

The most important rule is how we perceive loudness, which is done by listening for the 5th, 7th and 9th harmonics. Our ears are so sensitive to these harmonics that we can easily detect alterations of only hundredths of a percent. Audiophiles have words for this alteration: harsh, bright, clinical, brittle... So for starters the design has to honor this rule, as it is the most important.
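
To put "hundredths of a percent" on the scale engineers usually quote, a quick check (plain arithmetic, nothing cable-specific):

import math

# A harmonic at 0.01% of the fundamental, expressed in dB relative to it.
print(f"{20 * math.log10(0.01 / 100):.0f} dB")   # prints -80, i.e. 80 dB below the fundamental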

With regards to cables, transmission-line effects do affect audio cables, both interconnects and speaker cables. The effects can be measured and correlate to listening as well.

Conductor spacing, size, geometry, purity of materials and the choice of materials are the issues that affect both what we hear and characteristic impedance. In interconnects these issues do seem to be less important than they are in speaker cables (interconnects that we measured had characteristic impedances between 40 and 200 ohms; speaker cables varied from 4 to 60 ohms). I have to admit I was quite surprised to discover that characteristic impedance had any audible effect at audio frequencies!
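
To illustrate how spacing and dielectric alone move that number around, here's a rough sketch using the textbook two-wire-line formula Z0 = (120 / sqrt(er)) * arccosh(s / d); it's a high-frequency approximation and the dimensions are made up, but the spread is similar to what's described above:

import numpy as np

# High-frequency Z0 of a parallel-wire pair.
# s = centre-to-centre spacing, d = conductor diameter, er = dielectric constant.
def z0_two_wire(s_mm, d_mm, er):
    return 120.0 / np.sqrt(er) * np.arccosh(s_mm / d_mm)

print(round(z0_two_wire(s_mm=1.6, d_mm=0.8, er=2.3)))   # tight pair, foamed PE  -> ~104 ohm
print(round(z0_two_wire(s_mm=3.0, d_mm=0.8, er=2.3)))   # wider spacing          -> ~158 ohm
print(round(z0_two_wire(s_mm=1.2, d_mm=1.0, er=3.5)))   # fat conductors, PVC    -> ~40 ohm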

I've stayed well away from speaker design. It's easy to put drivers in a box and make them do something; it can be very difficult to make them sound like real music. There are a host of variables that can be quite vexing. I know enough excellent speaker designers who get fabulous results - I doubt I could do as well.

Plenty of material here for another thread...
The most important rule is how we perceive loudness, which is done by listening for the 5th, 7th and 9th harmonics. Our ears are so sensitive to these harmonics that we can easily detect alterations of only hundredths of a percent.
No argument here, it's just irrelevant insofar as impedance/reactance in the cable interface is concerned, as all of this (including any transmission-line effects) is describable by linear network theory, meaning that additional harmonics can't be created. We're stuck with spectral balance, transient response, noise pickup, and source loading/resonance as the possible effects (which are headaches enough). But it does fit in with my biggest objection to 600-ohm loading: for the random audio product off the street that can't drive a 600-ohm load . . . its output stage will produce more of these noxious high-order harmonics.
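
A quick way to convince yourself of the no-new-harmonics point: run a sine through a linear one-pole low-pass (roughly what a cable's RC looks like) and through a mild nonlinearity, then compare the harmonic bins. A small Python sketch, purely illustrative:

import numpy as np

# A linear filter only scales and phase-shifts existing spectral lines;
# a nonlinearity creates new ones. Numbers are illustrative only.
fs, f0, n = 48000, 1000, 48000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * f0 * t)

# Linear path: one-pole low-pass at roughly 5 kHz
a = np.exp(-2 * np.pi * 5000 / fs)
y_lin = np.zeros(n)
for i in range(1, n):
    y_lin[i] = (1 - a) * x[i] + a * y_lin[i - 1]

# Nonlinear path: soft clipping
y_nl = np.tanh(1.5 * x)

def harmonic_db(y, k):
    spec = np.abs(np.fft.rfft(y * np.hanning(n)))
    return 20 * np.log10(spec[k * f0] / spec[f0] + 1e-15)

for k in (3, 5, 7):
    print(f"harmonic {k}: linear {harmonic_db(y_lin, k):7.1f} dB,"
          f"  nonlinear {harmonic_db(y_nl, k):6.1f} dB")
# The linear path shows only numerical noise; tanh() shows genuine odd harmonics.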

So in respecting your preference for a power-transfer approach . . . I would suggest that the best practical way to implement it for amp/preamp interconnection would be for your amplifiers to leave out (or raise the value of) the differential-mode termination resistor . . . thus improving the scenario for other manufacturers' preamps. You could then sell specific cables for which you had verified the intrinsic impedance, and adjust the output impedance of your preamplifiers to match. The cables would then have the appropriate termination resistor in the male XLR end, adjusting the total termination to match the cable impedance. This would seem to give the best possible performance in a wide variety of hookup scenarios . . . "automatically" adjusting impedances in the manner that studio/broadcast engineers have been doing manually for decades.
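
If I follow that scheme, the resistor built into the cable's male XLR would just be sized so that it, in parallel with whatever input impedance the downstream amp presents, lands on the cable's verified impedance. A back-of-the-envelope Python sketch (the impedance values are made up for illustration):

# Size the in-connector resistor so that R_term in parallel with R_amp_in equals Z0.
def termination_resistor(z0_cable, r_amp_input):
    if r_amp_input <= z0_cable:
        raise ValueError("amp input impedance must exceed the cable impedance")
    return z0_cable * r_amp_input / (r_amp_input - z0_cable)

print(termination_resistor(z0_cable=110, r_amp_input=100_000))   # ~110.1 ohm
print(termination_resistor(z0_cable=150, r_amp_input=10_000))    # ~152.3 ohm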

Now the whole amp/speaker thing is definitely a different thread . . . but maybe another time. I've been sitting at home with an injured back for a few days now, and I'm starting to go a bit nuts.