Why not horns?


I've owned a lot of speakers over the years but I have never experienced anything like the midrange reproduction from my horns. With a frequency response of 300 Hz up to 14 kHz from a single distortionless driver, it seems like a no-brainer that everyone would want this performance. Why don't you use horns?
macrojack
Prez, I hate to keep beating you down, but your dB analysis is also off.

Not sure where you got 26dB but that does not equate to 400W. Here is how it works.

The sensitivity of the amp is 1V, meaning it takes one volt to drive the amp to maximum output.

Of course these amps won't do it, but to get 400W into 8 ohms you would need an output voltage of 57V. If you had one volt in, that is a gain of 57. A gain of 57 is 35dB (20 log 57).

To get 75W@8ohms you need 25V out. That is 28dB.

To get 75W@4ohms you need 17V. That is 25dB.

To get 150W@8ohms you need 35V. That is 31dB.

I rounded things off but that is real close. It has a different amount of gain for each scenario because you use different taps on the output transformer to get different voltages.
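
If anyone wants to check Herman's numbers, here is a minimal sketch of the same arithmetic in Python (the 1V input sensitivity is the figure assumed above):

import math

def required_output_voltage(watts, ohms):
    # P = V^2 / R, so V = sqrt(P * R)
    return math.sqrt(watts * ohms)

def gain_db(v_out, v_in=1.0):
    # voltage gain in dB = 20 * log10(Vout / Vin)
    return 20 * math.log10(v_out / v_in)

for watts, ohms in [(400, 8), (75, 8), (75, 4), (150, 8)]:
    v = required_output_voltage(watts, ohms)
    print(f"{watts}W into {ohms} ohms needs {v:.1f}V, a gain of {gain_db(v):.0f}dB from 1V in")

# Output:
# 400W into 8 ohms needs 56.6V, a gain of 35dB from 1V in
# 75W into 8 ohms needs 24.5V, a gain of 28dB from 1V in
# 75W into 4 ohms needs 17.3V, a gain of 25dB from 1V in
# 150W into 8 ohms needs 34.6V, a gain of 31dB from 1V in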

Hi Herman,

I, of course, agree with your technical statements about amplifier power, etc., as far as they go. And they are not inconsistent with my previous post. However, keep in mind that the amps we were discussing as examples are solid state amps. So rather than using higher taps on the output xfmrs, I would guess (as I indicated earlier) that either the voltage rails on the non-paralleled amp configurations are the same as on the otherwise similar paralleled configurations, or can be internally selected between voltage values appropriate to each configuration.

As you will realize, the paralleled configuration simply provides the current capability and/or heat dissipation capability necessary to support the application of the higher voltage to the given 8 ohm or other load.

And Prdprez has now agreed that we are talking about maximum power ratings, not about paralleled output stages forcing more current into a load without a voltage increase.

Prdprez,
All three of my examples have listed specs of 26dB gain. This equates to 400W, correct? (under ideal circumstances)
Yet the other specs are listed as 150W and 300W. The maximum wattage is not specified for paralleling the largest amp (VK-600m), so we can probably assume that it tops out at 400W maximum (still 26dB gain) with overkill ability on the current side.
I don't think we can say how much power the VK-600m can supply, without more information than appears to be provided on their site. Besides there being no spec on maximum output power, unless I missed it there appears to be no spec on input sensitivity (i.e., how much input voltage is required to produce the rated output).

26dB gain (the spec for the VK-600 and VK-255SE) simply means that the output voltage will be 20 times larger than the input voltage, provided that the output voltage, current, and power, and the corresponding heat dissipation, do not exceed the amp's capabilities.
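
To put a number on that, here is a quick sketch of what a 26dB gain spec does and does not tell you (Python; the 1V input is just an illustrative figure, since no input sensitivity spec appears to be given):

def voltage_ratio(gain_db):
    # 26 dB of voltage gain = 10^(26/20), roughly a factor of 20
    return 10 ** (gain_db / 20)

def power_into_load(v_out, ohms):
    # P = V^2 / R
    return v_out ** 2 / ohms

v_out = voltage_ratio(26) * 1.0   # assume 1V at the input (illustrative only)
print(f"26dB gain is a voltage ratio of {voltage_ratio(26):.1f}")
print(f"1V in would give {v_out:.1f}V out, or {power_into_load(v_out, 8):.0f}W into 8 ohms")

# Output:
# 26dB gain is a voltage ratio of 20.0
# 1V in would give 20.0V out, or 50W into 8 ohms

In other words, the gain spec by itself says nothing about the maximum power; that is set by the rails and by the current and dissipation limits.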

Thanks for the nice words.

Best regards,
-- Al
"ALL speakers have colorations."

All recordings do as well. The same is true of every piece in every system out there. So the idea of no coloration is not reality and not worth losing any sleep over. If you enjoy what you hear, then that's about as good as it gets.
With regard to amplifier power and the amplifier types used on horns:

My speakers are horn hybrids, using a pair of high-efficiency 15" woofers, which limit the efficiency to 98 dB.

If I push the system hard, I am really challenged to clip a 30-watt amplifier. But I prefer 150 watts if I can get it, even though I will never use the power. In my case, because I am using OTLs, the amps have no issues at low power: the less power, the less distortion, quite similar to SETs.
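
As a rough illustration of why a 30-watt amp is so hard to clip on a 98 dB speaker, here is a back-of-the-envelope sketch using the usual sensitivity-plus-power estimate (it ignores listening distance, room gain, and the doubled woofers, so treat the numbers as ballpark only):

import math

def spl_at_1m(sensitivity_db, watts):
    # SPL at 1 meter ~ sensitivity (dB, 1W/1m) + 10 * log10(power in watts)
    return sensitivity_db + 10 * math.log10(watts)

for watts in (5, 30, 150):
    print(f"{watts}W on a 98 dB speaker -> roughly {spl_at_1m(98, watts):.0f} dB SPL at 1m")

# Output:
# 5W on a 98 dB speaker -> roughly 105 dB SPL at 1m
# 30W on a 98 dB speaker -> roughly 113 dB SPL at 1m
# 150W on a 98 dB speaker -> roughly 120 dB SPL at 1m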

The system plays very clean and is devoid of loudness artifacts. The only way you can tell how loud it is playing is if you try to talk to someone sitting beside you or if you have a sound pressure meter.

Amplifiers like the kind I am using (zero feedback) tend to make distortion that follows a curve, becoming more pronounced as you approach clipping. Distortion is where loudness cues come from; the result is that while I can get satisfying volume from the 30-watt amp (and right now I am playing a pair of type 45-based P-P amps that only make 5 watts), the simple fact of the matter is that there are fewer loudness cues when I use the bigger amps.

(FWIW, the Trio is the only horn speaker I know of where the designer intended it to be used with transistors. This is reflected in the crossover design, or lack of it, which consists of capacitors to roll off low frequencies for each driver. This results in an impedance curve that is nearly 19 ohms in the bass horn, but only about 4 ohms at the tweeter frequencies, even though the individual drivers are all nearly the same impedance. No low-power tube amp is going to drive this right, as the speaker is what I call a Voltage Paradigm technology, whereas most low-power tube amps and other horns are Power Paradigm technology. See http://www.atma-sphere.com/papers/paradigm_paper2.html for more info.)
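
For anyone unfamiliar with that style of crossover: a single series capacitor into a driver forms a first-order high-pass, with a corner frequency of f = 1/(2*pi*R*C). A quick sketch with hypothetical values (not the actual Trio parts):

import math

def highpass_corner_hz(capacitance_farads, driver_ohms):
    # first-order series-capacitor high-pass: f_c = 1 / (2 * pi * R * C)
    return 1 / (2 * math.pi * driver_ohms * capacitance_farads)

# hypothetical example values, purely for illustration
print(f"33uF into a 16-ohm driver rolls off below about {highpass_corner_hz(33e-6, 16):.0f} Hz")
print(f"4.7uF into a 16-ohm driver rolls off below about {highpass_corner_hz(4.7e-6, 16):.0f} Hz")

# Output:
# 33uF into a 16-ohm driver rolls off below about 301 Hz
# 4.7uF into a 16-ohm driver rolls off below about 2116 Hz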

In a case where the speaker is 10 dB more efficient than mine, the need for power does get eroded; 15 watts is a lot of power on such a speaker, but IMO you still use more power than you strictly need in order to reduce loudness artifacts from the amp. Horns tend to be very reactive loads and so are often shouty and shrill if the amp used has a low output impedance, particularly if that low output impedance is due to loop feedback in the amp (the back EMF tends to get into the feedback loop, causing the feedback signal to contain false information). This is one reason why horn users who get good results rarely use transistors, and a major reason why many people think that horns are for PA, not hifi. In a nutshell, it's an equipment mismatch.
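
On the 10 dB point, each 10 dB of extra sensitivity cuts the power needed for a given level by a factor of ten; a quick sketch (the 105 dB target is just an arbitrary example level):

def watts_for_target_spl(sensitivity_db, target_spl_db):
    # invert SPL ~ sensitivity + 10*log10(P) to solve for P
    return 10 ** ((target_spl_db - sensitivity_db) / 10)

target = 105  # arbitrary example level, dB SPL at 1m
print(f"98 dB speaker needs about {watts_for_target_spl(98, target):.1f}W")
print(f"108 dB speaker needs about {watts_for_target_spl(108, target):.1f}W")

# Output:
# 98 dB speaker needs about 5.0W
# 108 dB speaker needs about 0.5W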