While I enjoy and value Atmasphere's posts on the subject, I will take issue with the major point in the paper he presented. I don't see that these two paradigms exist at all . . . except in a hypothetical world where there is a simple, binary choice in available loudspeakers: Apogees and Lowthers.
If you look at the symbiotic evolution of amplifier and speaker designs over the past eighty years or so, it's commonly accepted that an increasing abundance of amplifier power enabled loudspeaker designers to trade efficiency for other factors, such as smaller cabinet size and improved linearity. But it has been the loudspeaker designers who have, in turn, consistently demanded more "current impervious" performance from the amplifiers. This is why the hallowed amplifier designs of the pre-war era were triode designs: yes, for linearity, but just as importantly, for lower output impedance. Even an Altec VOT system and an Altec 604 duplex monitor would have presented very different impedance curves to the amplifier. And in either case, a flat frequency response from a linear amplifier was highly desired.
Even seventy years ago, loudspeaker designers were working with a voltage-source model, not a current-source model. While the reasons for this are my own speculation, they seem pretty obvious. First, high-frequency transducers almost always have a huge efficiency advantage over low-frequency ones. Second, advances in transducer technology are mostly advances in materials (diaphragm materials and suspensions, magnetic materials) and mathematical modeling (horns and lenses). Designing loudspeakers and crossovers to effectively take advantage of what the transducers have to offer is vastly easier, and achieves better results, when working from a voltage-source model.
The presence/absence of multiple impedance taps on amplifiers, for this discussion, is a non sequitur. If one wanted to design a conventional transformer-coupled tube amp that put out 50 watts into 16 ohms, 100 watts into 8 ohms, 200 watts into 4 ohms, etc. from a single output tap, it could be done . . . there would simply be huge tradeoffs in terms of efficiency and performance into any given impedance. Very similar tradeoffs also exist in solid-state amplifier design . . . the difference is one of cost and benefit. If you already have an output transformer, then adding additional taps usually makes sense. If you don't . . . then it's of course a bit harder and costlier.
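To make the arithmetic concrete: that power progression is exactly what a perfect voltage source delivers, since doubling the power each time the load halves implies a fixed output voltage. A quick sketch (plain Python, using only the numbers from my hypothetical example above):

```python
import math

# Power/impedance pairs from the hypothetical single-tap amp above
loads = [(16, 50), (8, 100), (4, 200)]  # (ohms, watts)

for r, p in loads:
    v = math.sqrt(p * r)   # V = sqrt(P * R)
    i = math.sqrt(p / r)   # I = sqrt(P / R)
    print(f"{r:2d} ohms: {p:3d} W -> {v:.1f} V rms, {i:.2f} A rms")

# The voltage works out to ~28.3 V rms into every load -- ideal
# voltage-source behavior -- while the current demand doubles each
# time the impedance halves. That's where the tradeoffs come from.
```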
My point is that there really is no "Current Paradigm". The interface between high-fidelity amplifiers and their respective speaker systems has ALWAYS been based on a voltage model. (The term "high-fidelity" is meant to simplify the discussion by excluding things such as field-coil speakers and 70V distribution systems, not as a snub to anybody's amplifier design.) And high-fidelity amplifiers have always been expected to have reasonably "current impervious" operation. What "reasonably" means in absolute terms is a debate that has been around many years longer than solid-state amplifiers . . . but if an amplifier's output is intended for a "4-ohm" load, then I would expect it to be fairly "current impervious" over the range of current that a "nominal 4-ohm" loudspeaker would require, plus some extra for good measure. Most good conventional tube amps achieve this.
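As a rough back-of-the-envelope illustration of what "plus some extra for good measure" might look like in numbers (the figures here are my own hypothetical, not any specific amplifier or speaker):

```python
import math

rated_power = 100    # W into the nominal load (hypothetical amp)
nominal_z   = 4.0    # ohms, the "4-ohm" rating
minimum_z   = 2.8    # ohms -- real "4-ohm" speakers often dip below nominal

v_rated   = math.sqrt(rated_power * nominal_z)  # ~20 V rms at rated power
i_nominal = v_rated / nominal_z                 # current into the nominal load
i_dip     = v_rated / minimum_z                 # current into the impedance dip

print(f"nominal: {i_nominal:.1f} A rms, at the dip: {i_dip:.1f} A rms "
      f"({100 * (i_dip / i_nominal - 1):.0f}% extra)")
```

The impedance dip alone asks for roughly 40% more current at the same voltage; that's the sort of headroom I mean by "extra for good measure."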
I maintain that a high output impedance, for a high-fidelity audio power amplifier, is ALWAYS a liability, period. Now it may be that some of these amplifiers have other performance aspects that outweigh it, and some speakers are tolerant of it (and a few are even subjectively improved by it). But this idea that there's one branch of the speaker-design profession that optimizes their products to work with amplifiers that have high output impedances? I don't buy it. If there is, then exactly what is the output impedance that they're expecting?
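Here's why, in frequency-response terms: the amplifier's output impedance forms a voltage divider with the speaker's impedance curve, so any impedance variation in the speaker turns directly into response variation. A minimal sketch (the speaker impedance figures are invented, but typical of a nominal 8-ohm two-way):

```python
import math

# Level at the speaker terminals ~ Zspk / (Zspk + Zout).
# Speaker impedances below are made up but typical of an 8-ohm two-way.
speaker_z = {"bass resonance": 30.0, "midrange dip": 5.0, "treble rise": 12.0}

for z_out in (0.1, 3.0):  # roughly: solid-state vs. a high-output-impedance amp
    ref = 8.0 / (8.0 + z_out)  # level at the 8-ohm nominal impedance
    print(f"Zout = {z_out} ohms:")
    for region, z in speaker_z.items():
        db = 20 * math.log10((z / (z + z_out)) / ref)  # deviation from nominal
        print(f"  {region:15s} ({z:4.1f} ohms): {db:+.2f} dB")
```

With a tenth of an ohm of output impedance the deviations are well under a tenth of a dB; with three ohms they're on the order of plus-or-minus 2 dB, tracking the impedance curve. A speaker designer could only "optimize" for that by knowing the output impedance in advance . . . which brings me back to my question.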