What is wrong with negative feedback?


I am not talking about the kind you get as a flaky seller, but as used in amplifier design. It just seems to me that a lot of amp designs advertise "zero negative feedback" as a selling point.

As I understand, NFB is a loop taken from the amplifier output and fed back into the input to keep the amp stable. This sounds like it should be a good thing. So what are the negative trade-offs involved, if any?
solman989
The typical transit time of linear amplifiers is about 2000-3000 nanoseconds, which is too slow for effective implementation of global feedback and error correction.
I think this description nicely highlights so many of the conceptual and terminological errors that audiophiles and audiophile equipment designers have about negative feedback.

Looking generically at a solid-state feedback amplifier, its frequency response before feedback is defined by a single "Miller-compensation" capacitor at the voltage-amplifier stage. It is generally flat from DC to some frequency (e.g. 1kHz), and then rolls off at 6dB/octave all the way to the point where the gain falls below unity, which may be something like 2MHz. While the gain and the frequencies may vary, virtually every common audio opamp has a frequency response that can be described like this. Again, we're talking about it WITHOUT feedback.
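To put rough numbers on that generic model, here's a minimal Python sketch. The DC gain of 2000 (66dB) is an assumed value, chosen so that a 1kHz dominant pole crosses unity near 2MHz; real parts vary, but the shape is the same.

import math

A0 = 2000.0    # assumed DC open-loop gain (66dB), so 1kHz * 2000 = 2MHz gain-bandwidth
fp = 1e3       # dominant pole at 1kHz

def open_loop(f):
    # gain magnitude (dB) and phase (degrees) of A0 / (1 + j*f/fp)
    mag = A0 / math.sqrt(1 + (f / fp) ** 2)
    phase = -math.degrees(math.atan(f / fp))
    return 20 * math.log10(mag), phase

for f in (100.0, 1e3, 20e3, 100e3, 2e6):
    g, p = open_loop(f)
    print(f"{f/1e3:8.1f} kHz: {g:6.1f} dB, phase {p:6.1f} deg")

Run it and you'll see the gain flat at 66dB, down 3dB at 1kHz, and hitting unity right around 2MHz, with the phase settling at about -90 degrees for everything well above the pole.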

Since negative feedback only exists if the open-loop (feedback-free) gain is above unity, and since the open-loop response falls off at 6dB/octave . . . the input/output phase response must be 90 degrees or less. So if we're going to talk about "transit time", how would you define that? Since we know that comparing the phase at the input and the output will give us 90 degrees, the "transit time" at 100kHz will be 2500 nanoseconds. At 200kHz, it will be 1250 nanoseconds. At 20kHz, it will be 12500 nanoseconds. So it seems that talking about "transit time", or "propagation delay", or "delayed feedback", or whatever . . . is a wholly inadequate way of understanding what's going on. Rather, classical Control Theory uses phase relationships to analyze feedback.
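For anyone who wants to check that arithmetic, the conversion is just delay = phase/360/f:

def delay_ns(phase_deg, f_hz):
    # a fixed phase shift corresponds to a different time at every frequency
    return (phase_deg / 360.0) / f_hz * 1e9

for f in (20e3, 100e3, 200e3):
    print(f"90 deg at {f/1e3:5.0f} kHz = {delay_ns(90, f):7.0f} ns")

That prints 12500, 2500, and 1250 nanoseconds, which is exactly the point: a single "delay" number can't describe a first-order phase response.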

And classical Control Theory is wholly adequate to understand the circuit behavior when feedback is applied. Musical information isn't "time smeared" from "delayed feedback", it's simply that part of the amplifier circuit operates in quadrature for a huge chunk of the frequency range (in the case of our generic SS amplifier). Just like the filter slope of the very simplest first-order speaker crossover. And this phase relationship doesn't change whether or not feedback is applied (because it's defined by the Miller capacitor) . . . the feedback simply corrects the phase response at the output.
This lagging results in ringing artifacts and enhances ODD-order harmonics, which are particularly annoying to human hearing, so even the smallest amounts of these distortions are highly noticeable.
Ringing when feedback is applied is indicative of an open-loop response that is something other than a simple 6dB/octave slope, and this may be due to factors both in the circuit itself and the load it's driving. And this is indeed something that commonly can occur in the real world. But this phenomenon is wholly analyzable with classical Control Theory, and a careful analysis of the amplifier's stability. Further, this type of analysis virtually always reveals the specific mechanisms responsible for the subjective complaints associated with negative feedback.
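To illustrate how that stability analysis works, here's a minimal sketch of a hypothetical loop gain: the dominant 1kHz pole from before, plus an assumed second pole. The phase margin at the unity-gain crossing is what predicts peaking and ringing; the pole locations below are made up purely for illustration.

import math

A0, fp1 = 2000.0, 1e3    # same generic single-pole model as before

def loop_mag(f, fp2):
    return A0 / math.sqrt((1 + (f/fp1)**2) * (1 + (f/fp2)**2))

def phase_margin(fp2):
    # locate the unity-gain crossing by bisection (magnitude falls monotonically)
    lo, hi = 1.0, 1e8
    for _ in range(100):
        mid = math.sqrt(lo * hi)
        if loop_mag(mid, fp2) > 1.0:
            lo = mid
        else:
            hi = mid
    fc = lo
    lag = math.degrees(math.atan(fc/fp1) + math.atan(fc/fp2))
    return fc, 180.0 - lag

for fp2 in (20e6, 2e6, 500e3):
    fc, pm = phase_margin(fp2)
    print(f"2nd pole at {fp2/1e6:5.2f} MHz -> crossover {fc/1e6:4.2f} MHz, "
          f"phase margin {pm:5.1f} deg")

With the second pole far out of band the margin stays near 90 degrees (a clean first-order rolloff); as it moves in toward the crossover the margin shrinks toward the territory where the closed-loop response peaks and rings.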
There are good-sounding components both with and without feedback, which is simply more proof that you need to listen to the component; the component really is an extension of the skills and philosophy of the designer, and there are skilled designers employing both methods.
Precisely.
" Odd ordered harmonics are exacerbated by noise problems in the ground and the power supply..."

Fully agree here with Atmasphere. The more regulated (and noise-free) the power supply, the better the sound quality. One can assert that the quality of a power amplifier lies not so much in its signal path as in its power supplies, and in many (but not all) cases I would agree with that.

Simon
Distortion has the property of masking detail in addition to adding loudness cues, so if you can get rid of distortion you get greater transparency and greater smoothness at the same time, provided your techniques for getting rid of distortion don't enhance the 5th, 7th and 9th harmonics. IOW real reductions in distortion have real, immediate sonic benefits that anyone can hear: extreme detail accompanied by smoothness are the hallmarks to look for.
Absolutely true. And there is absolutely no design technique or topology (tubes, solid-state, Class A operation, balanced push-pull, local or global negative feedback, etc.) that can by itself guarantee a meaningful reduction in audible distortion. It of course comes down to the proper implementation of a wide variety of techniques.
Kirkus, my technique for measuring propagation delay is simple: compare the input to output while using a squarewave source. Observe the difference in time between the rising input waveform and the rising output waveform. That's the delay time. I have yet to see an amplifier where I could not see that on the 'scope.

Since negative feedback only exists if the open-loop (feedback-free) gain is above unity, and since the open-loop response falls off at 6dB/octave . . . the input/output phase response must be 90 degrees or less. So if we're going to talk about "transit time", how would you define that?

It really seems to me that something is glossed over here. In this model phase and time become the same thing, and it is inadequate to explain the behavior of an amplifier that has wide (>200kHz) open-loop bandwidth. In such amplifiers the model below falls apart:

Since we know that comparing the phase at the input and the output will give us 90 degrees, the "transit time" at 100kHz will be 2500 nanoseconds. At 200kHz, it will be 1250 nanoseconds. At 20kHz, it will be 12500 nanoseconds. So it seems that talking about "transit time", or "propagation delay", or "delayed feedback", or whatever . . . is a wholly inadequate way of understanding what's going on. Rather, classical Control Theory uses phase relationships to analyze feedback.

Propagation delay does not alter with frequency anywhere near the audio band, and at those frequencies the delay time is easily measurable. In fact, we can see that at low frequencies feedback works pretty well, but as frequency increases, the feedback is progressively inadequate due to the fixed propagation delay of the circuit having a larger effect as the waveform period decreases. This introduces a time-domain distortion: ringing and odd-ordered harmonic enhancement. It is this phenomenon that requires networks in many amplifier designs to prevent negative feedback from becoming positive feedback, due to the phase shift at very high frequencies that are out-of-band but can cause the amp to go into oscillation if not addressed. The model you are proposing relies on propagation time being mutable, which it certainly is not. I'm with Spectron on this one. Sounds to me like control theory is being misapplied here.
The model you are proposing relies on propagation time being mutable, which it certainly is not.
Atmasphere, forgive me if I'm being a snot . . . but I think you need to brush up on some basic electrical theory. Pole/zero networks do indeed have different delays based on frequency. If you don't believe me, try constructing a simple R-C lowpass network with, say, a 0.47µF capacitor and a 750 ohm resistor. Compare the "Propagation Delay" between input and output, using SINEWAVES, at 10kHz and 20kHz. For the former, you will find it to be about 24µs, for the latter about 12µs. For both, the phase shift is about 90 degrees. Or you can do it in SPICE in just a few minutes.
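If you'd rather not breadboard it or fire up SPICE, here's the same check in a few lines of Python, using the single-pole phase formula and the same R and C as above:

import math

R, C = 750.0, 0.47e-6
fc = 1 / (2 * math.pi * R * C)    # pole at about 451Hz

for f in (10e3, 20e3):
    phase = math.degrees(math.atan(f / fc))    # phase lag of the single pole
    delay_us = (phase / 360.0) / f * 1e6
    print(f"{f/1e3:4.0f} kHz: phase lag {phase:5.1f} deg, delay {delay_us:5.1f} us")

That gives about 24.3µs at 10kHz and 12.3µs at 20kHz, with both frequencies showing close to 90 degrees of phase shift: two different delays from one fixed network.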

Again, some basics here. A real-world amplifier circuit contains mechanisms that produce both frequency-dependent and frequency-independent delays. In a typical well-designed Miller-compensated amplifier, the goal is to choose the compensation capacitor so that the frequency-independent delay is completely swamped by the frequency-dependent delay of a first-order slope, yielding roughly 90 degrees of phase margin at all frequencies where the gain is above unity.
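A quick numeric illustration of that "swamping", using the assumed 2MHz unity-gain frequency from the generic model and a few made-up transit-delay values: a fixed delay T adds 360*f*T degrees of lag on top of the dominant pole's roughly 90 degrees.

f_unity = 2e6    # assumed unity-gain frequency from the generic model above

for T_ns in (5, 10, 50):
    extra_lag = 360 * f_unity * T_ns * 1e-9    # degrees of lag from a fixed delay T
    print(f"T = {T_ns:3d} ns: extra lag {extra_lag:5.1f} deg, "
          f"phase margin ~{90 - extra_lag:5.1f} deg")

A few nanoseconds of true transit delay barely dents the margin; tens of nanoseconds start to matter, which is exactly why the compensation capacitor is chosen to keep the crossover frequency low enough that the fixed delay is negligible there.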

Here's the conceptual error with your square-wave timing test. If we assume that it's indeed a perfect square-wave on input, and the circuit in question doesn't have infinite bandwidth . . . then the output square-wave will have a longer rise time and a more rounded leading edge than the input. So we set up our scope, and use the markers to decide where to measure on the x-axis. For the input side, it's easy to locate the marker because the rise time is infinitely short. But on the output, it's comparatively sloped and rounded . . . so the exact placement of the marker across that slope determines for which frequency you're measuring the delay. If you just place the marker where it "looks about right", then you're simply measuring the delay of "kinda one of those frequencies" . . . one of an infinite number contained in the perfect square-wave on the input.
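Here's a little sketch of that marker-placement problem, assuming a hypothetical first-order circuit with 200kHz bandwidth. The step response crosses different amplitude thresholds at very different times, so the "delay" you read off depends entirely on where you put the cursor:

import math

tau = 1 / (2 * math.pi * 200e3)    # time constant of an assumed 200kHz first-order circuit

for threshold in (0.1, 0.5, 0.9):
    # first-order step response: v(t) = 1 - exp(-t/tau), solved for t
    t_ns = -tau * math.log(1 - threshold) * 1e9
    print(f"output reaches {threshold:.0%} of final value {t_ns:5.0f} ns after the input edge")

Depending on whether you put the marker at 10%, 50%, or 90% of the swing, you'd "measure" a delay anywhere from about 84ns to about 1800ns, on the very same circuit.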

But really the time-honored method is to use X/Y mode on your scope to compare the phase as you vary the frequency of a sinewave. You can then CALCULATE the precise delay for any frequency, based on phase. And no, there won't be just one number.
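As a sketch of that calculation: for the Lissajous ellipse, phase = arcsin(B/A), where B is the trace height where it crosses x = 0 and A is the maximum trace height, and the delay at that frequency follows from the phase. The ellipse readings below are made-up example values.

import math

def delay_from_lissajous(B, A, f_hz):
    # B = trace height where the ellipse crosses x = 0, A = maximum trace height
    phase = math.degrees(math.asin(B / A))
    return phase, (phase / 360.0) / f_hz * 1e6    # delay in microseconds

phase, d_us = delay_from_lissajous(B=0.71, A=1.0, f_hz=10e3)
print(f"phase {phase:.1f} deg -> delay {d_us:.1f} us at 10kHz")

Repeat at a few frequencies and you'll get a different delay at each one, which is the whole point.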