My understanding is that the concept of “bandwidth” is very important, perhaps more so for tube amps than solid state. But I have a First Watt SIT-3, which is a low-watt solid state amp. It apparently has plenty of bandwidth, because it drives my Magico A3s well. They have a sensitivity of 88dB, an impedance of 4 ohms, and a recommended minimum power of 50 watts. The SIT-3 drives them just as well as my Bryston 4B3, which has a lot more power. But what I would like to know from the group is: how is bandwidth measured? How do you calculate bandwidth?
Bandwidth is important to keep phase shift at a minimum. To this end, phase shift is minimized if bandwidth extends to 10x the maximum frequency to be amplified (20kHz, so 200kHz required) and also to 1/10th the lowest frequency to be amplified (20Hz, so 2Hz response required). Bandwidth is measured with either a sine wave or a square wave. With a sine wave, the signal is applied to the circuit and the output is observed to stay within (usually) ±0.5dB to be considered 'flat'. With a square wave, rounding of the edges shows a rolloff at high frequencies, and tilt on the top of the square wave shows a rolloff at low frequencies. This is fairly easy for transistor amps, and there are tube amps that meet the '2Hz-200kHz' requirement too, but to my knowledge they are all OTLs (Output TransformerLess).
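If it helps to see the sine-sweep idea in numbers, here is a minimal sketch. It models an amp as a simple first-order high-pass (2Hz corner) cascaded with a first-order low-pass (200kHz corner), sweeps sine frequencies, and reports the band where the response stays within ±0.5dB of the 1kHz reference. The corner frequencies and the first-order model are illustrative assumptions, not measurements of any particular amplifier.

```python
# Sketch of the sine-sweep flatness measurement described above.
# Assumed model: first-order high-pass at 2 Hz + first-order low-pass at 200 kHz.
import numpy as np

F_LOW, F_HIGH = 2.0, 200e3          # assumed -3 dB corners of the model amp

def gain_db(f):
    """Magnitude response of the high-pass + low-pass model, in dB."""
    hp = (f / F_LOW) / np.sqrt(1 + (f / F_LOW) ** 2)   # high-pass rolloff
    lp = 1.0 / np.sqrt(1 + (f / F_HIGH) ** 2)          # low-pass rolloff
    return 20 * np.log10(hp * lp)

freqs = np.logspace(0, 6, 2000)           # sweep 1 Hz to 1 MHz
rel = gain_db(freqs) - gain_db(1000.0)    # response referenced to 1 kHz
flat = freqs[np.abs(rel) <= 0.5]          # the +/-0.5 dB "flat" window
print(f"Flat (+/-0.5 dB) from {flat.min():.1f} Hz to {flat.max()/1000:.0f} kHz")
```

Note that the ±0.5dB "flat" band comes out noticeably narrower than the -3dB points, which is why the quoted bandwidth always depends on the tolerance used.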
Keeping phase shift linear has two benefits: more accurate presentation of the soundstage and more accurate presentation of tonality. As an example of the latter, if there is a rolloff at 10Hz, phase shift will cause a lack of impact up to about 100Hz, despite the amp measuring flat to 20Hz on the bench. This is also why a problem at 50kHz can often be heard: phase shift artifacts will extend down to 5kHz. Again, this will be interpreted by the ear as a tonality.
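As a quick numerical check of that "one decade" point, the phase shift of a simple first-order rolloff is still several degrees a full decade away from the corner frequency. The sketch below just plugs the 10Hz and 50kHz examples from the paragraph above into the standard first-order phase formulas; it is not data from a real amp.

```python
# Phase shift of first-order rolloffs at frequencies a decade (or less) from the corner.
import numpy as np

def hp_phase_deg(f, corner):
    """Phase lead of a first-order high-pass, in degrees."""
    return np.degrees(np.arctan(corner / f))

def lp_phase_deg(f, corner):
    """Phase lag of a first-order low-pass, in degrees."""
    return -np.degrees(np.arctan(f / corner))

for f in (20, 50, 100):          # a 10 Hz rolloff still shifts phase well into the bass
    print(f"{f:>6} Hz: {hp_phase_deg(f, 10):+5.1f} deg (high-pass corner at 10 Hz)")
for f in (5e3, 10e3, 20e3):      # a 50 kHz rolloff shifts phase down into the treble
    print(f"{f/1000:>4.0f} kHz: {lp_phase_deg(f, 50e3):+5.1f} deg (low-pass corner at 50 kHz)")
```

At 100Hz the 10Hz rolloff still produces roughly 6 degrees of phase shift, and the 50kHz rolloff produces about the same at 5kHz, which is the basis of the rule of thumb above.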
So one takeaway is that three things affect tonality: actual frequency response (which is different from bandwidth), distortion, and phase shift.