Theoretical Pre Amp Question


The real-world answer would be to listen to it both ways and pick, because execution matters, but theoretically...

If a source has a choice of high (2V) or low (1V) output, then at typical listening levels the preamp will be attenuating the signal to much less than 1V. Which source output level SHOULD be better? Is there likely to be more distortion or noise from a preamp at a lower or higher input level, even though either would use less than unity gain? If specifically using a tube preamp, SHOULD the source level affect how much “tubiness” comes through, even though there is negative gain? What about potential interconnect effects? Wouldn’t a higher-level signal be more resistant to noise as a percentage?
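On that last point: for noise picked up before the volume control (e.g. hum or RFI on the interconnect), a hotter source does help, because the attenuator scales the signal and that noise down together, so the ratio between them is set at the source. Here is a quick sketch of the arithmetic with made-up numbers (20 µV of cable-borne noise, 0.2 V target level), just to illustrate:

```python
import math

def db(v_ratio):
    """Convert a voltage ratio to decibels."""
    return 20 * math.log10(v_ratio)

# Hypothetical numbers for illustration only.
interconnect_noise_v = 20e-6   # noise picked up on the cable, before the volume control
target_v = 0.2                 # level wanted after attenuation for normal listening

for source_v in (1.0, 2.0):
    atten = target_v / source_v              # volume control scales signal AND cable noise
    snr_out = db(target_v / (interconnect_noise_v * atten))  # ratio unchanged by attenuation
    print(f"{source_v:.0f} V source: attenuation {db(atten):.1f} dB, SNR {snr_out:.1f} dB")
```

The 2V output ends up roughly 6 dB quieter against that kind of noise, for exactly the reason asked: as a percentage of the signal, the pickup is half as large. Noise added *after* the attenuator (by the preamp's own gain stage) is a different story and doesn't care about source level.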

In an ideal theoretical case there is no distortion or noise. In a real-world, empirical test, the implementation dictates results. I’m just curious about the in-between case: typical expected results based on standard practice and other people’s experience.


cat_doorman
You keep blocking both hypothetical and realistic evaluation. If there’s no distortion or noise, then why does it matter?

I think it may help you to understand a little about how preamps are usually (but by no means always) designed.

There is an input buffer, a gain stage and then at the end the volume control.
For historical reasons, preamps of the past had what we would consider far too much gain today.  If you imagine what it was like for a radio to pick up very weak stations, for instance, you'd understand why so much additional gain might have been desirable.

99.99999% of the additional noise in a preamp comes from the unavoidable gain stage. So if you can significantly reduce the gain (a trick worth considering in older tube preamps), you have a much cleaner signal at the output, regardless of the volume control setting.
If a source has a choice of high (2V) or low (1V) output, then at typical listening levels the pre amp will be attenuating the signal to much less than 1V. Which source output level SHOULD be better?
It's always best to use all the source has to offer (2V in your case), provided its output stage isn't stressed at that level (which only a bad designer would allow).

Then, if you find you have too much gain down the line because your volume control is already at 8-9 o'clock for loud listening, cull the preamp and go to a unity-gain active or passive pre (provided the source-to-load impedance ratio is at least 1:10, which it usually is today).
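On that 1:10 rule of thumb: the level loss at the interface is just a voltage divider formed by the passive's output impedance and the power amp's input impedance. A quick check with illustrative numbers (a 10k pot, whose worst-case output impedance is about a quarter of its value, i.e. ~2.5k at the midpoint):

```python
import math

def insertion_loss_db(z_source_ohm, z_load_ohm):
    """Loss from the divider: V_load = V_src * Z_load / (Z_src + Z_load)."""
    ratio = z_load_ohm / (z_source_ohm + z_load_ohm)
    return 20 * math.log10(ratio)

# Worst-case output impedance of a 10k pot is ~2.5k (at the midpoint setting).
for z_amp in (10_000, 25_000, 100_000):
    loss = insertion_loss_db(2_500, z_amp)
    print(f"amp input {z_amp // 1000}k: extra loss {loss:.2f} dB")
```

At the 1:10 ratio (2.5k into 25k) the extra loss is under 1 dB, which is why that ratio is usually considered the safe lower bound; into 100k it's negligible.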

Quote from Nelson Pass:

“We’ve got lots of gain in our electronics. More gain than some of us need or want. At least 10 dB more.

Think of it this way: If you are running your volume control down around 9 o’clock, you are actually throwing away signal level so that a subsequent gain stage in a preamp can make it back up again.

Routinely DIYers opt to make themselves a “passive preamp” - just an input selector and a volume control.

What could be better? Hardly any noise or distortion added by these simple passive parts. No feedback, no worrying about what type of capacitors – just musical perfection.

And yet there are guys out there who don’t care for the result. “It sucks the life out of the music”, is a commonly heard refrain (really - I’m being serious here!). Maybe they are reacting psychologically to the need to turn the volume control up compared to an active preamp.”
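Pass's "throwing away signal so a gain stage can make it back up" can be put in numbers. All figures below are hypothetical, purely to illustrate the round trip (2V source, 0.2V wanted at the power amp, a 10 dB active gain stage with some self-noise):

```python
import math

def db_to_ratio(d):
    """Convert decibels to a voltage ratio."""
    return 10 ** (d / 20)

source_v = 2.0          # what the source delivers
target_v = 0.2          # what the power amp needs for the chosen loudness
stage_gain_db = 10.0    # active preamp's gain stage
stage_noise_v = 50e-6   # hypothetical output-referred noise added by that stage

# Active pre: the pot must sit 10 dB LOWER so the gain stage can bring it back up.
pot_out = target_v / db_to_ratio(stage_gain_db)     # signal thrown away at the wiper
active_out = pot_out * db_to_ratio(stage_gain_db)   # restored to 0.2 V, plus stage noise
snr_active = 20 * math.log10(active_out / stage_noise_v)

print(f"pot wiper with active pre: {pot_out * 1000:.1f} mV (10 dB discarded, then re-amplified)")
print(f"active pre output SNR: {snr_active:.1f} dB; a passive adds essentially no noise")
```

The signal arrives at the same 0.2V either way; the active path has simply taken a pointless 10 dB detour and added the gain stage's noise and distortion on the way back up.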



 Cheers George



The simple fact is, the more gain stages you have that aren’t needed, the more distortion and noise you introduce.
It would be nice if a source had an output stage with enough voltage gain and current to drive a speaker directly.
(A smart manufacturer could easily do it, but would probably receive a bullet from all the preamp and amp makers out there.)

Musical Fidelity almost made such a DAC with the A3/24. Not a great-sounding DAC per se (delta-sigma), but the Darlington output stage and power supply were strong enough to deliver 10W or more; with enough gain, and with these DC-coupling mods, it could drive, say, a Klipschorn quite loud. https://ibb.co/6FDxS3x

Cheers George