Theoretical Pre Amp Question


Real world answer would be to listen to it both ways and pick, because execution matters, but theoretically...

If a source has a choice of high (2V) or low (1V) output, then at typical listening levels the pre amp will be attenuating the signal to much less than 1V. Which source output level SHOULD be better? Is there likely to be more distortion or noise from a pre at lower or higher input level, even though either would use less than unity gain? If specifically using a tube pre amp, SHOULD the source level have an impact on how much “tubiness” comes through even though there is negative gain? What about potential interconnect effects? Wouldn’t a higher level signal be more resistant to noise as a %?
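A quick arithmetic sanity check on that last question: if the noise picked up on an interconnect is roughly constant regardless of signal level (the 10 µV figure below is purely illustrative), then doubling the source output from 1 V to 2 V buys about 6 dB of S/N, and since the volume control attenuates signal and already-picked-up noise equally, that advantage survives the attenuation. A minimal Python sketch:

```python
import math

def snr_db(signal_v_rms: float, noise_v_rms: float) -> float:
    """Signal-to-noise ratio in dB for RMS voltages."""
    return 20 * math.log10(signal_v_rms / noise_v_rms)

NOISE_FLOOR = 10e-6  # 10 uV RMS picked up on the interconnect (illustrative assumption)

low = snr_db(1.0, NOISE_FLOOR)   # 1 V source output
high = snr_db(2.0, NOISE_FLOOR)  # 2 V source output
print(f"1 V source: {low:.1f} dB SNR")
print(f"2 V source: {high:.1f} dB SNR")
print(f"improvement: {high - low:.1f} dB")  # doubling voltage = +6.0 dB
```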

In the ideal theoretical case there is no distortion or noise. In a real-world empirical test, the implementation dictates the results. I'm just curious about the in-between case: typical expected results based on standard practice and other people's experience.


cat_doorman
If a source has a choice of high (2V) or low (1V) output, then at typical listening levels the pre amp will be attenuating the signal to much less than 1V. Which source output level SHOULD be better?
Always best to use all the source has to offer (2V in your case), provided its output stage isn't stressed by it (which only a bad designer would allow).

Then, if you find you have too much gain down the line because your volume control is already at 8-9 o'clock for loud listening, cull the preamp and go with a unity-gain active or passive pre (provided the source-to-load impedance ratio is better than 1:10, which it usually is today).
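The 1:10 rule can be made concrete. For a passive pre that is just a pot, the output impedance varies with wiper position and peaks at a quarter of the pot's total resistance; the rule of thumb then says the power amp's input impedance should be at least ten times that worst case. A sketch (the 10k pot value is an assumption, and source impedance is ignored for simplicity):

```python
def pot_output_impedance(r_total: float, wiper: float) -> float:
    """Output impedance of a potentiometer used as a volume control.
    wiper = 0.0 (full mute) .. 1.0 (full volume); source impedance ignored."""
    r_bottom = wiper * r_total
    r_top = (1 - wiper) * r_total
    return r_top * r_bottom / (r_top + r_bottom)  # parallel combination

R_POT = 10_000  # 10k pot, a common passive-pre value (assumption)
worst = max(pot_output_impedance(R_POT, w / 100) for w in range(101))
print(f"worst-case output impedance: {worst:.0f} ohms")            # peaks at mid-rotation
print(f"1:10 rule -> amp input impedance >= {10 * worst:.0f} ohms")
```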

Quote from Nelson Pass:

“We’ve got lots of gain in our electronics. More gain than some of us need or want. At least 10 db more.

Think of it this way: If you are running your volume control down around 9 o’clock, you are actually throwing away signal level so that a subsequent gain stage in a preamp can make it back up again.

Routinely DIYers opt to make themselves a “passive preamp” - just an input selector and a volume control.

What could be better? Hardly any noise or distortion added by these simple passive parts. No feedback, no worrying about what type of capacitors – just musical perfection.

And yet there are guys out there who don’t care for the result. “It sucks the life out of the music”, is a commonly heard refrain (really - I’m being serious here!). Maybe they are reacting psychologically to the need to turn the volume control up compared to an active preamp.”



 Cheers George



Simple fact is, the more gain stages you have that aren't needed, the more distortion and noise you introduce.
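That claim can be illustrated with a toy noise budget: if each active stage adds its own (uncorrelated) input-referred noise, the noise powers add, so a chain of stages ends up noisier than a single stage with the same net gain. All numbers below are illustrative assumptions, not measurements of any real preamp:

```python
import math

def chain_noise_v(stage_gains, stage_noise_v):
    """Total output noise of cascaded stages, each adding its own
    input-referred noise (V RMS). Uncorrelated noise adds as power."""
    total_sq = 0.0
    gain_after = 1.0
    # walk backwards: noise injected at stage i is amplified by stage i and all later gains
    for g, n in zip(reversed(stage_gains), reversed(stage_noise_v)):
        gain_after *= g
        total_sq += (n * gain_after) ** 2
    return math.sqrt(total_sq)

# one stage vs three cascaded stages, same net gain of 4x (illustrative numbers)
one = chain_noise_v([4.0], [5e-6])
three = chain_noise_v([2.0, 2.0, 1.0], [5e-6, 5e-6, 5e-6])
print(f"one stage:    {one * 1e6:.1f} uV RMS at output")
print(f"three stages: {three * 1e6:.1f} uV RMS at output")
```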
It would be nice if a source had an output stage with enough voltage gain and current to drive a speaker directly.
(A smart manufacturer could easily do it, but would probably receive a bullet from all the preamp and amp makers out there.)

Musical Fidelity almost made such a DAC in the A3/24. Not a great-sounding DAC per se (delta-sigma), but the Darlington output stage and power supply were strong enough to give 10W or more, and with enough gain and these DC-coupling mods it could drive, say, a Klipschorn quite loud. https://ibb.co/6FDxS3x

Cheers George

I didn't think through the circuit. The answer is pretty obvious once you do. Of course there are 3 basic categories:
case 1: buffer, gain, attenuation - this might have issues with variable output impedance, similar to a passive, depending on implementation
case 2: buffer, variable gain, buffer - I now remember something about the PS Audio Gain Cell varying gain instead of attenuating the signal.
case 3: attenuation, gain, buffer - this keeps the gain and output impedance constant
For a tube pre I think case 1 would impart a more constant tube character, because the gain stage runs at a constant level and only attenuates afterward. Case 3 would be more dependent on the implementation of the gain stage. With sufficient bias, a linear response wouldn't color the signal more at higher volume than at lower.
Seems like running hot is the way to go. Unless there ends up being another reason not to.
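The case 1 vs case 3 intuition can be played with numerically. Modeling the tube stage as a simple tanh soft-clipper (a crude stand-in I'm assuming here, not a real tube model), case 1 drives the stage at full level and attenuates after, so its coloration stays constant and relatively high; case 3 attenuates first, so at low volume settings the stage barely distorts:

```python
import math

def tube_stage(x: float) -> float:
    """Toy tube-ish nonlinearity: soft saturation via tanh (illustrative only)."""
    return math.tanh(x)

def deviation(level: float, pre_attenuate: bool) -> float:
    """Crude coloration estimate: RMS deviation of the chain from a perfectly
    linear chain, for a sine of the given peak level at a -20 dB volume setting."""
    atten = 0.1  # -20 dB volume setting (assumption)
    n = 1000
    err_sq = sig_sq = 0.0
    for i in range(n):
        x = level * math.sin(2 * math.pi * i / n)
        if pre_attenuate:             # case 3: attenuate, then gain stage
            y = tube_stage(atten * x)
        else:                         # case 1: gain stage, then attenuate
            y = atten * tube_stage(x)
        ideal = atten * x             # what a perfectly linear chain would output
        err_sq += (y - ideal) ** 2
        sig_sq += ideal ** 2
    return math.sqrt(err_sq / sig_sq)

print(f"case 1 (attenuate after):  {deviation(1.0, False):.2%} deviation")
print(f"case 3 (attenuate before): {deviation(1.0, True):.2%} deviation")
```

With case 1 the result is independent of the volume setting, which matches the idea that it imparts a constant tube character.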

Thanks for pointing me in the right direction guys. 
PS Audio Gain Cell varying gain instead of attenuating signal.
Trouble with that one is you're usually varying the feedback to get your different gains, and that in itself is "changing the sound" of that gain section.
More feedback less gain (lower distortion).
Less feedback more gain (more euphonic/distorted).
Kinda the opposite of what you want/need.

There's no free lunch, is there?

Cheers George   
Benchmark recommends the highest gain setting at the pre and the lowest at the power amp.  They provide an option of 22 dBu (9.8 VAC) input sensitivity on the AHB2 power amp.  I understand that they want to move gain from the noisy environment (power amp) to the quieter environment (pre), not to mention reduced interconnect sensitivity to ambient electrical noise (better S/N).  A long time ago, mostly in Europe, there was a -10 dBV (0.316 VAC) standard for line level.  They believed it would save money, since only one item (the amp) needed more gain stages while multiple sources needed less.  I assume it didn't work out (too noisy?).  The most common line level in the US is likely +4 dBu (1.23 VAC), but I assume preamp output has to be higher, since the AHB2's lowest input-sensitivity setting is 8.2 dBu (2 VAC).  Is there any standard for power amp input level?  Most of the time 2 VAC is mentioned.
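For reference, the levels quoted above are straightforward to cross-check: dBu is referenced to 0.7746 V RMS (1 mW into 600 Ω) and dBV to 1 V RMS. A quick conversion sketch:

```python
import math

DBU_REF = math.sqrt(0.6)  # 0.7746 V RMS: the voltage giving 1 mW into 600 ohms

def dbu_to_v(dbu: float) -> float:
    """Convert dBu to volts RMS."""
    return DBU_REF * 10 ** (dbu / 20)

def dbv_to_v(dbv: float) -> float:
    """Convert dBV (referenced to 1 V RMS) to volts RMS."""
    return 1.0 * 10 ** (dbv / 20)

for label, v in [("22 dBu (AHB2 high setting)", dbu_to_v(22)),
                 ("+4 dBu (pro line level)", dbu_to_v(4)),
                 ("8.2 dBu (AHB2 low setting)", dbu_to_v(8.2)),
                 ("-10 dBV (consumer level)", dbv_to_v(-10))]:
    print(f"{label}: {v:.3f} V RMS")
```

Running it reproduces the figures in the post: roughly 9.8 V, 1.23 V, 2.0 V, and 0.316 V respectively.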