I started my career in audio with the invention, patenting, and prototyping of the Shadow Vector quadraphonic decoder, back in 1973. That SQ/stereo decoder was specifically designed to preserve ambient cues and spatial impression, without adding anything like a reverb circuit, or taking anything away from the source material.
There’s a lot of content in a 2-channel recording that is destroyed on playback or lies below the threshold of audibility. This is not the fault of the recording, but of the playback system. In general terms, this is low-level information with L/R phase angles between +/- 45 and 180 degrees. A (very good) quadraphonic decoder will route this information to the sides and rear, depending on phase angle, without affecting or deforming the frontal image. Ideally, random-phase reverberation (from spaced mikes, a reverb plate, or a good digital reverb) should appear as an evenly weighted sphere around the listener, with no bumps, holes, or hotspots, just as it does in real, physical acoustic spaces. Again, this random-phase information is present on any stereo recording with even a slight sense of space, because studio professionals consider "dry" vocals intolerable, so some reverb is added to just about everything. And the correct method of presentation is spherical, to match real acoustic spaces.
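As a rough illustration of the phase-angle idea (this is not the Shadow Vector or SQ decode matrix, just a minimal sketch), the short Python example below estimates the L/R interchannel phase angle of a stereo block and buckets it the way described above: roughly +/- 45 degrees reads as frontal material, larger angles as side/rear ambience. The function names and the 45-degree threshold are illustrative assumptions, not part of any production decoder.

```python
# Illustrative sketch only: estimate the L/R interchannel phase angle of a
# stereo block and classify it as "front" or "side/rear" ambience. NOT the
# Shadow Vector or SQ decode matrix; names and thresholds are assumptions.
import numpy as np
from scipy.signal import hilbert

def interchannel_phase_deg(left, right):
    """Mean L/R phase difference (degrees) over one block, via analytic signals."""
    al, ar = hilbert(left), hilbert(right)
    # Averaging the complex product weights louder samples more heavily.
    phase = np.angle(np.mean(al * np.conj(ar)))
    return np.degrees(phase)

def classify_block(left, right, front_limit=45.0):
    """Rough front vs. ambience split based on interchannel phase angle."""
    angle = interchannel_phase_deg(left, right)
    return ("front" if abs(angle) <= front_limit else "side/rear"), angle

if __name__ == "__main__":
    fs, f = 48000, 440.0
    t = np.arange(fs // 10) / fs
    tone = np.sin(2 * np.pi * f * t)
    # In-phase material stays up front...
    print(classify_block(tone, tone))
    # ...while a 120-degree interchannel shift reads as ambience.
    shifted = np.sin(2 * np.pi * f * t + np.radians(120))
    print(classify_block(tone, shifted))
```

A real decoder of course works continuously and smoothly rather than on coarse blocks, but the classification step above is the same basic idea: steer by interchannel phase angle, leave the frontal image alone.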
Unfortunately, 2-speaker playback abbreviates the most realistic spatial presentation, although some speakers preserve a vestige of it. Smooth dispersion patterns, freedom from resonant energy storage in the drivers, and freedom from diffraction artifacts (no sharp cabinet edges) can allow the sound space to leave the confines of the speaker cabinet (as it should in a good loudspeaker). Most listeners never hear this, but it’s still there on the recording, waiting to be heard. (And no, it doesn’t take 11 speakers to preserve spatial information. That’s for special effects in movie theaters.)
For some reason, electronics can also affect the spatial impression. I suspect that many electronics destroy, or alter, the low-level interchannel signals that convey this spatial impression, somewhat akin to MP3 lossy compression discarding "unnecessary" low-level bits. Nothing as violent as that happens in normal electronics, of course, but it still subjectively sounds like bit reduction, with a loss of "air", spatial realism, and realistic tonality. I am not sure of the mechanism, but high-order nonlinearities, power-supply switch-noise grunge, correlated noise, and odd, hard-to-pin-down capacitor colorations (possibly chemical reactions in the dielectric) all seem to play a role in shrinking the sound stage and destroying the ambient impression.
That’s why the Raven and Blackbird minimize energy storage in the signal path. There are no feedback loops, either local or overall. There are no coupling caps on the input, between stages, or on the output. The balanced circuit presents a nearly constant demand on the power supply, which is further smoothed by the shunt-regulator tubes in the preamp. The signal goes in, is fed to a Class A balanced pair of very linear vacuum tubes, and is transformer re-balanced on the way out. No signal recirculation, no phase inverters, no cathode followers, and no secondary side chains (DC servo circuits, dynamic loads, etc.), even at very low levels (which is why it is so quiet).