Is soundstage just a distortion?


Years back, when I bought a Shure V15 Type 3 and then later a V15 Type 5, Shure would send you their test records (I still have mine). I found the easiest test to be the channel-phasing test: in phase yielded a solid center image, but with one channel out of phase you got a mess, usually a decidedly off-center image.
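For anyone who wants to try the same test without a test record, here's a rough sketch that generates both signals – it assumes Python with numpy and scipy installed, and the file names are just mine:

```python
# Channel-phasing test, roughly as on the Shure test records: a mono tone
# sent to both channels in phase (solid center image) vs. with one channel
# inverted (a diffuse, hard-to-localize image).
import numpy as np
from scipy.io import wavfile

RATE = 44100
t = np.arange(RATE * 3) / RATE             # 3 seconds
tone = 0.5 * np.sin(2 * np.pi * 440 * t)   # 440 Hz test tone

in_phase  = np.column_stack([tone,  tone])   # L and R identical
out_phase = np.column_stack([tone, -tone])   # R inverted, 180 degrees out

wavfile.write("in_phase.wav",  RATE, in_phase.astype(np.float32))
wavfile.write("out_phase.wav", RATE, out_phase.astype(np.float32))
```

Play the first file and you should get the solid center image; play the second and the image falls apart.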

This got me thinking about the difference between analog and digital. At its best (in my home) I am able to get a wider soundstage out of analog than out of digital. Which raises the question: is a wide soundstage, one that extends beyond the speakers, just an artifact of phase distortion (something phono cartridges can be prone to)? If so, well, it can be a pleasing distortion.
zavato
Some recordings with huge untampered-with soundstage include Dave Mason Live, Paul McCartney Live, Clash Live, Lou Reed Rock and Roll Animal, New Jersey Percussion All-Stars on Nonesuch label, and Jazz at the Pawnshop.
I’m not sure whether we’re not talking about it because it is obvious or because there’s some obtuseness afoot, but it may be worth retreating to first principles for a moment to try to find a common vocabulary for what “soundstage” is and how it happens.

As I understand it, you have to start with basic psychoacoustics. You’ve got two ears, one on each side of your head, about a head’s breadth apart. This gives us stereoscopic hearing – basically the ability to triangulate the distance and direction of a sound’s source from the microsecond time differences between when the sound waves hit one ear and then the other. Two receptor points (ears) permit triangulation of the third point (the origin). Simple.
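To put rough numbers on that triangulation, here's a back-of-the-envelope sketch using the common far-field approximation ITD ≈ d·sin(θ)/c; the head width and speed of sound are just assumed averages:

```python
# Interaural time difference (ITD): how much earlier a sound arrives at the
# nearer ear, under the simple far-field approximation ITD = d*sin(theta)/c.
import math

d = 0.18   # ear-to-ear spacing in meters (assumed average)
c = 343.0  # speed of sound in air, m/s

for angle_deg in (0, 15, 30, 60, 90):
    itd = d * math.sin(math.radians(angle_deg)) / c
    print(f"source {angle_deg:2d} degrees off-center -> ITD of about {itd * 1e6:3.0f} microseconds")
```

Even a source hard off to one side only buys you around 500 microseconds of difference, which gives a sense of how tiny the cues are that the rest of the chain has to preserve.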

In its purest and simplest form, stereo recording does exactly the same thing. Two mics, set up about the same distance apart as your average pair of ears, record two channels of what – in theory – is exactly the acoustic cueing you’d hear with your own ears were they in the same spot as the microphones. Those two channels of information should thus be able to recreate the same ability to triangulate the source and distance of sounds. Soundstage, at its purest and most abstract.
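A quick illustration of the same idea (illustrative values, numpy assumed): take a mono source, delay one channel by a few hundred microseconds the way a spaced pair of mics would capture it, and on playback the phantom image leans toward the earlier channel:

```python
# Simulate what a spaced stereo pair captures from an off-center source:
# the same mono signal with a small interchannel time offset.
import numpy as np

RATE = 44100
t = np.arange(RATE) / RATE
source = 0.5 * np.sin(2 * np.pi * 440 * t)     # the mono "performer"

itd_samples = round(300e-6 * RATE)             # ~300 microseconds = 13 samples
left  = source                                 # nearer mic: arrives first
right = np.concatenate([np.zeros(itd_samples), # farther mic: delayed copy
                        source[:-itd_samples]])
stereo = np.column_stack([left, right])        # image leans left on playback
```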

From there, this pure abstraction pretty much goes to hell, for a lot of reasons. Even assuming a perfectly time- and phase-correct recording, the mission-critical microsecond cueing differences between the two channels are pretty small. There are a whole lot of ways they can get messed up: just from being read off whatever medium is at hand, processed, amplified, subjected to the whims of multiple power sources and AC lines, shot through all manner of wires, and ultimately sent off to a pair of speakers, which have to turn these electrical signals back into a physical process by vibrating just so in order to excite sound waves. Good luck with that. But from there, things really go wonky, because these same sound waves – emanating from whatever vibrating bit, or typically several distinct bits that may or may not really get along – are then loose in the room and have to find their way to your ears. They bounce off of stuff, stuff can get in the way, the room can resonate at weird frequencies, they can run into each other and either cancel each other out or reinforce in strange and inappropriate ways – in short, they can get into all manner of trouble. Only then, if everything goes as planned and they reach your ears still carrying the same microsecond encoding as first recorded (again, assuming it was recorded in the first place), do you get to “hear” this intended soundstage. It’s a wonder it ever happens at all.
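To make one of those failure modes concrete: a single delayed reflection summing with the direct sound carves periodic notches into the frequency response – comb filtering. A toy sketch, with an arbitrary 1 ms reflection at 70% strength:

```python
# Comb filtering from one room reflection: direct sound plus a delayed,
# attenuated copy cancels near odd multiples of 1/(2*delay) = 500 Hz here.
import numpy as np

RATE = 44100
delay_n = int(0.001 * RATE)       # ~1 ms reflection path difference

impulse = np.zeros(4096)
impulse[0] = 1.0                  # direct sound
impulse[delay_n] += 0.7           # attenuated reflection

mag = np.abs(np.fft.rfft(impulse))
freqs = np.fft.rfftfreq(len(impulse), 1 / RATE)
for f_null in (500, 1500, 2500):
    i = int(np.argmin(np.abs(freqs - f_null)))
    print(f"near {f_null} Hz: level {mag[i]:.2f} (peaks reach {mag.max():.2f})")
```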

All of this, in turn, assumes a relatively simply miked, straightforward stereo recording of, we’ll call it, unamplified source sound. That, more often than not, is not a correct assumption. Multi-channel recordings – including separately processed, altered, or even wholly created electronic channels – have no “original” soundstage to preserve. None. Rather, they present the building blocks for the sound engineer to create whatever soundstage they see fit. Maybe they’re looking at a multi-track recording of a symphony and want to recreate a sense of the original soundstage through the mix. But even assuming that’s the goal, it’s still using the captured channels to create an approximation, and then down-mixing into the two channels of a stereo recording. Fundamentally, what we might describe as the “soundstage” is frequently an arbitrary fabrication born of the decisions of the recording engineer. (Which then has to make the odyssey back from the recorded medium, through the chain, to your ears. Again, good luck.)
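For what it’s worth, the most basic tool in that fabrication is the pan pot. A minimal sketch of constant-power panning (my own illustrative code, numpy assumed), which places a mono stem anywhere between the speakers using nothing but a level difference – no original spatial cue involved:

```python
# Constant-power panning: position a mono track in the stereo image purely
# by level ratio, keeping L^2 + R^2 (total power) roughly constant.
import numpy as np

def pan(mono: np.ndarray, position: float) -> np.ndarray:
    """position: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
    angle = (position + 1.0) * np.pi / 4.0   # map position to 0..pi/2
    return np.column_stack([mono * np.cos(angle),   # left gain
                            mono * np.sin(angle)])  # right gain

RATE = 44100
t = np.arange(RATE) / RATE
stem = 0.5 * np.sin(2 * np.pi * 220 * t)            # one multitrack stem
mix = pan(stem, -0.5) + pan(0.3 * stem, +0.8)       # the engineer's choices
```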

Anyway, you get the idea. “Soundstage” is, in some regards, stupefyingly mysterious. But in others, it’s really pretty straightforward. There are all manner of things that can affect, mess with, or otherwise screw it up. But they are, by and large, knowable things. As for one recording medium being inherently better than another at preserving/conveying/reproducing this information – in theory, that makes no sense to me. In practice, who the hell knows? In any event, that's my theory and I'm sticking to it (unless and until I change my mind).
Mezmo, microphones do not "hear" the way human ears hear. There are dummy head microphone systems that attempt to replicate the human head/ears, but recordings made with them typically do not sound right when played back over loudspeakers.

Here's a primer on stereo mic recording techniques.
Who the hell cares? If there is a soundstage, why do I care whether it's faithful to the original? I'm far more occupied with replicating the final version 'accurately'. That's what I care about, because along with that goes the entire sphere of my gear's performance issues. My goodness! Who among you has a system so perfect as to need to focus on outside parameters? We only have control over the decisions we make about our own gear.
Hi All,

I have this related question and hope some knowledgeable members, especially Atmasphere, who does his own recordings, can share their views!

It is relatively easy to understand how we get soundstage width in a stereo system, as the two channels carry slightly different information.

Then how do we get soundstage height, since there are no top and bottom channels? Is it an artifact of speaker characteristics and in-room placement? Or is there some hidden information in the recording that can create this effect?

In many well-set-up systems, I have heard clear and relatively consistent (across different systems) soundstage height! Yes, a higher ceiling and taller speakers seem to help in this area. So it is hard for me to believe this is just an artifact!

I appreciate all your comments!