Soundstage Width and Depth


I’m curious about what your systems produce when it comes to soundstage. My speakers are about 8’ apart and I sit about 10’ from the front plane of the speakers. The speakers are toed in so that each is pointed at a spot about 8” outside my ears (laser verified). My room is treated with bass absorption and diffusers.

In many recordings my soundstage is approx 28’ wide and, although this is tougher to determine, I would say on most recordings I’m hearing sounds 10’-15’ further back than the speaker plane. Some sounds, usually lead guitars, are presented slightly in front of the plane of the speakers. There are also recordings that produce height in the soundstage. Some fill the room floor to ceiling, while others are more on the same plane about 5’ from the floor. I do get layers, usually in about the same order front to back: guitars, lead singer, bass guitar, drums, violins, and backup instruments and singers. Again, this is recording dependent. Intimate recordings that feature a singer playing a guitar usually have all of the sound between the speakers. Is this what everyone experiences? Could the depth be deeper? Do many of you hear sounds in front of the speaker plane? Do you have any recordings that accentuate the front-to-back soundstage?
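For anyone curious, the toe-in angle implied by the geometry above can be worked out with a little trigonometry. A minimal sketch; the ear spacing and exact coordinates are my assumptions for illustration, not the poster's measurements:

```python
import math

# Illustrative only: rough toe-in angle implied by the setup described above.
# Assumed positions (my guesses, not the poster's measurements): speakers
# 8 ft apart on the y = 0 plane, listener centered 10 ft away, ears ~7 in
# apart, each speaker aimed 8 in outside the near ear.
speaker_x = 4.0               # ft; left speaker at (-4, 0)
listener_y = 10.0             # ft from the speaker plane
half_ear_spacing = 3.5 / 12   # ft
aim_offset = 8.0 / 12         # ft; aim point 8 in outside the ear

# The left speaker aims at a point 8 in to the left of the left ear.
aim_x = -(half_ear_spacing + aim_offset)
dx = aim_x - (-speaker_x)     # horizontal run from speaker to aim point
angle_to_aim = math.degrees(math.atan2(dx, listener_y))
print(f"toe-in from straight ahead: {abs(angle_to_aim):.1f} degrees")
```

Under these assumptions the toe-in comes out in the high teens of degrees, i.e. well short of pointing straight at the listening seat.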
baclagg
Yes, this is the most current knowledge and there is no indication it is incorrect, but even with these cues it can be difficult to accurately assess height. I spent a number of years doing R&D on hearing aids and similar audio "devices". Our group believed we were among the first to look at how the design of the hearing aid could be improved with the goal of preserving the positional cues most take for granted. Unfortunately, that R&D was abandoned after I left, along with other programs, to pump up the balance sheet before a sale. It was a bit contentious at the time as well. It indicated issues with signal-processing delay differences masking timing cues.

"Technically",  just as you have indicated, frequency filters that mimic the pinna, can provide a sense of height in head-phonic playback and encoded in only two channels. There has been a fair amount of research done with HATS (head and torso simulators) for recording, but, as you indicated, it requires tailoring to the individual to work properly. If you attempt that technique with speakers, you get not only the HATS transform, plus the listener ... and two pinnas are not better than one.  W.R.T. your particular situation, making a wild ass guess, the microphone above his head, if not omni and not pointed at him, created a filtering effect that simulated height with pinna filtering. Curious if the wavefront from the electrostats is less impacted by torso/head/pinna than would normally occur with dynamic speakers.  Interesting!  I may have to pick up a pair now and do some testing.


Speaking of interesting, regarding the last post about the difficulty of creating a stable image outside the speakers: have you done much research on Ambiophonics?


Regarding height cues out in the "real world", my understanding is that the way sound diffracts around the head and outer ear (the pinna) from above is what gives us height cues. I have read papers and articles about encoding these "head and pinna transforms" into a signal to convey height information, but to really do it right, the equalizations would have to be tailored to the individual's ears. (One possible application would be in the helmets of fighter pilots, so that an audible threat warning would also convey the direction. Head position tracking would of course have to be included.)  
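A toy illustration of the spectral idea behind those pinna cues: the outer ear imposes elevation-dependent notches in the treble (commonly discussed in the 6-10 kHz region), which a filter can mimic. The notch-versus-elevation mapping below is a made-up simplification of my own, and the filter is a standard RBJ-cookbook biquad notch, not anyone's actual HRTF:

```python
import math

def biquad_notch(x, fs, f0, q=5.0):
    """Apply a biquad notch filter centered at f0 Hz (RBJ cookbook form)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    cosw = math.cos(w0)
    b0, b1, b2 = 1.0, -2 * cosw, 1.0          # notch numerator
    a0, a1, a2 = 1 + alpha, -2 * cosw, 1 - alpha
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for s in x:                                # direct form I
        out = (b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        y.append(out)
        x2, x1 = x1, s
        y2, y1 = y1, out
    return y

def elevation_notch_hz(elev_deg):
    # Invented toy mapping: first pinna notch rises from ~6 kHz at ear
    # level toward ~10 kHz overhead. Real HRTFs vary per individual.
    return 6000 + 4000 * max(0.0, min(1.0, elev_deg / 90.0))

fs = 44100
tone_hz = elevation_notch_hz(0)               # 6 kHz test tone
sig = [math.sin(2 * math.pi * tone_hz * n / fs) for n in range(fs // 10)]
low = biquad_notch(sig, fs, elevation_notch_hz(0))    # notch sits on the tone
high = biquad_notch(sig, fs, elevation_notch_hz(60))  # notch moved up and away
rms = lambda v: math.sqrt(sum(s * s for s in v) / len(v))
print(rms(low), rms(high))   # the on-tone notch attenuates far more
```

The point is only that a two-channel signal can carry such spectral shaping; whether the listener hears "height" from it depends on how well the shaping matches their own ears.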

I don't see how height information could be encoded in a normal two-channel recording... BUT something weird happened to me years ago.

I’m speechless that people still talk about being surprised by sound outside the speakers. You’ll have to excuse me, but isn’t that a little archaic? As in 1970s? Come on, guys. The best and easiest way to look at soundstage imho is that the better the output signal, the larger the 3-dimensional sphere of the recording venue will be presented. When you finally get your system working, the expanding sphere of soundstage should be well-defined in width, depth and height. A wonder to behold. 🤗

“An ordinary man has no means of deliverance.” - old audiophile axiom
Heaudio123, I have not delved into ambiophonics to the point of understanding it, but at least for a while Ralph Glasgal was using SoundLab speakers. I am under the impression that with ambiophonics the listener’s position is critical, and I’m more inclined towards wide-sweet-spot presentations. 

Somebody - might have been Ralph? - once used SoundLab speakers as microphones... not very practical, but from what I was told the results were pretty good, at least when played back through the "microphones".  

Duke
@geoffkait wrote: "The best and easiest way to look at soundstage imho is that the better the output signal the larger the 3-dimensional sphere of the recording venue will be presented."

I’ll concede that what you propose is the "easiest" way to look at soundstage, but I’m not sure it’s the "best," because it is incomplete: it tells us nothing about how or why, nor does it offer guidance as to how we might improve.

Duke
Ambiophonics: I would start with this: https://cdn.website.thryv.com/7b2b654758d449b08935c9dfa207e8f9/files/uploaded/Ambiophonics_Book.pdf

Then read this article on methods that are more robust:  https://www.microsoft.com/en-us/research/wp-content/uploads/2013/10/Ahrens2013a.pdf

While listener position is critical, it is much more robust than the OP's "fluke," which requires everything to be perfect to "maybe" work.
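For those skimming the links: the playback side of Ambiophonics centers on recursive crosstalk cancellation (RACE), where each output channel recursively subtracts an attenuated, slightly delayed copy of the opposite output so that each speaker cancels its own leakage at the far ear. A bare-bones sketch of the recursion; the gain and delay values here are illustrative, not tuned:

```python
# Minimal RACE (Recursive Ambiophonic Crosstalk Elimination) sketch.
# Each output subtracts a delayed, attenuated copy of the OPPOSITE output,
# producing a decaying train of alternating-sign cancellation echoes.
def race(left, right, gain=0.85, delay_samples=3):
    out_l, out_r = [], []
    for n in range(len(left)):
        xl = out_r[n - delay_samples] if n >= delay_samples else 0.0
        xr = out_l[n - delay_samples] if n >= delay_samples else 0.0
        out_l.append(left[n] - gain * xl)
        out_r.append(right[n] - gain * xr)
    return out_l, out_r

# Usage: a single click in the left channel only. The right output carries
# the cancellation train, bouncing back and forth at the delay spacing.
L = [1.0] + [0.0] * 15
R = [0.0] * 16
ol, or_ = race(L, R)
print(ol)
print(or_)
```

In a real system the delay corresponds to the interaural path difference (on the order of 60-120 microseconds) and the gain to the head-shadow attenuation; both are tuned for the actual speaker span and listening distance.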