Musicians in your living room vs. you in the recording hall?


When it comes to imaging, soundstage and mimicking a recorded presentation, which do you prefer?
Do you want to hear musicians in your living room, or do you want to be transported to the space where the musicians were?
erik_squires
@tatyana69- Thank you for so concise an explanation of sound stage/sound space, i.e.: "Backing vocals or whatever coming from a DIFFERENT PLACE IN THE ROOM is ESSENTIAL to add to the DIMENSIONS of the music." That’s exactly what most of us desire ("want to extract") from our music. As mentioned so many times: provided that’s what’s been recorded/intended.
@prof1- Back when directionality in our hearing was a survival skill, there were no symphony orchestras. Had there been, chances are they wouldn’t have eaten too many audiophiles then, either. Then again, if an orchestra’s hitting one with fff or ffff (i.e.: Firebird Finale), that’s also an "attack" (semantic gymnastics, just for fun). Happy (and safe) listening! ;-)
Directionality is a really complex thing. Definitely useful. I can walk around in total darkness and sense walls (not very accurately, but I can manage at a snail's pace) - small obstacles are beyond my hearing acuity.

Below about 2000 Hz we use the difference in arrival time of the sound at each ear to work out left-right position. Above 2000 Hz we use the relative loudness at each ear, as the head blocks out frequencies above 6000 Hz very effectively, even for small angles off axis like 30 degrees. For these reasons I believe phase is very important: if high frequencies are delayed by your typical minimum-phase filter or MQA, then imaging won't be as precise, because location cues arrive later than they should.
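The arrival-time cue can be sketched with the classic Woodworth spherical-head approximation. This is only a rough model; the head radius and speed of sound below are illustrative round numbers, not measurements of anyone's head or room.

```python
import math

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Approximate interaural time difference (seconds) for a source at
    the given azimuth, per the Woodworth spherical-head model.
    head_radius_m (~8.75 cm) and c (343 m/s) are assumed typical values."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + math.sin(theta))
```

For a source 90 degrees off axis this gives an ITD of roughly 0.65 ms. A 1 kHz tone has a 1 ms period, so below roughly 2 kHz the brain can still attribute that lag to a unique direction; at higher frequencies the phase cue becomes ambiguous and level differences take over.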

Front/back and up/down directionality is more complex. We use the comb filtering caused by floor reflections to work out height. We also use the spectral distortion caused by our pinnae to work out front versus back and, to a lesser extent, up versus down.

Anyway, like a dog, we will instinctively tilt our head or move it side to side to better deploy our localization abilities, especially since high frequencies are so heavily attenuated or blocked by the head.

I would say we can detect the direction of a sound to within two or three inches from 20 feet away, given enough sonic information (it won't work for a 100 Hz tone, where directionality is challenged).
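As a sanity check on that figure, a couple of inches at 20 feet works out to well under a degree of arc, which is in the same ballpark as the roughly one-degree minimum audible angle commonly reported for frontal sources. A back-of-envelope calculation:

```python
import math

# "Two or three inches at 20 feet" expressed as an angle.
offset_in = 2.5            # midpoint of the claimed 2-3 inch resolution
distance_in = 20 * 12      # 20 feet in inches
angle_deg = math.degrees(math.atan(offset_in / distance_in))
# Comes out to roughly 0.6 degrees of arc.
```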
There are four dimensions for a given space. The three physical dimensions - length, width and depth (x, y, z) - are determined for a live recording by reverberant decay, room reflections, echo and other acoustic properties of the recording space picked up by the microphones. The fourth dimension, time, allows the human brain to integrate the physical parameters to calculate velocities and locations, dx/dt, etc.

Squirrels, by contrast, have very poor integration skills.

If time were not real, man would have to create it. 🤗