Thank you for the very helpful comments.
Here's what I distill from the comments. A plot with axes of frequency and amplitude (a spectrum) can describe a sound at one instant, but it's the changes in the waveform over time that give the sound its meaning. It's the envelope of a note over time, including, perhaps, a percussive attack and its changing frequencies and their amplitudes as the note decays, that provides clues to what instrument is making the sound. And the listener's brain recognizes the "flute-iness" of the sound over time, or whatever instrument is playing.
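For those who like to tinker, here's a minimal sketch of that idea: the same pitch with the same kind of sine-wave building blocks, but different attack times, partial balances, and decay rates, yields recognizably different "instruments." The function name, the partial lists, and the decay constants are my own illustrative assumptions, not anyone's actual synthesis recipe.

```python
import numpy as np

SR = 44100  # sample rate, Hz

def note(freq, dur, partials, attack=0.01, decay=1.5):
    """Synthesize a note as a few harmonics, each shaped by an
    envelope: a linear attack and an exponential decay."""
    t = np.arange(int(SR * dur)) / SR
    wave = np.zeros_like(t)
    for k, amp in enumerate(partials, start=1):
        # Higher partials decay faster here -- the shifting harmonic
        # balance over time is part of what cues the instrument.
        env = np.minimum(t / attack, 1.0) * np.exp(-t * decay * k)
        wave += amp * env * np.sin(2 * np.pi * freq * k * t)
    return wave / np.max(np.abs(wave))

# Same pitch, different envelope and partial balance:
# a fast, bright attack reads as a pluck; a slow, pure one as flute-like.
pluck = note(220.0, 2.0, partials=[1.0, 0.6, 0.4, 0.25], decay=3.0)
flute = note(220.0, 2.0, partials=[1.0, 0.15, 0.05], attack=0.08, decay=0.4)
```

Write either array to a WAV file and the static spectra overlap considerably; it's the time course that makes them sound like different instruments.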
OK, I get that. But I'm not sure that adequately explains how we discern multiple instruments from a single changing waveform or the position of the instruments in 3D space. I'll have to think more about that.
I acknowledge those of you who have pointed out the role of two ears spaced apart from each other. But I think there must be more to it. After all, there are people like David Pack, a wonderful musician and record producer who has been totally deaf in one ear for the last 40 years, whose work is not devoid of spatial information. Nor is a single full-range speaker incapable of any spatial presentation.
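The two-ear argument usually comes down to arrival-time differences, which are tiny. Here's a back-of-the-envelope sketch using the simple path-difference model d·sin(θ)/c; the head width and the model itself are rough assumptions for illustration, not a precise model of hearing:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature
HEAD_WIDTH = 0.215      # m, an assumed typical ear-to-ear distance

def interaural_time_difference(azimuth_deg):
    """Rough interaural time difference (seconds) for a distant source
    at the given azimuth (0 degrees = straight ahead), using the
    simple extra-path-length model d * sin(theta) / c."""
    theta = math.radians(azimuth_deg)
    return HEAD_WIDTH * math.sin(theta) / SPEED_OF_SOUND

itd = interaural_time_difference(90.0)  # source fully to one side
print(f"{itd * 1e6:.0f} microseconds")
```

A source fully to one side arrives on the order of 600 microseconds earlier at the near ear, and the brain resolves differences far smaller than that. Yet that cue is entirely absent for a one-eared listener or a single speaker, which is exactly why I suspect level, spectral, and reverberant cues carry more of the spatial load than the timing story alone suggests.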
Apologies to those of you who think this is a simple-minded discussion. I find it deeply profound and an astonishingly complex interaction of physical and cognitive phenomena, worthy of reflection. Well, there are those who stand at the rim of the Grand Canyon and think, "hey, it's a water-carved canyon, so what?"
This discussion reminds me how awesome it is that technology can reproduce this extraordinarily intricate physical process to a degree that makes sublime enjoyment possible.
Conversely, it's equally awesome to realize that the brain is capable of perceiving and appreciating this magic from the cheapest, flimsiest nickel-sized speaker in a cheap cell phone. Maybe some people who are satisfied with the sound from their cell phone just have vivid pattern-recognition skills.
Finally, as to the ad hominem posts, you should be ashamed.