Electrical/mechanical representation of instruments and space


Help, I'm stuck at the juncture of physics, mechanics, electricity, psycho-acoustics, and the magic of music.

I understand that the distinctive sound of a note played by an instrument consists of a fundamental frequency plus a particular combination of overtones in varying amplitudes, and that the combination can be graphed as a particular, nuanced two-dimensional waveform shape. Then you add a second instrument playing, say, a third above the note of the first instrument, and its unique waveform shape represents that instrument's sound. When I'm in the room with both instruments, I hear two instruments because my ear (rather, two ears, separated by the width of my head) can discern that there are two sound sources.

But let's think about recording those sounds with a single microphone. The microphone's diaphragm moves and converts changes in air pressure to an electrical signal. The microphone is hearing a single set of air pressure changes, consisting of a single, combined wave from both instruments. And the air pressure changes occur in two domains, frequency and amplitude (sure, it's a very complicated interaction, but still capable of being graphed in two dimensions). Now we record the sound, converting it to electrical energy, stored in some analog or digital format. Next, we play it back, converting the stored information to electrical and then mechanical energy, manipulating the air pressure in my listening room (let's play it in mono from a single full-range speaker for simplicity).

How can a single waveform, emanating from a single point source, convey the sound of two instruments, maybe even in a convincing 3D space? The speaker conveys amplitude and frequency only, right? So what is it about amplitude or frequency that carries spatial information for two instruments/sound sources? And of course, that is the simplest example I can design. How does a single mechanical system, transmitting only variations in amplitude and frequency, convey an entire orchestra and choir as separate sound sources, each with its unique tonal character? And then add to that the waveforms of reflected sounds that create a sense of space and position for each of the many sound sources?
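To make that "single, combined wave" concrete, here is a quick numpy sketch of the setup I'm describing (the overtone amplitudes are invented purely for illustration):

```python
import numpy as np

fs = 44100                  # sample rate, Hz
t = np.arange(fs) / fs      # one second of time

def note(f0, harmonic_amps):
    """Sum a fundamental and its overtones into one waveform."""
    return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(harmonic_amps))

# Two instruments a major third apart, each with its own
# (made-up) overtone recipe, i.e. its own timbre.
inst1 = note(440.0, [1.0, 0.5, 0.25, 0.12])   # A4
inst2 = note(554.4, [1.0, 0.2, 0.6, 0.05])    # C#5 (approx.)

# The air, and therefore the mic diaphragm, sees only the sum:
combined = inst1 + inst2
```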

77jovian
A few things to consider here:

  1. You get placement information for instruments via two mechanisms: volume and timing. If the right side is louder for a particular instrument, then that is where you perceive the instrument to be. That is the mechanism for placement of sounds that are continuous. The other mechanism is arrival time (between the left and right ear). That is used for transient sounds. A clap will arrive at one ear slightly before the other. Your auditory processing system is able to time that to pretty fine resolution and give you an idea of where it came from. Some posit that arrival time also plays a role in the placement of continuous sounds. (The arrival-time arithmetic is sketched in the code below, after this list.)
  2. The electromechanical system only stores and transmits sound waves (pressure variation). It does not transmit instruments, etc. It is your brain, knowing what a grouping of sounds mean, that is able to extract instruments and place them.
A single speaker will not, on its own, convey any sense of space, but a room may create those cues (accurate or not), and your brain doesn't like an information vacuum, so it will try to map what it hears onto what it knows.
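A back-of-the-envelope sketch of that arrival-time cue, in Python (the formula is the standard far-field approximation, and the head width is a typical figure, not a measured one):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 °C
HEAD_WIDTH = 0.18        # m, a typical ear-to-ear distance (assumed)

def interaural_time_difference(azimuth_deg):
    """Far-field approximation: the extra path length to the far ear
    is head_width * sin(azimuth)."""
    extra_path = HEAD_WIDTH * math.sin(math.radians(azimuth_deg))
    return extra_path / SPEED_OF_SOUND

# A clap 45 degrees off-center arrives ~370 microseconds earlier
# at the near ear, which the auditory system can resolve:
print(f"{interaural_time_difference(45) * 1e6:.0f} microseconds")
```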

I think you meant the frequency and time domains. Amplitude is part of either the frequency-domain or time-domain representation.
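To illustrate that point (my own example, not the poster's): the same signal is amplitude vs. time in one domain and amplitude vs. frequency in the other, and the FFT converts between them:

```python
import numpy as np

fs = 1000
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * 50 * t)     # time domain: amplitude vs. time

X = np.fft.rfft(x) / (len(x) / 2)        # frequency domain: amplitude vs. frequency
freqs = np.fft.rfftfreq(len(x), 1 / fs)
peak = np.argmax(np.abs(X))
print(freqs[peak], abs(X[peak]))         # ~50 Hz at amplitude ~0.8
```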
77jovian commits an all-too-common flaw in logic which, since no one studied logic, no one caught. Except me, of course.

Hint: "two ears"-
When I'm in the room with both instruments, I hear two instruments because my ear (rather two ears, separated by the width of my head)

Two ears. Got it?

Then, inexplicably:
But let's think about recording those sounds with a single microphone.

Wait- what?!?! 

Need I say more? Really?

Yeah, yeah. I could answer the one mic question too. But fix the first one first, okay?
Um, first, some instruments don't have a lot of energy in the fundamental.

But otherwise, you may be very interested in Head Related Transfer Functions.

Best,
E
@77jovian You may find the following writeup to be instructive. (Coincidentally, btw, it uses the example of a flute for illustrative purposes, as you had also done):

http://newt.phys.unsw.edu.au/jw/sound.spectrum.html

Note particularly the figure in the section entitled "Spectra and Harmonics," which depicts the spectrum of a note being played by a flute.

To provide context, a continuous pure sine wave at a single frequency (which is something that cannot be generated by a musical instrument) would appear on this graph as a single very thin vertical line, at a point on the horizontal axis corresponding to the frequency of the sine wave.

The left-most vertical line in the graph (at 400 Hz) represents the "fundamental frequency" of the note being played by the flute. The vertical lines to its right represent the harmonics. The raggedy stuff at lower levels represents the broadband components I referred to earlier. Note this statement in the writeup:

... the spectrum is a continuous, non-zero line, so there is acoustic power at virtually all frequencies. In the case of the flute, this is the breathy or windy sound that is an important part of the characteristic sound of the instrument. In these examples, this broad band component in the spectrum is much weaker than the harmonic components. We shall concentrate below on the harmonic components, but the broad band components are important, too.
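Here is a rough numerical counterpart to that figure (the harmonic rolloff and the noise level are invented for illustration, not measured flute data):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs
rng = np.random.default_rng(0)

# Fundamental at 400 Hz plus harmonics, plus a weak broadband
# ("breathy") component, loosely mimicking the flute figure.
f0 = 400.0
tone = sum((0.6 ** k) * np.sin(2 * np.pi * f0 * (k + 1) * t)
           for k in range(6))
breath = 0.01 * rng.standard_normal(len(t))

spectrum = np.abs(np.fft.rfft(tone + breath))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
# Sharp peaks at 400, 800, 1200, ... Hz stand above a low,
# continuous noise floor, just as in the linked plot.
```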

Now if a second instrument were playing at the same time, the combined spectrum of the two sounds at a given instant would look like what is shown in the figure for the flute, plus a number of additional vertical lines corresponding to the fundamental and harmonics of the second instrument, with the broadband component generated by the second instrument summed in as well. ("Summed" in this case refers to something more complex than simple addition, since timing and phase angles are involved; perhaps "combined" would be a better choice of words.) And since our hearing mechanisms can interpret that complex spectrum as coming from two different instruments when we hear them in person, they will do the same when we hear it in our listening room, to the extent that the information is captured, preserved, and reproduced accurately in the recording and playback processes.
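Here is a quick numerical illustration of that combining (the harmonic recipes are invented): the combined spectrum simply contains both sets of lines, which is all the recording has to capture:

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs

def note(f0, amps):
    return sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(amps))

# Two instruments with made-up harmonic recipes, summed in the air:
combined = note(400.0, [1.0, 0.5, 0.25]) + note(500.0, [1.0, 0.3, 0.4])

mag = np.abs(np.fft.rfft(combined))
freqs = np.fft.rfftfreq(len(combined), 1 / fs)
print(sorted(freqs[np.argsort(mag)[-6:]]))
# -> lines at 400, 800, 1200 Hz and 500, 1000, 1500 Hz
```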

Best regards,
-- Al

So, I’ll ask again: how is the audio signal in cables and electronics affected by external forces such as RF and vibration, as well as by better cables? And what IS the audio signal? Anybody! Is it electrons? Photons? Current? Voltage? An electromagnetic wave? Something else? That’s really what the OP is talking about. Don’t be shy!