Much of the music we listen to today is recorded in studios using multi-track recording equipment. Microphones are chosen for their ability to best capture the sound of the instrument or voice being recorded. It's important to note here that much of what timbre is, is the harmonic makeup of the instrument: some harmonics louder than others, but all reaching your ears at precisely the right time.

Microphones do not hear like the ear hears. That, too, is important to understand. A microphone consists of a very delicate diaphragm suspended in air that moves back and forth with changes in air pressure (sound waves). That diaphragm is connected to one of several different electromagnetic mechanisms that converts the motion of the diaphragm into an alternating electrical current. That current flows down a cable to the mixing console and becomes the basis for the audio signal that we will process, record, and ultimately send to a loudspeaker, where it will be converted back to sound.

The ear, on the other hand, is conceptually a very complicated device. It too consists of a very delicate diaphragm suspended in air that moves back and forth with changes in air pressure. That diaphragm is connected, via a fairly elaborate mechanical linkage, to a remarkable organ called the basilar membrane. At the basilar membrane, the mechanical motion is converted into neurological impulses that are sent to our brain, where, along with some other processing, those impulses are presented to our conscious mind. In other words, the microphone converts sound into an analogous electrical waveform, while the ear converts it into neurological impulses.

The microphone has just one input and one output. We have two ears, and a big part of what goes on in the brain before the neurological information is presented to our consciousness is the integration of the data from both ears into a single illusion. Each basilar membrane has about 30,000 outputs! Those 30,000 or so nerve endings are spread out across the membrane, so that each nerve ending ends up representing, roughly, a different frequency. This is how we can discriminate pitch and harmonies. Visualize the microphone with a filter that divides the incoming signal into 30,000 different sine waves and transmits the loudness (and, for low-frequency signals, the phase) of each such sine wave down a separate cable to the console! Visualising that, are we? (There's a small sketch of that idea at the end of this post.)

Another important issue has to do with localization: the ability to tell which direction a sound is coming from. The microphone can't detect this at all, while the ear does in several interactive and highly complex ways. As sound enters the outer ear, tiny reflections of the sound bouncing off the pinna (the flap of skin surrounding the ear canal) recombine with the direct signal to create very complex and distinctive interference patterns (comb filtering in the range between 5 and 15 kHz). Each different angle of arrival yields its own distinctive and audible pattern, and the brain uses these (actually it happens at the basilar membrane and in the auditory nerve on the way to the brain) to determine which direction any sound element is coming from, at each individual ear. (The second sketch at the end of this post shows how a single short reflection carves that kind of pattern.)

So yes, it matters. I sometimes wonder how many audiophiles have never lived with a time/phase-accurate speaker. Like I said in my previous post, once you do, you won't go back to high-order crossovers.
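
Since I asked you to visualise that 30,000-way filter bank, here is a toy sketch of the idea in Python with numpy. The sample rate, the band edges, and the test tone are numbers I made up for illustration, and a handful of bands stands in for thousands of nerve endings; it's a picture of the analogy, not a model of the ear.

```python
# Toy "30,000-cable microphone": split a signal into bands and report the
# level in each band. Channel count, band edges, and test signal are all
# invented for illustration.
import numpy as np

fs = 48000                       # sample rate in Hz (assumed)
t = np.arange(fs) / fs           # one second of audio
# A toy "instrument": a 220 Hz fundamental plus two quieter harmonics.
signal = (1.0 * np.sin(2 * np.pi * 220 * t)
          + 0.5 * np.sin(2 * np.pi * 440 * t)
          + 0.25 * np.sin(2 * np.pi * 660 * t))

# Split the spectrum into a handful of bands, each standing in for a group
# of nerve endings along the basilar membrane.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
band_edges = [0, 150, 300, 500, 800, 1200, fs / 2]   # arbitrary band edges

for lo, hi in zip(band_edges[:-1], band_edges[1:]):
    in_band = (freqs >= lo) & (freqs < hi)
    level = spectrum[in_band].sum()
    print(f"{lo:>6.0f}-{hi:<6.0f} Hz : {level:12.1f}")
```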
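
And here's an equally small sketch of the pinna comb-filtering point: mix a signal with one short, attenuated bounce of itself and the spectrum grows a regular pattern of peaks and notches. The delay, the reflection gain, and the FFT size are all assumed numbers; the takeaway is just that a different arrival angle means a different path length, which moves the notches and hands the brain a different pattern.

```python
# Toy pinna comb filter: direct sound plus one short, attenuated reflection.
# Delay, gain, sample rate, and FFT size are invented numbers.
import numpy as np

fs = 48000                  # sample rate in Hz (assumed)
delay_samples = 3           # ~62 microsecond reflection path (hypothetical)
reflection_gain = 0.6       # attenuation of the bounced copy (hypothetical)

# Impulse response of "direct sound + one pinna reflection".
h = np.zeros(delay_samples + 1)
h[0] = 1.0
h[delay_samples] = reflection_gain

# Frequency response of that mix: regularly spaced peaks and dips (a comb).
H = np.abs(np.fft.rfft(h, n=4096))
freqs = np.fft.rfftfreq(4096, d=1 / fs)

# With a 62.5 us delay the first notch lands near 8 kHz, inside the 5-15 kHz
# region mentioned above; a different arrival angle changes the path length
# and moves the notches.
for f, mag in zip(freqs[::128], H[::128]):
    print(f"{f / 1000:6.2f} kHz : {20 * np.log10(max(mag, 1e-12)):7.2f} dB")
```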