A Question About Time Alignment


I was reading a review of the Wilson Alex V on Stereophile recently. (Published just in time. I’m thinking about picking up a pair. Maybe a couple for the bedroom, too.) And it raised a long-standing question of mine, one that I hope the wiser minds on this site can answer. 
 

Wilson’s big selling point is aligning the different frequencies so they all reach your ear simultaneously. As I understand it, that’s why they have minute adjustments among the various drivers. The woofers put out bass notes that move slowly thanks to their long sound waves while the tweeters are playing faster moving, high frequency notes with short waves. Wilson lets you make adjustments so that they all arrive at the ear at once. 
 

It seems to me, however, that live music isn’t time aligned. Suppose I’m playing the piano and you’re sitting across the room. When I stretch out my left hand to hit the low notes, those notes travel along the same long, slow wavelengths as the notes from Wilson’s woofers. Similarly, the treble notes I play with my right hand move quickly through the short wavelengths. The notes from the piano are naturally out of alignment. If Wilson’s goal is to achieve a lifelike sound, aligning the frequencies doesn’t seem like the way to do it. 
 

Wilson has been selling lots of zillion dollar speakers for lots of years and people continue to gobble ‘em up. Something must be wrong with my line of reasoning. Would someone please point out where I’ve gone wrong? Nicely?

paul6001

You are correct, all frequencies being played travel at different speeds to your ears

BS

they travel at very different speed so the higher sounds need to be delayed and the lowest played first

BS

This is like high school science stuff.

If it did work like you say, then the lightning bolt would produce a crack with the woofer-range notes arriving later. But the roll of thunder is more likely from stuff we cannot see happening above the clouds.

Please don’t be rude and call science BS because you don’t believe in it. These are measurable results that I can produce, which you can hear, and they aren’t BS in any way. Your example with thunder and lightning is actually a great practical high school example. Thunder is caused by the lightning strike rapidly expanding the air around the bolt. Light travels at about 186,282 miles per second and sound travels at about 1,088 feet per second (depending on the air temp), so if you count the number of seconds from when you see the lightning bolt to when you hear the thunder and divide by 5, that is roughly how many miles the strike is from you. If there is no time between the strike and the sound, find cover! Science may save your life in a thunderstorm, and it can help you get the best reproduction of a recording you will ever hear.

I learned a lot from this site and feel it is important to respect all views on the forum. If you have measurable scientific data showing that I’m full of BS, I’d love to research it, but I’m afraid the entire study of physics and my ears disagree with you.
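For anyone who wants to check the flash-to-bang arithmetic, here is a quick sketch (the rounded 1,100 ft/s figure and the helper function are my own, not from the post):

```python
# Flash-to-bang distance estimate: sound covers roughly 1,100 ft per second
# (temperature dependent), so every ~5 seconds of delay is about a mile.
SPEED_OF_SOUND_FT_S = 1100.0
FEET_PER_MILE = 5280.0

def strike_distance_miles(flash_to_thunder_s: float) -> float:
    """Distance to the strike, ignoring light's travel time
    (at ~186,282 mi/s it is effectively instantaneous at these ranges)."""
    return flash_to_thunder_s * SPEED_OF_SOUND_FT_S / FEET_PER_MILE

for delay_s in (1, 5, 10, 15):
    print(f"{delay_s:>2} s delay -> roughly {strike_distance_miles(delay_s):.1f} miles away")
```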
 

Thanks,

Steve

@hifidream Steve, your post was great except for the part about velocity being a function of frequency.

I can see how one might think that velocity is a function of frequency, since links like this show v = freq × wavelength.

 

Let’s just round off the speed of sound to 1 foot per millisecond.
At 1,000 feet away, a 1 Hz tone takes a second to arrive.
How long does 20 Hz take to arrive?
And how long does 20 kHz take to arrive?
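A quick sketch of the arithmetic in that question, assuming the rounded 1 foot-per-millisecond figure (the frequencies are only labels here, because the travel time does not depend on them):

```python
# Travel time of sound over a fixed distance at the rounded-off speed
# of 1 foot per millisecond. Frequency never enters the calculation.
SPEED_FT_PER_MS = 1.0
DISTANCE_FT = 1000.0

for freq_hz in (1, 20, 20_000):
    travel_ms = DISTANCE_FT / SPEED_FT_PER_MS
    print(f"{freq_hz:>6} Hz tone arrives after {travel_ms:.0f} ms ({travel_ms / 1000:.1f} s)")
```

Every row prints the same 1,000 ms, which is the point of the question.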

I have no doubt your MiniDSP sounds good, and I suspect that the tweeter-to-woofer delays are on the order of inches, or fractions of a millisecond.
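To put a number on that, here is a minimal sketch converting a driver offset into a delay (the 3-inch offset is just an illustrative figure, not a Wilson or MiniDSP spec):

```python
# Convert a tweeter-to-woofer offset in inches into a time delay,
# assuming ~1,125 ft/s for the speed of sound (about 13.5 inches per ms).
SPEED_IN_PER_MS = 1125.0 * 12.0 / 1000.0  # ≈ 13.5 in/ms

offset_in = 3.0  # hypothetical offset between driver acoustic centers
delay_ms = offset_in / SPEED_IN_PER_MS
print(f"A {offset_in:.0f} inch offset is about {delay_ms:.2f} ms of delay")
# -> roughly 0.22 ms, i.e. a fraction of a millisecond
```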

 

If the speed were a function of frequency, then the lightning crack would sound like a descending chirp from 20 kHz down to 20 Hz.
But it doesn’t; we see the lightning and we hear the crack a bit later. When it is so close that it is almost simultaneous, we also tend to crap our britches.

I tried to read every post here, but too many have confused light waves with sound waves. Light travels as an electromagnetic wave composed of photons, and all electromagnetic waves travel at the same speed, the speed of light.

Sound waves are a vibration and do not contain any photons. The vibration alternately compresses and decompresses the air particles next to it, creating a wave composed of compressions and rarefactions. Sound therefore cannot travel through space, where there are no particles to carry the wave.

Everything affects sound when it is traveling through the air; there is no free ride, because it is not an electromagnetic wave.
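To make the scale difference concrete, here is a quick comparison of the two across a listening room (the 4-meter distance is only an illustrative number):

```python
# Travel time of light vs. sound across a 4 m listening room.
SPEED_OF_LIGHT_M_S = 299_792_458.0
SPEED_OF_SOUND_M_S = 343.0  # dry air at about 20 °C

distance_m = 4.0
light_ns = distance_m / SPEED_OF_LIGHT_M_S * 1e9
sound_ms = distance_m / SPEED_OF_SOUND_M_S * 1e3
print(f"Light crosses {distance_m} m in about {light_ns:.0f} ns")
print(f"Sound crosses {distance_m} m in about {sound_ms:.1f} ms")
```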

I copied and pasted the following for those of you in need of a quick refresher:

You can calculate the wavelengths of audible sound in air. Audible sounds in air have frequencies that range from roughly 20 Hz to 20 kHz. Not surprisingly, the wavelengths of audible sounds also vary widely. Assuming a speed of sound of 340 m/s,

For 20 Hz sound in air: λ = v/f = (340 m/s) / (20 Hz) = 17 m

 

For 20 kHz sound in air: λ = v/f = (340 m/s) / (20,000 Hz) = 0.017 m = 1.7 cm

 

This calculation shows that wavelengths of sounds in air are distinctly human sized. The wavelengths range from roughly the diameter of a dime (for the highest frequencies) to roughly the length of a city bus (for the lowest frequencies). For comparison, the wavelengths of visible light are all far smaller than the thickness of a single human hair and have a very narrow range (from roughly 400 to 700 nm).
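If you want to reproduce those numbers, here is a minimal sketch of the λ = v/f calculation, using the same 340 m/s assumption as above:

```python
# Wavelength of sound in air: λ = v / f, with v = 340 m/s as in the text.
SPEED_OF_SOUND_M_S = 340.0

for freq_hz in (20, 1_000, 20_000):
    wavelength_m = SPEED_OF_SOUND_M_S / freq_hz
    print(f"{freq_hz:>6} Hz -> {wavelength_m:.3f} m ({wavelength_m * 100:.1f} cm)")
```

The 20 Hz and 20 kHz rows come out to 17 m and 1.7 cm, matching the figures above.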

 

  1. The shorter-wavelength sound has the higher frequency. Both sounds travel at the same speed.
  2. When sound goes from cooler to warmer air, its speed increases (because sound travels faster in warmer air). The frequency doesn’t change (unless the source changes). Since speed increases and frequency is unchanged, the wavelength must increase: increasing the number for wave speed in the equation λ = v/f without changing the number for frequency leads to a bigger value for wavelength (see the sketch after this list).
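Here is a small sketch of the temperature point in item 2, using the common approximation v ≈ 331.3 + 0.606 · T (T in °C); the temperatures and the 1 kHz tone are arbitrary examples:

```python
# Speed of sound vs. air temperature, then the wavelength of a fixed
# 1 kHz tone from λ = v / f: warmer air -> faster sound -> longer wavelength.
FREQ_HZ = 1000.0

for temp_c in (0, 20, 35):
    speed_m_s = 331.3 + 0.606 * temp_c
    wavelength_m = speed_m_s / FREQ_HZ
    print(f"{temp_c:>2} °C: v ≈ {speed_m_s:.1f} m/s, λ(1 kHz) ≈ {wavelength_m * 100:.1f} cm")
```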

 

 

 

Time delay is something that your brain uses to locate sounds in space. We are extremely sensitive to it; we can perceive differences of a fraction of a millisecond. The reason we know that a sound comes from right of center is that our ears and brain are sensitive enough to detect the difference in timing between the sound hitting our right ear and it then traveling across our face to our left ear. If time alignment didn’t matter, then we couldn’t locate sound in space.

The reason we aren’t completely fooled into thinking we are listening to a live performance is that our left ears are picking up what the right speaker is playing and our right ears are picking up what the left speaker is playing. That is why highly directional speakers can often throw a spooky 3-D image if set up and corrected properly, but that sweet spot is extremely small (like move-your-head-fractions-of-an-inch small). That is why time alignment and room correction are both important, and why mitigating room nulls matters for getting the best image possible.

There is a ton of disinformation here, and I spent years sifting through the chaff to build a system based on factual, measurable results. If anyone is struggling to figure out why things just don’t quite sound right, the answer is probably in this realm, and it is often overlooked because time alignment and room correction are perceived as difficult and, frankly, not many people have heard a system set up correctly using these techniques. This forum is about sharing ideas. People put great weight on small physical changes in their systems; it would behoove them to respect the difference that time alignment and room correction can truly make as well. Perhaps a user will find this insightful, easy to understand, and seek to better their system because of it. That’s why I’m still posting from time to time, even if I get trolled by someone who doesn’t believe in science.
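To put a number on how sensitive that timing cue is, here is a minimal sketch using the textbook spherical-head (Woodworth) approximation; the 8.75 cm head radius and 343 m/s speed of sound are generic assumptions, not measurements from this thread:

```python
import math

# Interaural time difference (ITD) for a source at angle θ off center,
# using the Woodworth approximation ITD ≈ (r / c) * (θ + sin θ).
HEAD_RADIUS_M = 0.0875
SPEED_OF_SOUND_M_S = 343.0

for angle_deg in (0, 15, 45, 90):
    theta = math.radians(angle_deg)
    itd_us = (HEAD_RADIUS_M / SPEED_OF_SOUND_M_S) * (theta + math.sin(theta)) * 1e6
    print(f"source {angle_deg:>2}° off center: ITD ≈ {itd_us:.0f} microseconds")
```

Even a source all the way off to one side only produces about 0.65 ms of difference between the ears, which is why fractions of a millisecond matter for imaging.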
 

Thanks,

Steve