It’s a good question, Paul. The ones and zeros are intended to represent SOMETHING. It might be something specific, like the letter “A”, or it might be something much more nuanced, such as the position, speed and acceleration of a speaker cone at a precise moment in time. Now, the reason Word works consistently on any two different computers is that we (the collective “we”) have decided that it’s IMPORTANT, in a word processor, that letters not be confused with each other. So important that a character-encoding standard was developed, called ASCII, that has been universally accepted for representing text. ASCII tells us that an “A” will always be represented by the same sequence of ones and zeros. And if a computer couldn’t get that right, Microsoft would be out of business.
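To see just how rigid that standard is, here’s a tiny sketch in Python (the language choice is purely for illustration) showing that “A” always comes out as the same bit pattern, and that the trip from letter to bits and back is exact:

```python
# The letter "A" is always ASCII code 65 (hex 0x41), i.e. the bits 01000001.
letter = "A"
code = ord(letter)               # 65
bits = format(code, "08b")       # "01000001"
print(letter, code, hex(code), bits)

# The round trip from letter to bytes and back is exact, every time,
# on every computer; no bit is ever merely "close enough".
assert letter.encode("ascii") == b"\x41"
assert b"\x41".decode("ascii") == "A"
```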
But music is a very different beast, because you cannot listen to a note of music and tell whether it differs from another note as easily as you can tell that the letter “A” your word processor spits out is different from the letter “B”. Yes, I know, audio has its own standards too, like Red Book. If we want to reproduce the sound of an orchestra for one second, the Red Book standard tells us we need 44,100 samples per second for each of two channels, at 16 bits (2 bytes) per sample, which works out to 176,400 bytes of data for that one second. But if one of those bytes has a single flipped bit, I’d wager that even @millercarbon would not be able to tell the difference; consequently, it is not as IMPORTANT to a normal listener as it would be if he or she were typing a letter on a keyboard and a different letter showed up on the screen.

Most listeners are willing to put up with a few errors in the translation of their music from digital to analog, as long as they can’t hear the difference. But some peculiar people, called audiophiles, think it’s worth spending a lot of money to try to hear those differences, and will go to great lengths to assure themselves that their reproduction of those 176,400 bytes is as error-free as technologically possible. You can think of it like a word processor for someone with a really bad case of OCD. Throw in the additional complexity of how two different humans prefer to hear the same musical piece on their ridiculously expensive systems, and you have a real hot mess, called “Audiogon”. Hope that helps.
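P.S. For anyone who wants to check my arithmetic, here’s a quick back-of-the-envelope sketch in Python (the sample value is just an arbitrary example I made up) showing where those 176,400 bytes come from, and how small a single flipped bit really is:

```python
import math

# Red Book CD audio: 44,100 samples per second, 16 bits (2 bytes) per
# sample, 2 channels. One second of stereo therefore needs:
SAMPLE_RATE = 44_100
BYTES_PER_SAMPLE = 2
CHANNELS = 2
print(SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS)   # 176400 bytes

# Now flip the least significant bit of one 16-bit sample and see how
# big the error is. A 16-bit sample spans 32,768 steps from silence to
# full scale, so a one-step error sits about 90 dB below the peak.
sample = 12_345                  # an arbitrary example sample value
corrupted = sample ^ 0b1         # flip the lowest bit: 12345 -> 12344
error = abs(corrupted - sample)  # exactly 1 step out of 32768
print(20 * math.log10(error / 32768))   # about -90.3 dB below full scale
```

An error sitting roughly 90 dB below the peak is, for all practical purposes, inaudible under music, which is why I’d take that wager.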