Kiwi, I suppose if I want to clutter my post with unending qualifiers, I should have said "drastically reduces jitter" instead of "eliminates." Still, if you take the time to check the spectrum analyses in several Stereophile test reports, you'll see that asynchronous USB DACs have very low jitter, barely visible in the graphs.
Second, I didn't know that reasonably priced CD players were speed-reading and buffering the data first. Can you name some? Given that the audio press is making a big deal about the just-released $6,000 Parasound CDP featuring this read-and-buffer capability, the feature doesn't seem to have gone mainstream yet.
Third, your summary of what computers are for is narrow and dismissive. I landed in Silicon Valley in 1980 and worked in high-tech computing from then until the end of 2006. Word processing was a relative latecomer among applications. And when the industry switched to graphics-based interfaces and displays, desktop-publishing apps were among the most CPU-intensive applications available, right up there with finite element analysis and solids modeling.
Along with that, my MacBook Pro has a 2 GHz processor and 8 GB of RAM, far more processing power than a typical--or even expensive--CD player or DAC. It has an aluminum housing (well-shielded), and no fan. As I mentioned before, the Audirvana software (and several other packages) can be configured to take exclusive control of the audio output device, locking out other processes and minimizing interruptions. It's called "hog mode." Look it up.
Once a music data file is buffered in RAM, the clock is reset. It doesn't matter how much jitter was in the stream before; at that moment, bits is just bits. When the data is then streamed to the DAC, it goes out on a fresh clock and is not subject to additional jitter from reading a wobbling plastic disc.
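The buffer-then-reclock idea above can be illustrated with a toy sketch (this is not any real audio API, just an illustration of the principle): samples arrive with wobbly timing, the whole file sits in RAM, and the output timestamps come from a fresh, ideal clock, so the arrival jitter never reaches the DAC side.

```python
import random

SAMPLE_PERIOD = 1 / 44100  # seconds per sample at CD rate

def jittery_read(n_samples, jitter=5e-9):
    """Simulate reading a disc: each sample's arrival time wobbles."""
    return [(i, i * SAMPLE_PERIOD + random.uniform(-jitter, jitter))
            for i in range(n_samples)]

def buffer_and_reclock(stream):
    """Buffer all samples in RAM, discard arrival timing, and assign
    ideal output times from a fresh clock."""
    samples = [sample for sample, _arrival in stream]  # timing thrown away
    return [(s, i * SAMPLE_PERIOD) for i, s in enumerate(samples)]

out = buffer_and_reclock(jittery_read(5))
intervals = [out[i + 1][1] - out[i][1] for i in range(len(out) - 1)]
# Every output interval is one ideal sample period, regardless of input jitter.
print(all(abs(d - SAMPLE_PERIOD) < 1e-12 for d in intervals))
```

Of course a real player streams from the buffer continuously rather than all at once, and the DAC's own oscillator quality still matters, but the point stands: once the data is in memory, the source's timing errors are gone.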
Finally, it was Ed Meitner of Museatex who brought jitter to the industry's attention, publishing on it over 20 years ago. At that time, listening tests suggested that jitter became audible around 200 ps. This presented a challenge to the industry, as the most popular receiver chip of the day was only accurate to 20 ns--a hundred times worse.
And anyway, at some point arguing the numbers becomes moot. With standard CDPs, not only did I find myself not enjoying the music, I even noticed that the family got more irritable when the music was playing. With my current computer setup, I can actually enjoy digitally sourced music. I still play a lot of records and go to live concerts for reference, but computer-based digital playback (especially high-res) is closing the gap.