Audioengr,
Can you explain to me how re-clocking is done in an accurate manner?
I understand the concept of "garbage-in, garbage-out" in most information systems: once information is lost or corrupted, it cannot in general be restored to its initial correct state. It can perhaps be massaged to be better than it would otherwise be, but in most cases it will never be the same as it was before the corruption occurred.
So in the case of jitter, once the clocking of the data is hosed, how does re-clocking practically make it right again, or at least better? What algorithm is used? Is the correct clocking just implicit in the sample rate of the bitstream, assuming all the original bits are transmitted? If so, why bother clocking the data in these crazy digital audio systems in the first place?
Thanks for any light you can shed on this area, which I continue to struggle to understand clearly.
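In case it helps you see where my head is at, here is my rough mental model of re-clocking as a toy sketch, which may well be exactly the thing I need you to confirm or correct. It assumes a hypothetical FIFO-based scheme: the sample values arrive with jittery timing, the buffer keeps only the values, and a clean local oscillator re-emits them on an exact grid. The function name and parameters are just made up for illustration.

```python
import random

def reclock(samples, sample_rate=48_000, jitter_ns=2_000):
    """Toy model of asynchronous re-clocking (my guess, not any real device):
    samples arrive with jittery timestamps, a FIFO buffers them, and a clean
    local clock reads them back out on an exact grid."""
    period_ns = 1e9 / sample_rate
    # Input side: each sample's arrival time wobbles around the ideal grid.
    jittery_times = [i * period_ns + random.uniform(-jitter_ns, jitter_ns)
                     for i in range(len(samples))]
    # The FIFO stores only the sample VALUES; the corrupted timing is discarded.
    fifo = list(samples)
    # Output side: a clean oscillator re-emits the samples on an exact grid.
    clean_times = [i * period_ns for i in range(len(fifo))]
    return jittery_times, list(zip(clean_times, fifo))

random.seed(0)
samples = [0.0, 0.5, 1.0, 0.5, 0.0]
jittery, reclocked = reclock(samples)
# The data values survive untouched; only the timing is regenerated.
assert [value for _, value in reclocked] == samples
```

If that is roughly right, it would explain why re-clocking is not "garbage-out": the bit values were never the garbage, only their timing was, and timing can be regenerated from a better clock. But that is precisely what I am unsure about.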