I usually try to avoid ideological debates about whether one design approach is inherently better or worse than another, because the answer is usually (and perhaps almost always) that it comes down to the quality of the specific implementation, as well as system matching and listener preference.

11-24-13: Cerrot
There is no USB on the planet that has lower jitter than an S/PDIF output - it is impossible. Remember, USB transmits data in packets, not streams, which is why it is so poor. Music should be transmitted in data streams, not packets. The whole USB-to-S/PDIF converter thing is a sham. They cost anywhere from $200 to $2,000 and none of them do it as well as not doing it at all.
11-26-13: Audioengr
ASYNCHRONOUS [emphasis added] USB on the other hand generates a new master clock and ignores the clock from the computer, therefore the jitter on the USB cable is of no consequence. It is ignored. This is because the Async USB interface is the MASTER and asks the computer for data packets only when needed. These packets are put into a buffer, which is clocked out using the local free-running low-jitter Master Clock.
In this case, though, in addition to seconding Steve's comment (which I agree with completely), I want to add that S/PDIF and AES/EBU are "streaming" formats only in the loosest possible sense, and not in any sense that necessarily implies an advantage with respect to jitter.
A true data stream consists of an unbroken string of 1 and 0 data (emphasis on "data," as opposed to other information), usually represented by voltages, and accompanied by a separate clock and other timing signals. S/PDIF and AES/EBU signals are nothing like that. There are subframes, frames, blocks, preambles, status bits, bits used for error detection, etc., etc. And further, all of that is multiplexed (combined) with timing information (i.e., the clock) via something called differential Manchester or biphase mark encoding, which allows clock and data to be combined into a single signal.
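To make the "clock multiplexed with data" point concrete, here is a minimal sketch of biphase mark (differential Manchester) encoding. The function name and the two-half-cells-per-bit list representation are my own illustration, not anything from the S/PDIF specification:

```python
def biphase_mark_encode(bits, level=0):
    """Encode a bit sequence with biphase mark coding (BMC).

    Each data bit occupies two half-cells of the output signal.
    The signal level always toggles at the start of a bit cell
    (that guaranteed transition is what carries the clock), and
    toggles again mid-cell only for a '1' bit.
    """
    out = []
    for b in bits:
        level ^= 1            # transition at every cell boundary (clock)
        out.append(level)
        if b:
            level ^= 1        # extra mid-cell transition encodes a 1
        out.append(level)
    return out
```

Encoding `[1, 0, 1]` from a starting level of 0 yields `[1, 0, 1, 1, 0, 1]`: the level changes at every cell boundary regardless of the data, and a cell's two halves differ only when the bit is a 1. That is the sense in which clock and data travel as one signal.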
The receiving component has to sort all of that out, extracting the clock from that single signal, and processing both the data and the non-data information appropriately. And, particularly if the source component is a computer, it will have to do all of that in the presence of what will inevitably be a good deal of jitter-inducing digital noise.
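As a rough sketch of that sorting-out step: the receiver sees only transitions, and must classify the spacing between them to recover bits (and, implicitly, the clock). The 1.5-half-cell threshold below is an illustrative choice of mine; real receivers use a PLL, and jittery edge timing is precisely what makes this classification fragile:

```python
def decode_from_edges(edge_times, half_cell):
    """Recover bits from the transition times of a biphase mark signal.

    A transition occurs at every bit-cell boundary; a '1' bit adds an
    extra transition mid-cell.  So two consecutive short intervals
    (about half a cell each) decode to a 1, and one long interval
    (about a full cell) decodes to a 0.  Jitter smears these
    intervals, which is why the recovered clock is only as clean as
    the incoming signal.
    """
    intervals = [b - a for a, b in zip(edge_times, edge_times[1:])]
    bits = []
    i = 0
    while i < len(intervals):
        if intervals[i] < 1.5 * half_cell:  # short: half of a '1' cell
            bits.append(1)
            i += 2                          # consume both short halves
        else:                               # long: a full '0' cell
            bits.append(0)
            i += 1
    return bits
```

With a half-cell period of 1 time unit, edges at times `[0, 1, 2, 4, 5]` decode to `[1, 0, 1]`.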
While there are design approaches used in SOME DACs that are largely immune to jitter when receiving S/PDIF or AES/EBU signals, such as ASRC (Asynchronous Sample Rate Conversion), those approaches arguably have some significant downsides. Packetized protocols, on the other hand, inherently utilize a clock for the DAC chip itself (which is the place where jitter matters) that is different from the clock that is used to communicate the data between components. There are no downsides to that approach that I can conceive of, other than quality of implementation.
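That separation of clocks can be sketched as a FIFO with two independent sides (the class and method names are mine, purely illustrative, not any real driver API): the source fills it in bursts whenever it is asked for more, while the conversion side drains it one sample per tick of its own fixed local clock, so the timing of the bursts never reaches the conversion clock.

```python
from collections import deque

class AsyncBuffer:
    """Sketch of the packet-side vs. DAC-side clock split."""

    def __init__(self, low_water=64):
        self.fifo = deque()
        self.low_water = low_water

    def push_packet(self, samples):
        # Source side: bursts arrive whenever the source sends them;
        # their timing (and any jitter on it) stops here.
        self.fifo.extend(samples)

    def need_more(self):
        # The interface, acting as master, asks for data only when
        # the buffer runs low.
        return len(self.fifo) < self.low_water

    def tick(self):
        # Conversion side: called once per period of the local
        # master clock; burst timing is invisible from here.
        return self.fifo.popleft() if self.fifo else 0  # underrun -> silence
```

A short usage example: after pushing a three-sample packet into a buffer with a low-water mark of 4, one `tick()` returns the first sample and `need_more()` reports that another packet should be requested.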
Regards,
-- Al