After reading the writeup on the "SuperTubeClock," and examining the waveform photos at their direct URLs (where they can be seen more easily), I don’t consider this writeup to be BS or the SuperTubeClock to be snake oil. The writeup does leave some questions in my mind unanswered/unexplained (see below), and I certainly find a lot of other audio marketing literature to be BS from a technical standpoint, and a turnoff when it comes to considering the product for purchase. But not in this case.
> AFAIK the noise in a timing signal should be superfluous since it has two values 1 and 0, and anything in between (noise) is ignored. If the clock signal "noise" is leaking into the final analog output, then there are big problems with the DAC chip.
That is not correct. Noise on the clock signal applied to a DAC chip causes short-term random or pseudo-random fluctuations in **when** the chip senses transitions between 1 and 0 and/or 0 and 1, and those timing fluctuations are what is referred to as jitter. (Generally speaking, a transition is sensed in the vicinity of its mid-point, i.e., roughly half-way between the two voltage states of the clock signal.) That is widely recognized as a significant issue in digital audio, to a greater or lesser extent depending on the particular design, of course. It is not a matter of the noise "leaking through"; it is a matter of the effect of the noise on timing.
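To put a rough number on that mechanism: if an edge crosses the sensing threshold with a given slew rate, voltage noise shifts the crossing time by approximately the noise amplitude divided by the slew rate. This is my own back-of-the-envelope sketch, not anything from the writeup, and the 3.3 V swing and 1 mV noise figures below are purely assumed for illustration.

```python
# Rough estimate of noise-induced timing jitter at a clock receiver's
# threshold crossing. For an edge with slew rate S (V/s), voltage noise
# of RMS amplitude Vn (V) shifts the crossing time by about dt = Vn / S.

def jitter_from_noise(v_noise_rms: float, slew_rate: float) -> float:
    """RMS timing error (seconds) caused by RMS voltage noise on an edge."""
    return v_noise_rms / slew_rate

# Assumed numbers: a 3.3 V swing traversed in 1.67 ns gives a slew rate
# of about 2e9 V/s; 1 mV RMS of noise then yields roughly half a
# picosecond of timing error on each sensed transition.
slew = 3.3 / 1.67e-9                   # ~1.98e9 V/s
dt = jitter_from_noise(1e-3, slew)     # ~5.1e-13 s, i.e. about 0.5 ps
print(dt)
```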
In a thread a few months ago, one of our technically astute members, @Kijanki, explained it this way:
> Let me try to explain jitter. Imagine you play 1kHz sinewave recorded on your CD. Digital words of changing amplitude, representing sinewave, are converted in even intervals into analog values by D/A converter. You get analog 1kHz sinewave.
>
> Now imagine that these time intervals are not exactly even, but are getting shorter and longer 50 times a second. Now you won’t get only 1kHz sinewave but also other frequencies, mainly 950Hz and 1050Hz called "sidebands". Distance from the main (root) frequency depends on the frequency of the interval change (jitter), while their amplitude is proportional to amount of interval change. These new sidebands have very small amplitude, but are not harmonically related to root frequency (1kHz) and that makes them still audible.
>
> With many frequencies (music) there will be many sidebands - practically a noise added to music. Sidebands have small amplitude that is proportional to amplitude of the signal. This noise stops (is not detectable) when music stops playing. You can only hear it as lack of clarity in the music (since something was added).
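@Kijanki’s example can be simulated directly: sample a 1 kHz sine on a clock whose intervals wobble 50 times a second, then look at the spectrum. This sketch is mine, not from the thread, and the 50 ns peak timing error is an arbitrary assumed value chosen only to make the sidebands easy to see.

```python
# Simulate a 1 kHz sine converted on a clock with 50 Hz periodic jitter,
# then inspect the spectrum for the 950/1050 Hz sidebands.
import numpy as np

fs = 48_000                     # nominal sample rate (1 Hz bins over 1 s)
n = np.arange(fs)
t_ideal = n / fs
jitter_amp = 50e-9              # assumed: 50 ns peak timing error
t_actual = t_ideal + jitter_amp * np.sin(2 * np.pi * 50 * t_ideal)

x = np.sin(2 * np.pi * 1000 * t_actual)        # effective converted signal
spectrum = np.abs(np.fft.rfft(x)) / len(x)     # bin k = k Hz here

# The 1 kHz root survives, but sidebands appear 50 Hz away on each side,
# not harmonically related to 1 kHz -- the added "noise" described above.
for f in (950, 1000, 1050):
    print(f, spectrum[f])
```

Increasing `jitter_amp` raises the sideband level proportionally, matching the quoted point that sideband amplitude tracks the amount of interval change.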
Things the writeup leaves unexplained include the following:
1) **How** is the sine wave produced by the tube converted to a square wave? Presumably that is done by solid state devices, which may or may not introduce significant amounts of noise themselves, and hence some amount of jitter, regardless of how clean the signal provided by the tube may be. It would be nice if some indication of **overall** jitter performance were provided.
2) Waveforms are shown for 8.4672 MHz and 42.2 MHz. Which rate is actually used in the D/A conversion? While the writeup and the photo for 42.2 MHz indicate that the rise and fall times of the square wave are about 1.67 ns, no corresponding figure is presented for the 8.4672 MHz case. And based on the waveform photo, those times appear to be about 4 or 5 ns, considerably slower than at the higher frequency.
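Why the slower edges might matter: for a given amount of voltage noise at the receiving chip, noise-induced timing error scales roughly with edge time (dt ≈ Vn divided by slew rate), so edges that are ~2.7x slower admit ~2.7x the jitter from the same noise. The numbers below are my own assumptions (a 3.3 V logic swing, 1 mV RMS noise, and the ~4.5 ns read off the photo), not anything stated in the writeup.

```python
# Compare noise-induced jitter for the two edge speeds, assuming the
# same (hypothetical) logic swing and noise level in both cases.
swing = 3.3            # assumed logic swing, volts
v_noise = 1e-3         # assumed 1 mV RMS noise at the receiver

for label, edge_time in (("42.2 MHz clock", 1.67e-9),
                         ("8.4672 MHz clock", 4.5e-9)):
    slew = swing / edge_time        # V/s during the transition
    jitter = v_noise / slew         # seconds of timing error per edge
    print(f"{label}: {jitter * 1e12:.2f} ps RMS")
# prints ~0.51 ps for the faster edges, ~1.36 ps for the slower ones
```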
In any event, I wouldn’t consider this writeup to be something that would dissuade me from considering this DAC, if I had a need for a new one. I certainly can’t say as much about a lot of other marketing literature I have seen from other manufacturers.
Regards,
-- Al