@whipsaw - I am making some assumptions here on the exact implementation of Denafrips' designs, since they have never really published a detailed description.
As I understand it, the input data is clocked into the FIFO using the source device's clock (embedded in the SPDIF signal), and clocked out of the FIFO using the DAC's internal clock.
If the clocks differ in frequency, the FIFO will eventually overflow (if the source device's clock is faster) or underflow (if the source clock is slower); the size of the mismatch only determines how long that takes. So, yes, CISCO's explanation is basically correct.
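To make the overflow/underflow mechanism concrete, here's a back-of-the-envelope sketch in Python. This is not Denafrips' actual implementation (they haven't published one); the FIFO depth, clock offset, and starting fill level are all illustrative assumptions.

```python
# Hypothetical sketch: how a fixed-depth FIFO drifts toward overflow or
# underflow when the source clock and the DAC clock run at slightly
# different rates. All numbers are illustrative assumptions.

def seconds_until_fault(fifo_depth, source_rate, dac_rate, start_fill=None):
    """Time (in seconds) until the FIFO over- or underflows.

    source_rate / dac_rate are in samples per second; the source writes
    into the FIFO at source_rate and the DAC reads out at dac_rate.
    """
    if start_fill is None:
        start_fill = fifo_depth // 2          # assume we start half full
    drift = source_rate - dac_rate            # net samples gained per second
    if drift == 0:
        return float("inf")                   # perfectly matched clocks never fault
    if drift > 0:                             # source faster -> fill rises -> overflow
        headroom = fifo_depth - start_fill
    else:                                     # source slower -> fill falls -> underflow
        headroom = start_fill
    return headroom / abs(drift)

# A 100 ppm clock mismatch at 44.1 kHz is ~4.4 samples/sec of net drift:
t = seconds_until_fault(fifo_depth=8192, source_rate=44100 * 1.0001,
                        dac_rate=44100)
print(f"overflow after ~{t:.0f} s")           # roughly 15 minutes with these numbers
```

The point of the sketch is that even a tiny, entirely in-spec clock offset guarantees an eventual fault with a fixed-rate readout; a bigger FIFO only postpones it.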
I'm not an expert on modern DAC implementations, but I believe that many DACs (particularly lower-priced ones) use a phase-locked loop to adjust the DAC's clock frequency to match the source clock. They may also use a FIFO so the PLL can be slow-responding, to minimize jitter. But a PLL will still not be as stable and jitter-free as a fixed crystal oscillator (particularly a temperature-controlled oscillator).
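A toy version of that PLL-style approach, for intuition only: the DAC uses the FIFO fill level as its error signal and trims its output rate toward the source rate. The loop gain, rates, and FIFO target below are made-up numbers, and real PLLs work in hardware on phase, not in a Python loop, but it shows why the output clock ends up following the source clock (warts and all) rather than a fixed crystal.

```python
# Hypothetical sketch of a PLL-style rate servo (NOT any specific DAC's
# design): the DAC nudges its clock so the FIFO fill level stays near a
# target, which locks its rate to the source. Numbers are assumptions.

def run_pll(source_rate, nominal_rate, fifo_target, kp, steps, dt=0.01):
    """Simple proportional loop: dac_rate = nominal + kp * fill_error."""
    fill = fifo_target                        # start at the target fill level
    dac_rate = nominal_rate
    for _ in range(steps):
        fill += (source_rate - dac_rate) * dt # FIFO gains/loses samples
        # Too full -> speed the DAC up; too empty -> slow it down.
        dac_rate = nominal_rate + kp * (fill - fifo_target)
    return dac_rate, fill

# DAC's own crystal is 100 ppm slow relative to the source; after the
# loop settles, the DAC is running at the source's rate, not its own.
rate, fill = run_pll(source_rate=44100.0, nominal_rate=44100 * 0.9999,
                     fifo_target=4096, kp=0.5, steps=5000)
print(f"locked rate = {rate:.2f} Hz, fill = {fill:.1f}")
```

Note the trade-off this illustrates: the locked rate converges to the source rate, so any wander in the source clock is reproduced at the output, whereas a fixed oscillator plus a managed FIFO (the approach attributed to Denafrips above) keeps the output clock independent of the source.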
I suspect the reason the problem was less common (or non-existent) with the Terminator and T+ is that the clocks used in those DACs are higher quality (more accurate) than the clocks in the lower-priced models, and those DACs also tend to be paired with higher-quality sources (which have more accurate clocks). I have not heard that the FIFO implementation is any different, although it's possible that larger FIFOs were used.
I also suspect that many higher-end DACs take a similar approach to Denafrips and simply do a better job with the FIFO management (perhaps using the same approach that Denafrips is now using).