I don't quite understand the technical rationale, other than maintaining a certain impedance
At high frequencies (much higher than audio frequencies), electrical signals travelling through cables are subject to what are known as transmission line effects. One of those effects is that if the impedances of the cable, the connector, and the load (destination) device are not precisely matched (and they never are), some fraction of the incoming energy (usually a small fraction) will be reflected back toward the source instead of being absorbed by the load.
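For those who like numbers, the reflected fraction at a mismatched boundary is given by the standard transmission-line reflection coefficient. A quick sketch (the 75-ohm cable and 80-ohm input values below are just made-up illustrative figures, not measurements of any actual gear):

```python
# Fraction of the incident voltage reflected at an impedance mismatch.
# Gamma = (Z_load - Z_line) / (Z_load + Z_line) -- the standard
# voltage reflection coefficient for a transmission line.

def reflection_coefficient(z_load: float, z_line: float) -> float:
    """Voltage reflection coefficient at the line/load boundary."""
    return (z_load - z_line) / (z_load + z_line)

# Hypothetical example: a nominally 75-ohm cable feeding an input
# whose actual impedance is 80 ohms.
gamma = reflection_coefficient(80.0, 75.0)
print(f"reflected fraction: {gamma:.3f}")  # roughly 3% of the voltage
```

Even a few ohms of mismatch, as you can see, reflects a few percent of the signal voltage.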
When that reflection arrives at the source, it will again encounter an imperfect impedance match, and so some small fraction of it will be re-reflected back to the original destination.
The length of the cable determines the time required for that two-way round trip. When the re-reflection arrives at the load, it (or most of it, the part that is not re-reflected yet again) will sum with the original waveform, resulting in a small but significant distortion of the original waveform.
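Here's a toy illustration of that summation: an idealized rising edge plus a small, delayed, twice-reflected copy of itself. The rise time, delay, and reflection fraction below are arbitrary assumed numbers, chosen only to show the mechanism:

```python
# Toy model: original edge plus a delayed re-reflection summing at
# the load. All numbers here are illustrative assumptions.

def edge(t: float, rise_time: float = 25e-9) -> float:
    """Idealized linear rising edge from 0 to 1 over rise_time seconds."""
    if t <= 0:
        return 0.0
    if t >= rise_time:
        return 1.0
    return t / rise_time

def received(t: float, round_trip: float = 12e-9,
             refl_fraction: float = 0.01) -> float:
    """Original edge plus a small, twice-reflected, delayed copy."""
    return edge(t) + refl_fraction * edge(t - round_trip)

# Near the middle of the edge, the delayed copy perturbs the waveform
# slightly, which is where the receiver is making its timing decision.
print(received(12.5e-9))
```

The perturbation is tiny in amplitude, but because it lands on the sloped part of the edge, it shifts the instant at which the receiver's threshold is crossed, i.e., jitter.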
With a digital signal that is used for clocking as well as to convey data, what is important is that whichever edge of the signal the destination device uses for clocking is as "clean" and undistorted as possible (by "edge" I mean a transition from either low to high or high to low, i.e., 0 to 1 or 1 to 0; some applications actually use both edges). Otherwise jitter results, meaning small fluctuations in the timing of the clock period. Typically the destination device responds to the middle area of a transition edge, so the cable length should be such that the re-reflection does not arrive at that time. That time, in turn, will depend on the risetime (or falltime) of the edge, i.e., the time it requires to transition from low to high or high to low. Quoting from myself in the thread I linked to above:
If the input impedance of the dac and the impedance of the cable don't match precisely, a portion of the incident signal would be reflected back to the transport. A portion of that reflection would then re-reflect from the transport to the dac. The two-way reflection path, assuming propagation time of roughly 2 nanoseconds per foot, would be 12ns for the 1m cable, and 18ns for the 1.5m cable.
I don't know what the typical risetimes/edge rates are for transport outputs, but it does seem very conceivable that the extra 6ns could move the arrival time of the re-reflection sufficiently away from the middle area of the edge of the original incident waveform so that it would not be responded to by the digital receiver at the dac input.
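The arithmetic behind the quoted 12ns and 18ns figures, for anyone who wants to plug in their own cable lengths (note the quote uses the rough 1m = 3ft approximation along with the ~2ns/ft propagation figure):

```python
# Round-trip reflection delay for the cable lengths discussed above,
# using the ~2 ns/ft propagation time from the quote and the rough
# 1 m ~= 3 ft approximation that yields the quoted 12 ns and 18 ns.

NS_PER_FOOT = 2.0       # approximate one-way propagation time
FEET_PER_METER = 3.0    # rough conversion used in the quote

def round_trip_ns(length_m: float) -> float:
    """Two-way (load -> source -> load) reflection path delay in ns."""
    return 2 * length_m * FEET_PER_METER * NS_PER_FOOT

print(round_trip_ns(1.0))   # 12.0 ns for the 1 m cable
print(round_trip_ns(1.5))   # 18.0 ns for the 1.5 m cable
```

So the 1.5m cable delays the re-reflection by an extra 6ns relative to the 1m cable, which is the margin the paragraph above is referring to.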
Hope that clarifies more than it confuses!
Regards,
-- Al