Digital XLR vs. Analog XLR - Balanced Cables


What is the difference between a digital XLR/balanced cable and an analog XLR/balanced cable?

What if I used an analog XLR/Balanced cable to carry a digital signal from the digital output of one device to the digital input of another device?

Any risks, damage, etc.?
ckoffend
The temporary cables that I have tried, to confirm that I am getting the full 24/192 upsampling, sound very good. As I previously mentioned, to upconvert to this level, it is required that I use two balanced digital cables to handle the full bandwidth. DCS indicates that it is impossible to transmit this amount of bandwidth on a coax digital cable, a glass optical digital cable, or a single balanced AES/EBU cable. I don't necessarily question this, but before spending several hundred dollars per cable, I wanted to be sure that the two appropriate cables would actually deliver the upsampling level I wanted to hear/test. Since I have plenty of balanced analog cables, I was just seeking a reasonable and fast opportunity to test.
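For context on why dual cables come up at all: a rough back-of-the-envelope sketch of AES3 link rates, assuming the standard AES3 framing of 2 subframes × 32 bits = 64 bits per sample period. (This is illustrative arithmetic, not anything dCS or Benchmark publish about their specific implementations.)

```python
# Sketch: raw AES3 data-rate arithmetic for single-wire vs. dual-wire operation.
# Assumes standard AES3 framing: 2 channels x 32-bit subframes = 64 bits/frame.

BITS_PER_FRAME = 64

def aes3_bit_rate(sample_rate_hz: int) -> int:
    """Raw AES3 data rate in bits per second for a stereo stream."""
    return sample_rate_hz * BITS_PER_FRAME

# Single-wire: one cable carries the full 192 kHz stream.
single_wire = aes3_bit_rate(192_000)

# Dual-wire ("dual AES"): the 192 kHz stream is split across two
# cables, each running at a 96 kHz frame rate.
dual_wire_per_cable = aes3_bit_rate(96_000)

print(f"single-wire 192 kHz: {single_wire / 1e6:.3f} Mbit/s")       # 12.288 Mbit/s
print(f"dual-wire, per cable: {dual_wire_per_cable / 1e6:.3f} Mbit/s")  # 6.144 Mbit/s
```

The point of dual-wire mode is that each cable only has to carry half the data rate, which keeps each link within the bandwidth the older single-wire hardware was designed for; Benchmark's one-cable approach simply runs a single link at the full rate.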
I don't know why DCS wants to transfer the signal on two cables (Benchmark uses one for 24-bit/192kHz), but jitter-rejection properties might make the cable discussion irrelevant.
Ckoffend-

You can try them; they should work. You have nothing to lose.

OTOH, I would be more certain that digital cables would work for analog.

Kal
The only thing about a cable that will make any difference to jitter is whether it has sufficient bandwidth to avoid adding jitter from the inherent characteristics of the data signal itself. Even then, there should be correction circuitry at the converter stage to correct for any transmission-induced jitter. Most likely, both the analog and digital versions of the XLR cables exceed the bandwidth necessary to avoid transmission-induced jitter.
Here is the quote from Stereophile article "A Transport of Delight: CD Transport Jitter"

"While we're on the subject of the digital interface, I should point out that the engineering for transmitting wide-bandwidth signals was worked out nearly 50 years ago in the video world. In video transmission, the source has a carefully controlled output impedance, the cable and connectors have a precisely specified characteristic impedance and are well-shielded, and the load impedance is specified within narrow tolerances. If these practices aren't followed, reflections are created in the transmission line that play havoc with video signals. This issue is so crucial that a whole field called Time Delay Reflectometry (TDR) exists to analyze reflections in transmission lines.

The audio community should adopt the standard engineering practices of video engineering for digital interfaces. This means designing transports with a carefully controlled 75 ohm output impedance, precisely specified characteristic impedance of the cable (75 ohms with a narrow tolerance), and junking RCA connectors in favor of true 75 ohm BNC connectors. By applying standard video engineering techniques—in use for decades—the high-end product designer can greatly improve the performance of the transport/processor interface. We've seen what happens with a poorly implemented interface with the SV-3700 and different cables: higher jitter in the recovered clock and degraded sound quality. The engineering needed to optimize the digital interface is readily available. Let's use it."

As far as I know, the bandwidth of a cable determines its losses in dB/ft, while reflection-induced jitter is strictly a property of mismatched characteristic impedance (SQRT(L/C)). Antenna/video 75-ohm cables might have different losses (RG59, RG6, RG11, etc.) but won't create reflections as long as they are exactly 75 ohms. Please correct me if I'm wrong.
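The point above can be put in numbers with the standard lossless transmission-line formulas: Z0 = sqrt(L/C) for characteristic impedance, and the reflection coefficient Γ = (ZL − Z0)/(ZL + Z0), which is zero only when the load matches the line. A minimal sketch (the per-metre L and C values below are assumed, typical-order figures for illustration, not measurements of any particular cable):

```python
import math

def characteristic_impedance(L_per_m: float, C_per_m: float) -> float:
    """Lossless-line approximation: Z0 = sqrt(L/C)."""
    return math.sqrt(L_per_m / C_per_m)

def reflection_coefficient(Z_load: float, Z0: float) -> float:
    """Gamma = (ZL - Z0) / (ZL + Z0); zero when load matches the line."""
    return (Z_load - Z0) / (Z_load + Z0)

# Assumed, typical-order values for a 75-ohm video coax: ~370 nH/m, ~67 pF/m.
Z0 = characteristic_impedance(370e-9, 67e-12)
print(f"Z0 ~ {Z0:.1f} ohms")

# A matched 75-ohm load reflects nothing...
print(reflection_coefficient(75.0, 75.0))     # 0.0

# ...while feeding a 110-ohm AES/EBU input from a 75-ohm line does reflect.
print(f"{reflection_coefficient(110.0, 75.0):.3f}")
```

This matches the poster's claim: losses (attenuation per foot) and reflections are separate effects, and reflections vanish whenever source, cable, and load all sit at the same impedance, regardless of which RG type the cable is.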