1. Can a 1 be mistaken for a 0 at the USB receiver if the waveform is degraded enough?
It can get complicated. For CD, there is error correction. If a byte comes in with an error, and the error is not catastrophic, it can be corrected. If the error is too severe to correct, the next layer of defense is interpolating the missing data. Because the music data stored on the CD is interleaved, interpolation is possible: if there are 20 consecutive bytes of error, they do not comprise a continuous stretch of music; because of the interleaving, those 20 bytes come from different segments of the music. Per the CD spec, you could drill a small hole in the surface of the disc and it would still play. Most of the differences you hear between CD players are probably not from errors but from power supply design, noise, the output stage, and of course JITTER.
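To make the interleaving point concrete, here is a small Python sketch. It is a toy block interleaver, not the actual CIRC scheme a CD player uses, and the sample values, the depth, and the conceal() interpolation are made up for illustration: a 20-byte burst in the stored stream turns into scattered gaps after de-interleaving, each of which can be filled in from nearby good samples.

```python
def interleave(samples, depth):
    """Reorder so that samples which are neighbours in time end up
    far apart in the stored stream (rows are read out as columns)."""
    rows = len(samples) // depth
    return [samples[r * depth + c] for c in range(depth) for r in range(rows)]

def deinterleave(stream, depth):
    """Inverse of interleave(): restore the original time order."""
    rows = len(stream) // depth
    return [stream[c * rows + r] for r in range(rows) for c in range(depth)]

def conceal(samples):
    """Crude interpolation: replace each lost sample (None) with the
    average of its nearest surviving neighbours."""
    out = list(samples)
    for i, s in enumerate(out):
        if s is None:
            left = next((out[j] for j in range(i - 1, -1, -1)
                         if out[j] is not None), 0)
            right = next((out[j] for j in range(i + 1, len(out))
                          if out[j] is not None), left)
            out[i] = (left + right) // 2
    return out

# A ramp of "audio" samples, interleaved across 16 segments before "storage".
audio = list(range(256))
stored = interleave(audio, depth=16)

# Simulate a burst: 20 consecutive bytes on the disc are unreadable.
for i in range(10, 30):
    stored[i] = None

# After de-interleaving, the damage is scattered across the stream, so
# most gaps sit between good samples and can be interpolated plausibly.
recovered = conceal(deinterleave(stored, depth=16))
print(recovered)
```

Without the interleave step, the same 20-byte burst would wipe out 20 consecutive samples of music and there would be nothing nearby to interpolate from; that is the whole reason the data is spread out before it goes onto the disc.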
I am not that familiar with the USB protocol, but I CANNOT imagine that it would lack error correction and interpolation, or that the data would not be interleaved. If USB has all these characteristics, I don't see how what I said above would not apply to USB as well. Again, I don't think most of the audible differences you hear come from bit errors, because they are just a small percentage of all the variables. Now, if you hear scratchy sounds or spikes like a damaged CD, well, that's different.