In simple terms? I am asking for trouble (again) here, but here goes...
I believe the key thing better cables do better is minimise time-domain errors - specifically phase errors and time-smearing. I am a cable manufacturer and our designs focus on these factors. The issue is not readily understood by audiophiles because we tend to perceive time-domain errors as if they were tonal errors: a cable that measures dead flat can be perceived as bright, for example, because it smears the upper mid-range or lower treble.
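To make the "measures flat but smears time" idea concrete, here is a minimal sketch in Python using SciPy. It is not a cable measurement - a first-order all-pass filter (coefficient chosen arbitrarily) is used as a stand-in for any element whose magnitude response is flat while its delay varies with frequency:

```python
import numpy as np
from scipy import signal

# First-order all-pass filter: flat magnitude, frequency-dependent delay.
# The coefficient c is arbitrary; it only sets how the delay is distributed.
c = 0.5
b = [c, 1.0]   # numerator
a = [1.0, c]   # denominator

# Magnitude response: identical at every frequency ("measures dead flat").
w, h = signal.freqz(b, a, worN=512)
print(f"magnitude spread: {np.abs(h).max() - np.abs(h).min():.1e}")  # ~0

# Group delay: different frequencies arrive at different times (time-smearing).
_, gd = signal.group_delay((b, a), w=w)
print(f"group delay varies from {gd.min():.2f} to {gd.max():.2f} samples")
```

A frequency-response plot of this filter would show a perfectly flat line, yet the group-delay curve shows the highs arriving noticeably later than the lows - exactly the kind of error a magnitude measurement alone cannot reveal.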
Even though the scope for time-domain errors in cables is very small, I believe the ear/brain system is incredibly sensitive to the time-domain information in what it hears. This is because the ear/brain system actively processes what it hears in order to make sense of it, but it has problems when it encounters time-domain errors that do not occur naturally. This is why measurements will never identify which of two good products will sound better - even if we developed a measure of total time-domain error, it wouldn't distinguish between errors the brain can easily process and those that are harder for it.
Almost every aspect of cable design affects time-domain errors: conductor material, purity and crystal structure; dielectric material; vibration control; geometry; and the connectors - how they are attached to the wire, their surface, their mass and how tightly they fit.
And as with any other component, the performance of a cable is not defined simply by the conductor material, the dielectric, or some bias current applied to the wires - not just by the technology employed but, just as importantly, by its implementation. Which means that, just as with other components, you have to hear them; you can't decide based on the technologies used.