For what it's worth, I am told by my high-frequency specialist friends and colleagues that it was well known to the scientists working in high-frequency laboratories that new cables "settled in" after a while. I am unaware of any "cable cookers" used in the "serious" radio industry. It seems, though, that the cables had to be used in the very application in which they were settled in. In other words, you don't get a high-frequency cable to work well if you let it settle in as a power cable for a while. It has to be the same high-frequency application.
I don't know if this stuff was ever published, but it was known and talked about according to people who used to work in the field. I don't know if it was measurable, though. I'd have to ask about this -- highly INTERESTING!
My present theory is that it is but simple degaussing that is going on -- nothing else. You take magnetic domains and keep making them smaller and smaller. That's what degaussing is all about. It is done by taking a signal and making a "fade-out" out of it. It is similar to the procedure on the old CRT monitors, when you pressed the DEGAUSS button.
Now, when you look at a music signal (or any sound signal, for that matter), it is all a bunch of fade-outs. That's what echo and reverberation and all the tails of all the percussive sounds are.
I have taken this theory and practiced with it over the years.
The result was that 100 short fade-outs all the way down to zero of approximately 10 seconds each sounds worse than one very large fade-out going from full power to zero over 100 x 10 seconds -- about 1000 seconds in total.
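To make the comparison concrete, here is a minimal sketch in Python (numpy) of the two signals described above. The sine carrier, its frequency, and the sample rate are all my own illustrative assumptions -- nothing here is a specification of the actual procedure, only of the linear-fade idea:

```python
import numpy as np

def fade_out_signal(duration_s, freq_hz=60.0, sample_rate=1000):
    """A sine carrier whose amplitude decays linearly from 1.0 down to 0.0.

    This mimics a degauss-style fade-out: same waveform throughout,
    envelope shrinking steadily toward zero.
    """
    t = np.arange(int(duration_s * sample_rate)) / sample_rate
    envelope = 1.0 - t / duration_s          # linear fade: 1.0 -> 0.0
    return envelope * np.sin(2 * np.pi * freq_hz * t)

# One long 1000-second fade (the "100 x 10 seconds" case) ...
long_fade = fade_out_signal(1000)

# ... versus 100 short 10-second fades, each restarting at full amplitude.
short_fades = np.concatenate([fade_out_signal(10) for _ in range(100)])
```

Both signals carry the same total duration; the difference is purely in the envelope -- one uninterrupted descent versus a sawtooth of repeated full-power restarts, which is the distinction the listening comparison above was probing.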
I have not yet been able to discern any further improvement in quality once the fade-out reached 7 days. In other words, I could not hear a difference between a cable processed with a 7-day non-stop single fade-out and an otherwise identical new cable given a 10-day non-stop fade-out.
Who has had any other tangible results and methods?
Louis Motek