What were your own blind cable test results?


Did you ever conduct a blind speaker cable test yourself? Please share your experiences, results, and the level of gear involved. For example: cable types (DIY, lo/mid/high end) and components (Lo-Fi, Mid-Fi, High-End). Feel free to elaborate on your gear if you like.

Please note that this is not a debate about whether DIY or cheaper cables sound as good as high-end cables, nor about snake oil, etc.

I'll start. A buddy of mine and I recently tested five cables on our Mid-Fi system: one Home Depot, one DIY, and three mid-end cables from various cable companies. After two hours of listening and swapping cables, our result: it was very difficult to tell them apart. The longer you listen, the more the music blurs together, perhaps because of listening fatigue. However, we were able to pick out one branded cable consistently, as it had a 'flattening' effect on the music in our system; funny that this cable features the most high-tech design. As for the other four, the differences were very difficult to discern. The exercise did help us weed out the one we disliked most and enjoy the music with the others.
springowl
I tried a blind test once and tripped over the cat, hit my head on my amp, broke my big toe, spilled my drink on the turntable, and landed on my sleeping dog... I don't try to blind test anymore :)
A while ago, I had a system of a Sony 707ES, Classe CP35, Pass Labs Aleph 3, Spica Angelus, and Radio Shack 14 Ga wire. I brought some MIT Terminator 3 (I think) home and thought it sounded much better. Not willing to believe it (why should speaker cables sound different?), I got my sister to listen, and she instantly picked out the MIT as far superior. I had a couple of other brands on hand, which fell in between the MIT and the Rad Shack.
You hit upon two of the main practical difficulties with this type of test: you need a partner, since blind testing can't be done by oneself, and comparing multiple unknowns blind (whether several different cables at once, or an A/B/X test using only two) can be more fatiguing and confusing than it is sonically revealing. Personally, I don't think the main pitfall of sighted testing is that it makes listeners hear differences that aren't really there (as long as you take your time and perform multiple trials), so much as that it can *sometimes* lead to unduly influenced conclusions about which perceived differences actually equate with 'better'.