What an interesting thread. I've often wondered if there would be value in SB/DB testing and why it isn't done more often. It seems to me that the reason for such tests is twofold: firstly, to identify on an objective basis specific differences between components, e.g. transparency, ambiance, soundstage, resolution, focus, brightness, transient attack, clarity, etc.; and secondly, to determine which component gives greater musicality over a period of time, e.g. which gives the most pleasure, which is nearest to the original recording, which is nearest to live sound. The problem is that everyone hears sound differently. That's why some like the sound of the Festival Hall, and can't bear the sound of the same orchestra playing the same piece under the same conductor in the Albert Hall (I live in the UK). I suggest the success or failure of SB/DB testing would depend on its ability to allow the listener to consistently pick out the component which gave them the greatest pleasure, or produced the specific type of ambiance, transparency, etc. that they wanted.
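For what it's worth, "consistently" can be pinned down with a bit of arithmetic: in ABX testing (one common double-blind protocol), you count how often the listener identifies the component correctly and compare that against what coin-flipping would produce. A minimal sketch in Python, with the 12-of-16 figures purely illustrative rather than from any real trial:

    from math import comb

    def p_value(correct: int, trials: int) -> float:
        # Chance probability of scoring at least `correct` out of `trials`
        # in a forced-choice ABX run, assuming the listener is guessing (p = 0.5).
        return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

    print(round(p_value(12, 16), 3))  # ~0.038: unlikely to be pure guessing

On those assumptions, a listener who genuinely hears a difference should be able to repeat a score like that; one who can't is probably hearing a preference rather than a difference.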
Most people can tell the difference between listening to components placed on a bog-standard shelf in the living room and on specifically designed audio stands such as the Sistrum or Townshend: whether they prefer one over the others depends on how they hear sound. It's the same with cable burn-in: I defy anyone with normal hearing not to agree there is a difference between a fully burnt-in cable and a virgin cable. But that difference may be totally unimportant to them.
So maybe SB/DB testing at an individual level would have value, if one had the time and the money to spend on it. Otherwise, perhaps the knowledge - gained by quick A/B comparison, reviews and, most importantly, people's opinions on forums such as this one - that component A has slightly more of what we want in terms of ambiance, transparency, etc. is about as far as we can reasonably expect to go, and very probably good enough for all but the most "golden-eared" of us.