In theory, I like the idea of double blind testing, but it has some limitations as others have already discussed. Why not play with some other forms of evaluating equipment?
My first inclination would be to create a set of categories, such as dynamics, rhythm and pace, range, detail, etc. You could have a group of people listen and rate the equipment on each attribute on a scale of, say, 1 to 5. You could improve the data by having the participants not talk to one another before completing their ratings, by hiding the equipment from them during the audition, and by giving them a reference audition with pre-determined ratings from which each rater could pivot up or down across the attributes.
Yet another improvement would be to take each rating category and pre-define its attributes. For example, ratings for "detail" as a category could be pre-defined as:
1. I can't even differentiate the instruments; everything sounds like a single tone.
2. I can make out different instruments, but they don't sound natural and I cannot hear their subtle sounds or noises.
3. Instruments are well differentiated and I can hear individual details such as fingers on the fretboards and the sound of the bow on a violin string.
Well, you get the picture. The idea is to pre-define a rating scale based on characteristics of the sound. Notice that terms such as "lush" or "analytical" are absent, because they don't themselves define the attribute; they are subjective conclusions. Conceivably, a blend of categories and their attributes could communicate an analysis of the sound of a piece of equipment while setting aside our conflicting, and very subjective, definitions of what sounds "best."
Further, such a grid of attributes, when completed by a large number of people, could be statistically evaluated for consistency. Again, it wouldn't tell you whether the equipment is good or bad, but if a large number of people gave "detail" a rating of 2, and there was a low deviation around that rating, you would get a good idea of what that equipment sounds like and could decide for yourself whether those attributes are desirable to you. Such a system would also, assuming there were enough participants over time, flush out the characteristics of a piece of equipment irrespective of the other equipment it was paired with, by relying on a large volume of anecdotal evidence. In theory, the characteristics of a piece of equipment should remain consistent across setups, or at least across similar price points.
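To make the "consistency" idea concrete, here is a rough sketch of the statistics involved: for each category, compute the panel's mean rating and the spread around it. The category names and scores below are invented purely for illustration.

```python
import statistics

# Hypothetical 1-5 ratings from a panel of listeners for one component.
# These numbers are made up to illustrate the calculation.
ratings = {
    "detail":   [2, 2, 3, 2, 2, 3, 2],
    "dynamics": [4, 3, 5, 4, 2, 5, 3],
}

for category, scores in ratings.items():
    mean = statistics.mean(scores)
    stdev = statistics.stdev(scores)
    # A low standard deviation means the panel agrees, so the mean
    # is a reliable description of how that attribute sounds.
    # A high one means the raters disagree and the mean says little.
    print(f"{category}: mean {mean:.2f}, stdev {stdev:.2f}")
```

With these made-up numbers, "detail" comes out with a low deviation (the panel agrees it rates about a 2), while "dynamics" has a high deviation, flagging it as a category where the anecdotal evidence hasn't converged yet.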
Lastly, by moving toward a system of pre-defined judgments, one could create a common language for rating attributes. Have you noticed that reviewers tend to use the same vocabulary whether evaluating a $500 piece of gear or a $20,000 piece of gear? As a result, the review becomes judgmental and loses its ability to really place the gear in the spectrum of its possible attributes.
It's not a double-blind study, but large doses of anecdotal evidence, when statistically evaluated, can yield good trend data.
Just an idea for discussion. If you made it this far, thanks for reading my rant :).
Jeff