Blind Shoot-out in San Diego -- 5 CD Players


On Saturday, February 24, a few members of the San Diego, Los Angeles and Palm Springs audio communities conducted a blind shoot-out at the home of one of the members of the San Diego Music and Audio Guild. The five CD players selected for evaluation were: 1) a Resolution Audio Opus 21 (modified by Great Northern Sound), 2) the dCS standalone player, 3) a Meridian 808 Signature, 4) an EMM Labs Signature configuration (CDSD/DCC2 combo), and 5) an APL NWO 2.5T (the 2.5T is a 2.5 featuring a redesigned tube output stage and other improvements).

The ground rules for the shoot-out specified that two randomly drawn players would be compared head-to-head, and the winner would then be compared against the next randomly drawn player, until only one unit survived (the so-called King-of-the-Hill method). One of our most knowledgeable members set up each competing pair behind a curtain, adjusted for volume, etc., and did not participate in the voting. Alex Peychev was the only manufacturer present; he agreed to express no opinion until the completion of the formal process, and he also did not participate in the voting. The five of us who did vote did so by an immediate and simultaneous show of hands after each selection in each pairing. Two pieces of well-recorded classical music on Red Book CDs were chosen because they offered a range of instrumental and vocal sonic characteristics. And since each participant voted on each piece separately, there was a total of 10 votes up for grabs in each head-to-head audition. Finally, although we all took informal notes, no detailed analysis was recorded -- just the raw vote tally.
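For readers unfamiliar with the format, the King-of-the-Hill draw described above can be sketched in a few lines of Python. This is purely illustrative -- the real comparisons were decided by ear and a show of hands, not software -- and the placeholder decider function is my own invention:

```python
import random

# Sketch of the King-of-the-Hill method: two players are drawn at
# random, and the winner of each head-to-head stays on to face the
# next random draw until only one unit survives.
def king_of_the_hill(players, decide_winner):
    pool = list(players)
    random.shuffle(pool)          # the random draw order
    champion = pool.pop()
    while pool:
        challenger = pool.pop()
        champion = decide_winner(champion, challenger)
    return champion

# Hypothetical decider that just records each pairing; in the real
# event the decision came from the 10-vote listening comparison.
pairings = []
def keep_champion(a, b):
    pairings.append((a, b))
    return a

contenders = ["Opus 21", "dCS", "Meridian 808", "EMM Labs", "APL 2.5T"]
winner = king_of_the_hill(contenders, keep_champion)
# Five players always produce exactly four head-to-head pairings.
```

Note that with five entrants the format always yields exactly four pairings, which is why the results below list four rounds.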

And now for the results:

In pairing number 1, the dCS won handily over the modified Opus 21, 9 votes to 1.

In pairing number 2, the dCS again came out on top, this time against the Meridian 808, 9 votes to 1.

In pairing number 3, the Meitner Signature was preferred over the dCS, by a closer but consistent margin (we repeated some of the head-to-head tests at the request of the participants). The vote was 6 to 4.

Finally, in pairing number 4, the APL 2.5T bested the Meitner, 7 votes to 3.

In the interest of configuration consistency, all of these auditions involved the use of a power regenerator supplying power to each of the players, and all went through a preamp.

This concluded the blind portion of the shoot-out. All expressed the view that the comparisons had been fairly conducted, and that even though one of the comparisons was close, the rankings overall represented a true consensus of the group's feelings.

Thereafter, without blind listening, we tried certain variations at the request of various participants. These involved the Meitner and APL units exclusively, and may be summarized as follows:

First, when the APL 2.5T was removed from the power regenerator and plugged into the wall, its performance improved significantly. (Alex attributed this to the fact that the 2.5T features a linear power supply.) When the Meitner unit (which utilizes a switching power supply) was plugged into the wall, its sonics deteriorated, and so it was restored to the power regenerator.

Second, when we auditioned a limited number of SACDs, the performance on both units was even better, but the improvement on the APL was unanimously felt to be dramatic.
The group concluded we had just experienced "an SACD blowout".

The above concludes the agreed-to results on the blind shoot-out. What follows is an overview of my own personal assessment of the qualitative differences I observed in the top three performers.

First of all, the dCS and the Meitner are both clearly state-of-the-art players. That the dCS scored as well as it did in its standalone implementation is, in my opinion, very significant. And for those of us who have auditioned prior implementations of the Meitner in previous shoot-outs, this unit is truly at the top of its game, and although it was close, it had the edge on the dCS. Both the dCS and the Meitner showed all the traits one would expect of a Class A player -- excellent tonality, imaging, soundstaging, bass extension, transparency, resolution, delineation, etc.

But from my point of view, the APL 2.5T had all of the above, plus two dimensions that I feel make it truly unique. First, the lifelike quality of the tonality across the spectrum was spot-on on all forms of instruments and voice. And second, and more difficult to describe, I had the uncanny feeling that I was in the presence of real music -- lots of "air", spatial cues, etc. that simply add up to a sense of realism that I have never experienced before. When I closed my eyes, I truly felt that I was in the room with live music. What can I say?

Obviously, I invite the other participants to express their views on-line.

Pete

petewatt
Metralla - I can only account for the blind comparisons in SD, since I could not attend the LA session. Our approach is as you indicated. Throughout most of our listening evaluations, Alex stayed out of the dedicated listening room, which was set up only for the 5 voters. He often waited in the adjacent kitchen area, and he was to provide no comment or any vocal expressions until the blind phase of the comparisons was complete. He occasionally peeked in to have a listen, but was still out of the line of sight of the voters.

We confidently proceeded primarily due to the following logistical details:
1) a blind comparison format in which the voters did not know which player was being used,
2) assignment of letter IDs to each player, which were not revealed until the completion of the comparisons, and
3) an immediate voting process via a simple show of hands (without discussion) after each track per pairing.

Alex, like any other manufacturer willing to put his product up against the very best and allow others to try to objectively evaluate it, is always welcome. As an FYI, we recently hosted Raul Iruegas. He presented his Essential 3150, a superb full-function preamp, to our audio club and stayed longer to allow club members to evaluate it in their systems. Like Alex and Nick Doshi, Raul wears many hats as designer, manufacturer, distributor and dealer. Because of the latter two roles he is solely responsible for showing his products. We thank Raul for providing the chance for us to try to objectively evaluate it against our own preamps and phono stages, and we are equally grateful to Alex for giving us the listening opportunity with his player.

I appreciate your understanding of this and taking the time to comment on these comparisons.
Ctm_cra, you guys apparently spent a lot of time eliminating as many of the variables involved in blind A/B testing as possible, but I remain curious about a couple of things I have always felt might affect the outcome.

The first issue is stereo imaging. One of the hallmarks of a great 2-channel stereo system is its ability to convey with absolute accuracy the information in the source recording. Nearfield listening, within the parameters of the system setup requirements and room possibilities, is the most revealing in this respect. (Other setups for far-field, more reverberant sound, like omnidirectional or bipolar speakers, may sound 'wonderful' but are not necessarily accurate or reproducible in other environments.)

My first question - how can five folks hear the same sound at the same time? Only one can sit in the sweet spot, and we all know that listening off the sweet spot may be good, but I doubt that anyone would consider it accurate. Or do you feel that the stereo imaging capabilities of the digital device, or the setup, are not relevant?

The next question has to do with short-term perceptions that are based on high-frequency information. That is, can you tell whether the higher frequencies sound more detailed due to 1) a slight mid-range recession, 2) a slight elevation of the high frequencies, 3) a shortening of the decay time of the signal (which imparts a fast sound and a clarity due to the chopping off of the trailing edge of the signal), or 4) the excellence of the sound simply being the absence of any distortions whatsoever?

IMHO a slight increase, or clarity, in high-frequency information can have a very audible effect on stereo imaging, but the reason for the apparent increase is very important. If it's for any reason other than increased clarity, it's likely to induce some fatigue factor in long-term listening sessions.

The question - how can you resolve these issues in short A/B listening with any assurance that the sound you find attractive under such conditions will survive long-term listening under controlled conditions?

Am I missing something here? Are the assumptions leading to my questions off base?
When I do sighted A/B tests at home - which I very rarely do, because I find longer (usually measured in hours or days) experience with a component much more reliable - I always do A/B/A/B; or AA/BB/AA/BB; or A/B/B/A, etc. I do this because the second time I hear a piece of music I notice (or "hear") more than I did the first time. This can result in a bias for the second component. I realize that this mixing of the order was probably not possible to do, given the logistics involved in this very well-thought-out test. I'm just adding my thoughts here to the discussion; i.e., not arguing a point of any kind.
Jfz - Nice approach you have. We actually did a mixture of this.

During round 1 we learned a number of things. Important among these was the fact that the voters (although they could not see the players) could tell which one was being used, because they could see which one received the CD we wanted to hear. So we decided to mix things up, as detailed in my previous response to Tbg. Although we kept evaluating the choral track before the orchestral recording, we mixed up which player started each pairing, and THIS WAS DONE FOR EACH TRACK. Thus, for any pairing being evaluated, we did not necessarily begin with the same player for the NEXT CD used. After round 1, the voters did not know which player was playing at any given time.

I wanted to address your comment about hearing more on subsequent listens to a recording. I agree, and some would also say that their focus changes during the second or third tries, compared to the first time they hear a recording. There was really no way to address this equally for all voters. Three of the five voters know the Rutter piece very well, as we have used it in previous evaluations. Only two of the voters know the Bernstein recording. We felt it was important for each voter to have many opportunities to hear each track so they could confidently cast their votes. Here is a little more detail on our listening process for EACH TRACK...

1) With the correct input selector and volume levels set, we loaded both players, one with the test CD, the other with a dummy CD. We then listened to a predetermined point on player A. For the Rutter piece this was at the 2:08 mark and at 2:44 for the Bernstein piece.
2) Rewind and listen again but for a shorter time. Up to 1:13 for Rutter and 1:38 for Bernstein.
3) We asked if any voter needed additional listening and, if so, we would repeat 2) above.
4) Unload both players, switch CDs (the location of the one being tested is not revealed/visible to voters)
5) Reload both CDs and select the appropriate line input of the preamp and make the necessary volume adjustments as predetermined by the level matching done during the set up for both players.
6) Repeat 1, 2 and 3 above for player B.
7) We asked if any voters wanted to go back to player A, and we would repeat steps 4, 5, 1, 2, and 3 for everyone. This option was done only for one pairing – the DCS vs. the Meitner.
8) Immediately vote with show of hands (no discussions)
9) Repeat the process for the next recording.
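To make the arithmetic of the voting concrete, here is a small illustrative Python sketch of the show-of-hands tally: five voters, two tracks per pairing, so 10 votes in play. The vote lists below are made up for illustration; they are not the actual ballots from the event:

```python
# Tally the show-of-hands votes across all tracks of one pairing.
# Each inner list holds one 'A' or 'B' vote per voter for one track.
def tally(track_votes):
    all_votes = [v for track in track_votes for v in track]
    a, b = all_votes.count("A"), all_votes.count("B")
    # Return (winner, votes for, votes against); ties go to A here
    # only as a placeholder -- the event produced no tied pairings.
    return ("A", a, b) if a >= b else ("B", b, a)

# Hypothetical ballots producing a 6-4 split, the closest margin
# reported above (the dCS vs. Meitner pairing):
result = tally([["A", "A", "B", "A", "B"],
                ["B", "A", "B", "A", "A"]])
```

With five voters and two tracks, the closest possible non-tied outcome is 6-4, which is exactly the margin the group saw in its tightest pairing.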

One more thing to note: for subsequent pairings, and even in between each pairing, we also switched the preamp inputs to which the players were connected. Even though we were assured by its manufacturer that Line 1 is identical to Line 2 in every way (materials/parts as well as specs), we wanted to vary this too, just in case ;-)
Newbee – You ask some important questions. All five voters sat in the same seats throughout all comparisons, so the perspective of each listener never changed.

Certainly none of the side positions is as revealing as the sweet spot, which can accommodate two people -- one in front of the other. However, we spent a considerable amount of time in advance of the event positioning the other seats so that acceptably focused images and a convincing soundstage are perceived from the other three positions -- two on either side of the sweet spot in the back row and one to the side in the front row. None of the voters raised concerns about a lack of image focus or about not being able to hear dimensional details. Later, when discussions were allowed, the three voters sitting in the side positions were surprised they were able to discern each player's ability to present a focused image and a believable soundstage. These side perspectives may not be correct, but it was the best we could do given our time constraints.

I do not know if the other voters prefer nearfield listening. However, I know they can easily recognize good, focused imaging and excellent soundstaging when they hear it. Stereo imaging and soundstaging capabilities, although important, are only two of the many criteria each voter had to keep in mind as they listened. In fact, we did not discuss or define these sonic parameters in advance. We simply asked each voter to listen, compare, and honestly and confidently cast a vote for the player they liked in each pairing.

The speakers used are neither dipoles nor horns. The drivers are not horn-loaded, do not use ribbon tweeters, and have excellent off-axis response. It is appropriate at this point to provide the room dimensions (hopefully this info partially addresses other members' curiosities):

width - 12 feet
length - 15 feet (see * more info)
front row - ~9.5 feet diagonal from front of each speaker
back row - ~11.5 feet diagonal from the front of each speaker

The wall behind the seats has a central window, which is covered with 2 layers of fairly thick curtains. The floor is wool carpeted with foam insulation underneath and this is supplemented by another 6x8 ft area rug on top. The cement foundation is underneath the carpet. Spikes are used at the rear of each speaker to couple them to the foundation. A single Finite Elemente Cerapuc is used in front of each speaker as a vibration control treatment.

*There is no wall behind the speakers, and this contributes greatly to this system's superb imaging/soundstaging. The lack of a wall behind the speakers also partially contributes to a flat frequency response measurement from the sitting position. The room node interactions are negligible at +1.5 dB at 80 Hz and flat at nearby frequencies. My very first post includes details of the very good in-room, from-the-listening-position SPL measurements. These data also clearly show that there is no "slight mid-range recession" or "slight elevation of the high frequencies".

I do not know if there is "shortening of the decay time of the signal (imparts a fast sound and a clarity due to the chopping off of the trailing edge of the signal)". Please pardon my obvious lack of technical knowledge, but I can only guess this is more perceived than measured, yes? This system has never been described as fast, slow or muddy. Besides its superb imaging, soundstaging and layering/delineation capabilities, it is also dynamic and articulate, while also having a tonal balance that results in a believable representation of real instruments and voices. These, along with the system's overall musicality and resolving ability, are the reasons we keep using it for our comparisons.

I cannot confirm that "the excellence of the sound is simply the absence of any distortions". We never measured this system in that regard, so we have no meaningful information to share. Suffice it to say that there is distortion (what system doesn't have some?), but none of its symptoms has ever noticeably/audibly surfaced. We've used other systems in the past, so this is not the only one with which we have experience. However, during the last four years we've done comparisons using this system, no one has ever commented on anything that would lead us to investigate whether distortions are an issue.

As to the type of listening fatigue I think you described, not one of the voters mentioned anything that had to do with system edginess or harshness. Another member already raised a concern about careful A/B comparisons for 6 hours. We took plenty of breaks in the kitchen and family room areas while the set-up, level matching, and blinding were being done for each pairing. Fatigue of a different kind eventually set in. We would have kept going were it not for one voter having to leave, another needing to join his family, and the others wanting to go out and get steak ;-)