The review we have been promising is up


audiotroy

I don’t necessarily disagree with sns and, as noted above, the magazine uses several platforms to let readers know what our favorite products are at all price points, both collectively (Editor’s Choice) and individually (Golden Ears).

See? This is the kind of BS logic TAS uses to justify not simply making appropriate comparisons in a review. We’re supposed to weed through Editor’s Choice and Golden Ears lists and then somehow glean how the reviewed product would compare despite the respective reviews being done in completely different rooms and in completely different systems? Gimme a break!!!

And references to specific competing components do make it into plenty of our reviews.

Uh, really? Do you even read your own magazine? I’d put it at no more than 10% (and that’s being generous) of TAS reviews that provide any kind of useful product comparisons.


But, to state the obvious, the ultimate "reference" for us is "the absolute sound"—live musical performance—and reviews that employ the descriptive language developed by TAS decades ago and that present details of a writer’s subjective experience can also help quite a bit in making a purchasing decision.

But what about recordings made in a studio and made to sound like studio recordings? Are they supposed to sound like live performances too? Are systems supposed to alter studio recordings to sound live the way YOU think the live performance should sound? Bogus! The fact is THERE IS NO ABSOLUTE SOUND except what the recording engineer laid down and how well a system recreates it in a listening room.

And that you’ve constructed some ancient mythical language that somehow is supposed to help a reader weed through a reviewer’s words to magically understand how a product sounds based solely on the reviewer’s individual “subjective” impressions is absurd and precisely why direct product comparisons are so helpful. Many is the time when writing a review I thought I had a product’s sound nailed only to have at least some of my impressions shown to be partially or completely wrong upon substituting a competitive product. Had I written reviews based solely on my own “subjective” impressions, almost all my reviews would’ve been incorrect or at least somewhat misleading to readers. That’s precisely why publications like Soundstage! REQUIRE a comparisons section in every review, and each reviewer needs to have a comparable component in their system or they don’t review the product. Product comparisons improve the accuracy and usefulness of reviews to readers and hold reviewers (and the magazine) accountable for their observations, but we certainly can’t have any of that in the TAS world now, can we? Plus, it’d involve so much more work and effort on the part of the reviewer, meaning you couldn’t crank out as many reviews - oh the horror!

But @aquint, by all means feel free to keep twisting yourself in knots trying to defend and justify TAS’ outdated and relatively ineffective review policies. As someone mentioned above, in a world where quality audio dealers are few and far between, people rely on product reviews now more than ever and thus need ACCURATE AND ROBUST reviews to help them make purchase decisions, and flowery rhetoric waxing poetic about what a reviewer “thinks” they hear without any stated checks and balances is basically useless and self-important drivel.

 

Well Troy, you are a vanishing breed, a functioning physical store. Unfortunately I live in the Midwest, so shopping in person isn’t going to happen. Btw, I am at home now with Covid…third bout…and I am vaxxed up the wazoo, so please be careful.

And I defend your right to make claims about products you sell, and I wish that you thrive and that other B&M stores do the same. Regarding your specific product claims, I have no opinion.

+1 @soix

fishies swimming in the kool-aid pitcher are by definition drinking it... just the way it is...

This is the kind of dialogue that needs to happen and I always try to attend to constructive criticism from editors and readers alike. It’s just so much more productive when the correspondent adopts the tone of a Mahler123 as opposed to the enraged contempt of a soix. It’s always a bad sign when someone starts writing in all caps about what’s supposed to be an enjoyable avocation. Both posters are making the same points but one seems to be getting unnecessarily bellicose about it. Like it or not, all of us—knowledgeable consumers, dealers, manufacturers, journalists, recording professionals, even artists—are part of the same ecosystem and—I’ll use the provocative "c-word"—a little civility goes a long way.

I’m not certain how having one or two comparison products on hand makes everything right, when there are dozens of potential competing products. Chances are that the comparison product I have on hand isn’t going to be the one you’re interested in. Even if one goes to a lot of audio shows, nobody’s heard everything—and certainly not at length in a familiar system. Is it simply the act of comparing the product under review to something that’s a virtue? Seems kind of arbitrary to me.

That’s why the lexicon that HP and others developed can be so helpful. Employed thoughtfully, it can serve as a point of reference that individual product reviews can draw on. It’s a lot like reading music criticism: You can learn the tastes, biases, and priorities of a given writer, even one you don’t often agree with, and use their reviews to predict what your own response will be—so long as their listening habits and predilections are constant and consistent.

But the more technically advanced and expensive the products under consideration get, the harder it is to declare winners and losers, and the more critical it becomes that a potential purchaser get either a good long listen at a dealer or, best of all, the option Dave Lalin provides to return gear after an extended home audition, no questions asked.

That said, going forward, I’m promising myself to name more names in the course of my equipment reviews. Thanks to all who brought it up.

Andrew, when you or your fellow music reviewers review, say, Wagner’s Ring, there are usually comparisons to multiple other versions. As readers we have come to take that as a matter of course. So why the attitude about product reviews, that comparisons are not your bailiwick? I wouldn’t expect Product X to be compared to every product out there, any more than I would expect a new Beethoven symphony recording to be compared to the other 200 available versions. However, most recordings are compared to a few others. Why the different standard for gear?