Talk but not walk?


Hi Guys

This isn't meant to start a fight, but it is important to onlookers. As a qualifier, I have my own audio forum where we report on audio issues as we empirically test them. It helps us shortcut theories and develop methods of listening. We have a wide range of systems, and their owners are all over the world adding their experiences to the mix. Some are engineers, some are artists, and others are audiophiles both new and old. One question I am almost always asked while visiting other forums, from some of my members and also from members of the forum I am visiting, is this: why do so many HEA hobbyists talk theory with no, or very limited, empirical testing or experience?

I have been around empirical testing labs since I was a kid, and one thing that is certain is that you can always tell when someone is talking without walking. Right now on this forum there are easily 20 threads going on where folks are talking theory, and there is absolutely no doubt to any of us who have actually done the testing needed that the person talking has never done the empirical testing themselves. I've seen this happen with HEA reviewers and designers and a ton of hobbyists. My question is this: why?

You would think that this hobby would be about listening and experience, so why are there so many myths created, and why, in this hobby in particular, do people claim they know something without ever experimenting or being part of a team doing empirical science? It's not that hard to set up a real empirical testing ground, so why don't we see this happen?

I'm not asking for people's credentials, and I'm not asking to be trolled. I'm simply asking: why talk and not walk? In many ways HEA is on pause while the rest of audio innovation is moving forward. I'm also not asking you guys to defend HEA; we've all heard it, been there, done that. What I'm asking is a very simple question in a hobby that is supposed to be based on "doing": why fake it?

thanks, be polite

Michael Green

www.michaelgreenaudio.net


jf47t,


I am not attempting to be sarcastic. I, and I am not the only one, see room for improvement.

https://www.unlv.edu/english/academic-programs/mfa-creative-writing

Also, BIOL 613

https://catalog.unlv.edu/content.php?catoid=20&navoid=3709


EDIT: This one may be a better start. The course description (Course 1101, English 1) seems like a perfect match.

http://www.caslv.org/sandy-ridge-hs-course-summaries/

EDIT 2: It is The Match. Electives include...

Fine Art

Creative Writing

Jazz Band

Music Production

Debate and Speech 1 & 2

Graphics/Website Design


and last, but not least...

Introduction to Photoshop





I clicked on the link to the case study posted earlier today and got a message to the effect of "server not found".
Unusually high solar flare activity. Server is being tuned from outer space right now.
glupson
At the same time, all of the above points may be true for any reviewer. Michael Green, audio magazines, me, anyone. Nobody should have the right to say she/he is better than the other one. My approach in such an even situation is that I will trust my ears more than someone’s who has significant investment in the problem. I may be wrong, but so may others.

>>>>Huh? Of course they’re true for everyone, including reviewers. I’m not singling you out. Geez. Are you pretending to be thick again? The long list of reasons why tests can possibly go wrong means you can not (rpt not) believe negative results from anyone, including reviewers. That’s kind of the whole point. And as I said before, positive results are more believable because they were positive despite the obstacles. Follow?
Positive results can be as deceptive as negative ones. That is why test evaluations almost always include a false-positive discussion, or better said, a calculation. Here we are talking about small differences, not clear black-and-white or on/off matters. A reviewer who hears the results may also be biased, or his methods may be significantly flawed. That does not even take into account hallucinations, which may be more common in certain people. All of those may be obstacles. That is why we are discussing it on a more-or-less anonymous and free-access Internet forum. Neither of our experiments and claims would have passed the first step (acceptance to be considered for review and possible publishing) at any publication worth anything. That is where writers have their theses used for mopping the floor. It has been a very nice and polite approach here.

Both of our claims, Michael Green's and mine, are as valid as they get. The problem is that his are somehow supposed to be considered more valid, with no real difference in our methods, while mine are supposed to be "trolling". I asked a couple of times about Michael Green's pictures, which someone had said are twenty-five years old. In those pictures he has fairly long hair. If it ever goes over the ears, the impact on the sound is probably much bigger than that of the solar activity on that day. Just that difference between our hairstyles may make my findings more reliable.

Speaking of solar activity, I am not sure how to interpret this, but someone may find out whether it is the cause of that server failure...

https://www.spaceweatherlive.com/en/solar-activity/solar-flares