Have we lost civility and respect on Audio forums?


I think we have. I have seen many discussions on audio forums and how nasty they can become when people disagree. It seems like there are a lot more know-it-alls now. I've been in this hobby for 20 years and I can still learn, but I also know quite a bit, like the fact that cables can enhance the sound and that well-designed higher-end gear can truly be ear-candy special. Is this just on audio forums, or the internet, period?

calvinj

By the way, I already partly answered that with a scientific article about aural memory that I posted for you somewhere above ...

Aural memory is distributed across many levels and layers in many different parts of the brain ... Guess why?

Aural memory is a complex phenomenon, not a simple retrieval of a measured set of bits stored on a disc ...

In humans, the gesturing body/brain produces sounds as much as it perceives and memorizes them as meaningful. We assimilate aural qualities as felt, created events, interpret aural meanings with the whole body, and associate sounds with gestures and with other perceptions and conditions in the environment ...

That is why there is more to the detection, interpretation, memorization, and retrieval of perceived sound qualities than mere measures in hertz and decibels detected by the ears/brain and verified in a double-blind test ...

Anyway ... as you said, you cannot make heads or tails of my posts and you tuned out ... 😊

I am too stupid or too bright for you, or both at the same time ... I don't know ...

Now, to understand why aural memory is a complex problem, not as simple as your blind-test experiments, read this article and come back and explain to me what it means ... Then we will discuss science, not blind testing of audiophiles ...

https://phys.org/news/2013-02-human-fourier-uncertainty-principle.html

 

«(Phys.org)—For the first time, physicists have found that humans can discriminate a sound’s frequency (related to a note’s pitch) and timing (whether a note comes before or after another note) more than 10 times better than the limit imposed by the Fourier uncertainty principle. ...

[...]

New sound models

The results have implications for how we understand the way that the brain processes sound, a question that has interested scientists for a long time. In the early 1970s, scientists found hints that human hearing could violate the uncertainty principle, but the scientific understanding and technical capabilities were not advanced enough to enable a thorough investigation. As a result, most of today’s sound analysis models are based on old theories that may now be revisited in order to capture the precision of human hearing.

 

"In seminars, I like demonstrating how much information is conveyed in sound by playing the sound from the scene in Casablanca where Ilsa pleads, "Play it once, Sam," Sam feigns ignorance, Ilsa insists," Magnasco said. "You can recognize the text being spoken, but you can also recognize the volume of the utterance, the emotional stance of both speakers, the identity of the speakers including the speaker’s accent (Ingrid’s faint Swedish, though her character is Norwegian, which I am told Norwegians can distinguish; Sam’s AAVE [African American Vernacular English]), the distance to the speaker (Ilsa whispers but she’s closer, Sam loudly feigns ignorance but he’s in the back), the position of the speaker (in your house you know when someone’s calling you from another room, in which room they are!), the orientation of the speaker (looking at you or away from you), an impression of the room (large, small, carpeted).

"The issue is that many fields, both basic and commercial, in sound analysis try to reconstruct only one of these, and for that they may use crude models of early hearing that transmit enough information for their purposes. But the problem is that when your analysis is a pipeline, whatever information is lost on a given stage can never be recovered later. So if you try to do very fancy analysis of, let’s say, vocal inflections of a lyric soprano, you just cannot do it with cruder models."

By ruling out many of the simpler models of auditory processing, the new results may help guide researchers to identify the true mechanism that underlies human auditory hyperacuity. Understanding this mechanism could have wide-ranging applications in areas such as speech recognition; sound analysis and processing; and radar, sonar, and radio astronomy.

"You could use fancier methods in radar or sonar to try to analyze details beyond uncertainty, since you control the pinging waveform; in fact, bats do," Magnasco said.

Building on the current results, the researchers are now investigating how human hearing is more finely tuned toward natural sounds, and also studying the temporal factor in hearing.

"Such increases in performance cannot occur in general without some assumptions," Magnasco said. "For instance, if you’re testing accuracy vs. resolution, you need to assume all signals are well separated. We have indications that the hearing system is highly attuned to the sounds you actually hear in nature, as opposed to abstract time-series; this comes under the rubric of ’ecological theories of perception’ in which you try to understand the space of natural objects being analyzed in an ecologically relevant setting, and has been hugely successful in vision. Many sounds in nature are produced by an abrupt transfer of energy followed by slow, damped decay, and hence have broken time-reversal symmetry. We just tested that subjects do much better in discriminating timing and frequency in the forward version than in the time-reversed version (manuscript submitted). Therefore the nervous system uses specific information on the physics of sound production to extract information from the sensory stream.

"We are also studying with these same methods the notion of simultaneity of sounds. If we’re listening to a flute-piano piece, we will have a distinct perception if the flute ’arrives late’ into a phrase and lags the piano, even though flute and piano produce extended sounds, much longer than the accuracy with which we perceive their alignment. In general, for many sounds we have a clear idea of one single ’time’ associated to the sound, many times, in our minds, having to do with what action we would take to generate the sound ourselves (strike, blow, etc)." »

 

We also know, based on a large body of research, that humans cannot accurately compare an aural memory to real-time sound. Not even close. And we know through other studies that this leads humans, all humans, to misidentify differences in sound where none exist.

What is really dumb is to think that researchers continually miss real audible phenomena that are only detected by audiophiles under uncontrolled conditions.

It almost sounds like you’re living with absolute certainty that we know everything there is to know.

It is worse than that. He insists that he knows all there is to know:

I know, as much as it can be known that basic cables are audibly transparent ... I know as much as something can be known that power cords make no actual difference ...

It’s not possible to have a real conversation with someone who believes they know it all.

EDIT - And then look at what he just claims literally in the post below:

I never claimed to be any kind of expert either.

He’s a troll, folks.

“Dude, I came from a different engineering discipline (nothing to do with audio) where folks could die if we make a coupla innocent li’l mistakes. Hence, extreme levels of rigor was required and we couldn’t afford to do any kind of fake parade like you.”

 

Wow, you are so cool. I am in awe…

”In fact, i was hinting on some phenomena we’ve studied in another discipline for other applications (nothing do do with audio), which clearly should have some implication for audio. Some apparently celebrated speaker designers i’ve spoken to had never heard of it (got real glazy eyed when i brought it up).”

What you misidentified as a Doppler effect is wave interference. It’s about as basic as it gets. If there really are speaker designers who are unaware of it, stay the hell away from their products; it’s pretty basic stuff in speaker design. It doesn’t just happen with sound in the ultrasonic range, nor does it only happen with tones that are close in frequency. It’s an issue with multi-driver integration, where the overlapping waveforms can do the same thing. It’s also something we live with, or at least most audiophiles live with, because of the interaction between two speakers: it’s called comb filtering. Again, this is audio 101. Jeez, it used to be how people tuned guitars.

“I am guessing internet warrior Scott had never heard of it either. But, here he is, pretending to be the instantaneous expert.”

And you would be guessing wrong. I never claimed to be any kind of expert either. But if knowing about wave interactions is where you set the bar…
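To make the wave-interference point in that reply concrete, here is a minimal Python sketch of the two effects mentioned: beats between two nearly identical tones (the old guitar-tuning trick) and comb filtering from a delayed copy of a signal. The specific frequencies and the 1 ms delay are arbitrary illustrative choices, not anyone’s measurements.

```python
import numpy as np

fs = 44100                       # sample rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)  # two seconds of samples

# Two tones a few hertz apart, e.g. two slightly mistuned guitar strings.
f1, f2 = 440.0, 443.0
mix = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Their sum is one tone whose loudness pulses at the difference frequency:
# sin(a) + sin(b) = 2 * sin((a+b)/2) * cos((a-b)/2), so the envelope is
# |2 * cos(pi * (f1 - f2) * t)|, i.e. beats at |f1 - f2| = 3 Hz.
envelope = np.abs(2 * np.cos(np.pi * (f1 - f2) * t))
assert np.all(np.abs(mix) <= envelope + 1e-9)
print(f"Audible beat rate: {abs(f1 - f2):.1f} Hz")

# Comb filtering is the same interference seen across frequency: a signal plus
# a delayed copy of itself has response nulls spaced 1/delay apart, so a 1 ms
# path-length difference between two speakers puts a notch every 1000 Hz.
delay_s = 0.001
print(f"Comb-filter notch spacing for a {delay_s * 1e3:.0f} ms delay: {1.0 / delay_s:.0f} Hz")
```

Nothing exotic is assumed here: listening for the beat rate to go to zero is exactly how players traditionally tuned two strings to the same pitch.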

I understood from your own posts that you claim to be an expert in cable transparency and an advocate of blind tests as the main solution to all subjectivist audiophiles’ superstitions ... No?

 😊

 

 


Ah yes, indeed ... Scott’s got ChatGPT and some AI support at his fingertips these days ... He will do a Ctrl+C, Ctrl+V, paste some crap, and there he will be on a forum, looking like an instantaneous genius to the poor old Audiogon crowd.


So, how do we catch Scott’s fake parade, boys?

Hmmm, aha ... I suppose we could ask him to develop a concurrently run CFD model feeding its output into an FEM tool for modeling some events in an acoustic chamber, for instance. We would like to study the damage susceptibility of some electronic components to vibration input derived from an acoustic envelope typical of launch environments (in such a test chamber) ... Now, some masochistic aerospace PhD will cry all day, sweat blood for a year, and come up with something. But now ... Scott’s bluff would get caught. ChatGPT would bail on Scott real quick there, boys. 😁 Bwaaahahahaha