@markwd - why do you think I care about your opinion?
@mikhailark Because of all the rich commentary I bring to the discussion thread! There's null testing via software analysis, and the reminder that multitone measurements do rule out some of the ambiguity around the specter of single-tone testing. There's my always helpful reminder that it is important to show rather than just tell, and how that is a critical part of our modern technological society. There are my insights on philosophical ideas dating back to Socrates (at least). But mainly so that others can pick up on these diverse opinions and insights and make good decisions about resources and ideas related to our shared love of music and the technology of music reproduction! Sooooo coooool!
Bruno Putzeys is obviously a brilliant circuit designer, but he's not quite right when he says "the ear is not a spectrum analyzer." Maybe he hasn't looked into hearing and auditory perception as much as he's looked into circuit design. When a complex wave -- like the sound of music -- reaches our cochlea (as waves propagating down the cochlea through a viscous fluid), different frequency components of that complex wave maximally vibrate different physical locations along the basilar membrane running through the cochlea. Attached to those specific points of the basilar membrane are inner ear hair cells that fire in response to that particular frequency component, because it is that component that moves them. Additionally, the nerve firing driven by the hair cells is phase-locked to the signal (at least for signals below 5 kHz): the nerves fire at the same point in the frequency's wave over and over.

So in fact, our ears take an incoming complex signal, break it down into component frequency parts, and track each frequency both via the timing of its cycle and the degree of basilar membrane displacement, and our brains then compare that data to make determinations about how to perceive the sound. We compare interaural time and level differences, and the spectral and phase cues created as sounds reflect off our left and right pinnae, to assign location. We decide which components should be heard together as a fused tone with a timbre and which don't belong to it and so are heard as something separate (we use lots of information for that, including learned knowledge of what instrument X sounds like, which sound components start and stop more or less together, which components behave continuously and which are discontinuous with those, the location of each of these individual components, etc.). The ear and brain work very much by breaking down a complex sound pressure wave into component parts and analyzing them. They just then go a step further and concoct an auditory perception out of the data they collect, and that is what we hear.

There actually is another explanation -- not just unconscious bias or differences in the stimulus -- for differences in people's experience listening to the same equipment. Frequency-following response (FFR) studies -- in which electrodes on the scalp track the brain activity of people doing normal listening to sounds, and the recorded electrical signal, remarkably, can be played back and resembles the original sound -- show that even for individuals with clinically normal brains and ears, each person has an individually different FFR to the same stimulus, and the differences remain consistent for each individual relative to others over time. Our brains each actually are "hearing" something slightly different.
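To make the "the ear does break sound into frequency components" point concrete, here's a quick toy sketch in Python (my own illustration, nothing Bruno or anyone else has published): a small bank of bandpass filters stands in for positions along the basilar membrane, and each band's output energy shows which component of a two-tone "complex wave" excites that place. The signal, center frequencies, and bandwidths are all made up for the example.

```python
# Crude "place coding" analogy: a complex wave run through a bank of bandpass
# filters; each band's output energy shows which component excites that "place".
import numpy as np
from scipy import signal

fs = 48000                      # sample rate, Hz
t = np.arange(fs) / fs          # 1 second of signal
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 2200 * t)  # two-component "complex wave"

# Hypothetical center frequencies standing in for positions along the basilar membrane
centers = [250, 440, 1000, 2200, 4000]
for fc in centers:
    lo, hi = fc / 2 ** 0.25, fc * 2 ** 0.25     # half-octave band around fc
    sos = signal.butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y = signal.sosfiltfilt(sos, x)
    print(f"{fc:5d} Hz band: RMS = {np.sqrt(np.mean(y ** 2)):.3f}")
# Only the 440 Hz and 2200 Hz bands carry significant energy -- a rough analogue of
# different components maximally exciting different points along the membrane.
```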
We also know from FFR studies and other kinds of studies that, for example, speakers of tonal languages have different FFR pitch-consistency responses than speakers of non-tonal languages; that the descending auditory pathway (which sends brain signals to the inner ear and seems to play a role in the active gain control and frequency selectivity of the ear's outer hair cells) functions a little differently in trained musicians than in non-musicians; heck, we even know that FFR pitch stability (having the same FFR response over and over to the same pitch) is worse in children growing up in poverty with poorly educated parents. That is, science shows that for the same stimulus, individuals have different brain responses, and that while some of those differences are biomechanical (women on average have smaller cochleas than men, making their basilar membranes stiffer and giving them, on average, a different hearing response than men), many involve things that are learned and conditioned, or are behavioral (differences in attentive hearing vs. inattentive hearing).

That's all before we get to biasing factors like knowing X costs 3 times more than Y, or the impact of other senses, like sight, on auditory perception, which also have very real and substantial impacts. And it's before we get to how we develop cognitive models for preference. What sounds "natural" to any one of us in something that we know is not natural -- a recording -- is a complex psychoacoustic construction that a measurement of the stimulus alone can't explain, but which also may not correspond to another person's psychoacoustic construction of "natural sounding," or even to another person's auditory experience as tracked by differences in brain activity through FFR.

So, to bring it back to D/S digital reconstruction filters: a lot of people prefer, say, a minimum-phase reconstruction filter in a D/S DAC, some people might like these megatap filters, some people like an apodizing filter, and some people might even like a sharp linear-phase filter right at Nyquist (many people might not be able to perceive a difference at all). We absolutely can know and measure the response of each of those filters, and design them accordingly (you can play around with them yourself if you like with something like HQPlayer feeding a NOS DAC). What we can't measure, at least not directly, is individual preference or average general preference. We have to measure those things indirectly, through controlled, single-variable listening with a variety of test subjects representing the whole range of listeners, to have any kind of sense of them.

It's not that we can't measure the sound -- and I don't think Bruno is saying we can't measure the filters, just that what he and his team think sounds best isn't necessarily the classically ideal filter. It's not that inexplicable magic is required. It's that people hear and are sensitive to different things in a given sound, leaving product developers with choices to make: do you make something that measures correctly, do you make something that sounds good to you, or do you make something that sounds good to most people according to studies and focus-group data about group preference?
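If you want to see what one of those filter trade-offs looks like in code, here's a minimal sketch (my own, purely illustrative; no relation to any actual DAC's filter) comparing a sharp linear-phase lowpass FIR with a minimum-phase version derived from it using SciPy. The tap count and cutoff are arbitrary choices for the example.

```python
# Compare a linear-phase lowpass FIR with a minimum-phase version derived from it,
# as a stand-in for the kinds of reconstruction-filter trade-offs discussed above.
import numpy as np
from scipy import signal

fs = 44100
cutoff = 20000                               # Hz, just below Nyquist (22050 Hz)
ntaps = 255

h_lin = signal.firwin(ntaps, cutoff, fs=fs)  # linear phase: symmetric taps, pre- and post-ringing
h_min = signal.minimum_phase(h_lin)          # minimum phase: energy pushed to the front, no pre-ringing

# Linear phase peaks in the middle (constant delay, ringing on both sides of the main tap);
# minimum phase peaks near the start (no pre-ringing, but phase is no longer linear).
print("linear-phase peak at tap", int(np.argmax(np.abs(h_lin))), "of", len(h_lin))
print("minimum-phase peak at tap", int(np.argmax(np.abs(h_min))), "of", len(h_min))
```

Both versions are completely measurable and deliberately designable; the disagreement is only over which compromise people prefer to listen to.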
Fortunately, most of the time, things line up -- people on average in lab tests seem to have a preference for flat, full-range frequency response and low distortion in speakers, but even speakers these days are commonly built not for flat anechoic response but to comport with a predicted preference curve combining on- and off-axis response, built on Floyd Toole's research. It's euphonic, it's not accurate, but it is measurable and is being designed for, not just arrived at through accident or trial and error. In this hobby there are obviously disparities in preference, in ideas of which not-actually-natural things sound "natural," just like there are disparities in the music we listen to, in the type and quality of the recordings we listen to, and definitely in our home listening acoustic setups. I really don't think it's a matter of the bench tests and other kinds of tests missing information that is relevant to the sound the equipment is producing. I think it's just that auditory perception and sound preferences vary among individuals.
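For what it's worth, here's a rough sketch of how that kind of on/off-axis combination can be computed. The 12%/44%/44% weighting follows my recollection of the CTA-2034-style estimated in-room response, so treat it as an assumption, and the three input curves are synthetic placeholders rather than real speaker data.

```python
# Toy predicted in-room response: a weighted blend of on- and off-axis curves.
import numpy as np

freqs = np.geomspace(20, 20000, 200)              # log-spaced frequency axis, Hz
listening_window = np.zeros_like(freqs)           # flat-ish on-axis response, dB (placeholder)
early_reflections = -0.5 * np.log2(freqs / 20)    # gently falling off-axis curve, dB (placeholder)
sound_power = -1.0 * np.log2(freqs / 20)          # more steeply falling power response, dB (placeholder)

# Assumed weights: 12% listening window, 44% early reflections, 44% sound power
pir = 0.12 * listening_window + 0.44 * early_reflections + 0.44 * sound_power

tilt = pir[-1] - pir[0]
print(f"Predicted in-room tilt from 20 Hz to 20 kHz: {tilt:.1f} dB (gently downward)")
```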
Well, null tests are common enough with music signals, loopback testing too. And noise is used in testing things like DAC filter performance. Noise as a test signal is common enough, in addition to both individual-frequency testing and frequency-sweep testing (which is going to be better at showing you the spectrum of harmonic distortion than what you'll be able to glean from noise). Noise is challenging for some of these tests -- you can't measure SNR with noise, obviously, and with a DAC, if you use random noise like white noise you can wind up with randomly occurring overs, I guess. And of course it's not really a signal that's much like music; I mean, music doesn't have anything like random frequency content and constant sound power across all frequencies, unless you're listening to Merzbow or something.
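Since null testing keeps coming up, here's a bare-bones toy version of the idea in Python (my own sketch, not any particular analyzer's method): subtract a captured signal from the reference and look at the level of what's left. The "device under test" here is just a simulated mild nonlinearity so the residual isn't zero.

```python
# Minimal null-test idea: residual = captured - reference, then measure its level.
import numpy as np

fs = 48000
t = np.arange(fs) / fs
reference = 0.5 * np.sin(2 * np.pi * 1000 * t)     # stand-in for a music/test signal

# Pretend DUT: unity gain plus a tiny 3rd-order nonlinearity (purely illustrative)
captured = reference + 1e-4 * reference ** 3

residual = captured - reference                    # the "null" -- ideally silence

def db(x):
    # RMS level in dB relative to full scale (rough, for illustration)
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-20)

print(f"reference level: {db(reference):6.1f} dB")
print(f"residual level:  {db(residual):6.1f} dB")
# In a real null test you'd also have to align gain, clock/sample rate, and latency
# before subtracting, or the residual will be dominated by those, not by distortion.
```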
I'm just noting that our hearing does in fact work in some ways that are analogous to an FT, in that our ears and brains break down an incoming complex wave into its component frequencies. Our ears and brains don't seem to have to flip between frequency and time domains -- and that's a substantial difference in kind -- we seem to be able to process both simultaneously, by processing information from the location on the cochlea that is activated and the timing pattern of the neural firing so activated. That holds at least up to about 4 or 5 kHz, above which our neural ability to phase-lock to the signal breaks down, our perception of pitch starts to break down, and our ability to resolve timing with respect to frequency becomes less precise and depends on information we can glean from other biological processes. But like anything else, our ears and brains are definitely far from infinite in resolution, highly non-linear even within the frequencies, SPLs, and time increments that we can resolve, and limited in precision too.
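As a loose analogy for keeping time and frequency information at the same time (my own illustration, emphatically not a model of hearing), a short-time Fourier transform does something vaguely similar: it reports which frequencies are present in each short slice of time. Parameters here are arbitrary.

```python
# STFT of a tone that changes pitch halfway through: coarse time AND frequency info at once.
import numpy as np
from scipy import signal

fs = 16000
t = np.arange(fs) / fs
# A tone that switches from 440 Hz to 880 Hz at t = 0.5 s
x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 880 * t))

f, tt, Z = signal.stft(x, fs=fs, nperseg=2000)
dominant = f[np.argmax(np.abs(Z), axis=0)]          # strongest frequency in each time slice
print("dominant frequency, first half vs second half: "
      f"{dominant[len(dominant) // 4]:.0f} Hz vs {dominant[3 * len(dominant) // 4]:.0f} Hz")
```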