What do we hear when we change the direction of a wire?


Douglas Self wrote a devastating article about audio anomalies back in 1988. With all the necessary knowledge and measuring tools at his disposal, he did not detect any of the supposedly audible changes in the electrical signal. Self and his colleagues were sure they had proved the absence of anomalies in audio, yet over the past 30 years audio anomalies have not gone away; at the same time, the authority of science in the field of audio has increasingly been questioned. It is hard to believe, but science still cannot clearly answer the questions of what electricity is and what sound is! (see the article by A. J. Essien).

For your information: to make sure that no potentially audible changes in the electrical signal occur when we apply any "audio magic" to our gear, no super equipment is needed. The smallest step change in amplitude that can be detected by ear is about 0.3 dB for a pure tone; in more realistic situations it is 0.5 to 1.0 dB. This is roughly a 10% change (Harris, J. D.). At medium volume, the voltage amplitude at the output of an amplifier is approximately 10 volts, which means the smallest audible difference would correspond to a change of about 1 volt at the output. Such an error would be impossible to miss even with a conventional voltmeter, but Self and his colleagues performed far more accurate measurements, including ones made directly on the music signal using the Baxandall subtraction technique, and they found no error even at this level.
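As a quick sanity check on the arithmetic above (a minimal sketch; the 10 V reference is the figure quoted in the paragraph, everything else is standard decibel math):

```python
import math

def level_change_db(v_ref, v_new):
    """Level change in dB between two voltage amplitudes."""
    return 20 * math.log10(v_new / v_ref)

# A 1 V step on a 10 V amplitude: close to the quoted audibility threshold.
print(round(level_change_db(10.0, 11.0), 2))      # ~0.83 dB

# Voltage change corresponding to exactly 1.0 dB at 10 V:
print(round(10.0 * (10 ** (1.0 / 20) - 1), 2))    # ~1.22 V
```

So a 1 V change on a 10 V output sits right around the quoted 0.5 to 1.0 dB detection range, which is why an ordinary voltmeter would suffice to catch it.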

As a result, we are faced with an apparently unsolvable problem: those of us who do not hear the sound of wires, relying on the authority of scientists, claim that audio anomalies are BS. However, people who confidently perceive this component of the sound are forced to draw the only other conclusion possible in this situation: the electrical and acoustic signals contain some additional signal(s), still unknown to science, which we perceive with a certain sixth sense.

If there are no electrical changes in the signal, then there are no acoustic changes either, and therefore hearing does not participate in the perception of anomalies. What other options can there be?

Regards.
anton_stepichev
@manueljenkin1,

It's a pity that Eric Juaneda from Junilabs ignores the questions. My friend asked him to explain the principles of his file optimization, or at least to say something about this interesting thing. He didn't answer.
In the meantime, I have become convinced that there must be some non-physical explanation for the change in sound that we perceive. Strangely, the only thing the optimization program does is load the file into memory, wait for a while, and write it back to the hard disk. At least that is what a programmer told me after he decompiled the program and analyzed the code. Why this changes the sound of the file is unclear.
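Based purely on that description, the whole process could be sketched like this (a hypothetical reconstruction; the function name and the wait time are my assumptions, not anything from the decompiled code):

```python
import time

def optimize_file(path, wait_seconds=120):
    """Read the file into RAM, wait, then write the identical bytes back,
    mirroring the behaviour the programmer reportedly found in the code.
    The 120-second default is a guess at the 'couple of minutes' wait."""
    with open(path, "rb") as f:
        data = f.read()          # load the whole file into memory
    time.sleep(wait_seconds)     # idle period, as reportedly observed
    with open(path, "wb") as f:
        f.write(data)            # write the same bytes back to disk
```

Bit for bit, the output is the same as the input, which is exactly why the reported change in sound is so puzzling.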

I've done some research on how people perceive the difference in optimized files. There is no repeatability here: some prefer the original files, some the optimized ones, and some non-audiophiles do not perceive any difference at all.

However, here we certainly have another confirmation that digital audio is far from perfect, and that audiophiles hear a difference in the sound of files with the same checksum. There must be something more than just conventional physics to explain this phenomenon.

The developer of this player did answer the questions of another person who asked similar things. I guess he is getting repeated questions and hence not finding time to respond to everyone. The player loads the file to RAM, performs an "optimization" (the specifics are not described, but it looks like it's there in the code), waits for a couple of minutes, and then stores the file back to the drive.

Regarding user preference, I am actually not fond of the result of the first optimization: it sounds a bit distant and veiled, even though it is clearer than the original file. But run the same file through the optimization process 3-4 times and the veil is gone, while the clarity remains (and actually improves); at that point it is definitely better than the stock file in all respects. If the users who preferred the original files were comparing against a single optimization pass, I recommend giving 3-4 passes a try. At present it doesn't seem possible to optimize multiple files at once, so that's a cumbersome task. I can hear the differences for sure, and I am working on getting a true double-blind test done (it's not easy to design one that doesn't have loopholes).

Regarding why it works, I think it is well within conventional physics; we just need to analyze it more deeply than our current FFT-based methods allow (at present we mostly analyze a very small set of test tones, and I don't think much that is conclusive can be obtained from that). In a normal storage disk, every bit is stored as a set of charges in a cell (typically a floating-gate NAND cell), and the conditions under which the write happens could plausibly manifest as differences in the structure of the charges and fields stored in the cell, such that the next access after optimization has either less noise or less correlated noise. Also note that RAM and normal storage work in different ways: PC RAM is Dynamic Random Access Memory with constant refreshes (volatile), while normal storage is non-volatile and retains data once stored.

Digital circuits work with thresholds. Above a certain threshold the level reads as 1, below it as 0 (or vice versa in some implementations), and there are boundary conditions that designers have to work hard on to ensure data integrity is maintained. This is the reason why you don't magically get infinite clock speeds. There is more to it in modern devices (multi-level and triple-level cells, etc.), and a lot of algorithmic machinery goes into it.
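The threshold idea above can be illustrated in a few lines (an illustrative sketch; the voltage values are made up for the example, not the specs of any real logic family):

```python
def slice_bit(voltage, threshold=1.4):
    """Regenerate a logic level: above the threshold reads as 1, below as 0."""
    return 1 if voltage > threshold else 0

# Ideal levels plus analog noise: the noise is discarded at each digital
# regeneration, as long as it stays inside the threshold margin.
noisy_samples = [2.7, 2.5, 0.3, 0.1, 2.9]
print([slice_bit(v) for v in noisy_samples])   # [1, 1, 0, 0, 1]
```

This is why noise that would be fatal in an analog path can be harmless in a digital one: the data survives as long as no sample is pushed across the threshold.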

There's a lot of hard work in making a reliable digital system, but it is even harder with analog systems. The problem with analog/mixed-signal systems is that they do not merely work on thresholds. A fair amount of noise may be mostly harmless in a digital system, but it will cause significant issues in an analog/mixed-signal system, because every flaw or deviation causes deviations in the analog circuit (the DACs) that later get amplified in the buffer and amplification stages. So any activity you perform can potentially manifest in the analog circuit, and any task that reduces noise at the source can be beneficial. Grounds act as common points that transfer noise from one place to another. You can claim optical isolation, but it is more fairy tale than reality: optical links have their own jitter and noise footprints, and any attempt to correct them has its own jitter and noise footprints as well. If you are thinking of transformer-coupled isolation, transformers have non-linearities (real-world magnetics do not magically follow ideal abstractions) and other leakage phenomena (AC noise leakage over copper Ethernet has been measured and demonstrated). And I would add that the improvement to SQ from this player is audible even through the iFi micro iDSD BL, which, AFAIK, does have some form of galvanic isolation.

Any circuit can be tweaked to produce flattering numbers in specific scenarios while not being truly capable in others, which makes measurement charts unreliable. Full test coverage for an analog design is impossible at present. I think of typical audio measurements as similar to a vague synthetic CPU benchmark tweaked to show a cell-phone CPU beating a supercomputer (maybe it does, at that specific calculation in that specific optimized software, but not at a real-world task the phone CPU cannot handle; or the same code under an emulation layer on the supercomputer might run faster!).

Yes, there is a massive number of layers, buffers, and PHYs throughout the chain, plus software abstractions, and each abstraction layer generally means longer, less optimal code, which means more processor and component activity, which means more switching noise; there is more still when you consider speculative execution, etc. These factors are accounted for by many of these audio programs. Many of them try to work in a lower-level language with fewer abstractions (some are even written in assembly) and hence generate less noise (one common example is using kernel streaming). So the whole thing actually reinforces the benefits of a customized software system.

It is indeed phenomenal that data-storage access noise seems to pass through all these layers, but if you consider the path, none of the stages has anything to compensate for such fluctuations: as long as they stay within the thresholds of digital circuit operation they are passed through (whereas analog and mixed-signal systems are picky). It is indeed profound that this distinct improvement is not buried in the noise generated by the rest of the link.

Now, if you are considering noise from other CPU activity during idle tasks, say displaying a wallpaper: it would be a gross approximation to think the CPU generates every pixel at every instant and loads it into GPU memory for display; if that were so, there would be no purpose for a GPU. The GPU has a parallel pipeline to generate the frames, has its own architecture with its own noise patterns (not necessarily as high as the CPU's for the same task), and sends them out over the HDMI port; it could very well be almost completely decoupled from the CPU data lines going to USB. Do they influence each other? Very likely. Can one completely mask the differences of the other? Maybe, maybe not. The point is to reduce issues in any area where it is feasible. There is also correlation to consider: certain types of noise correlate more strongly with audio issues than others (for example, an 8 kHz tizz from 125 µs polling when the system priority is set too high, or other issues that cause sudden spikes during polling). So it is not as direct as it may seem, and this area is so deep that we do not yet have any well-established correlation metrics (and are unlikely to any time soon; we have not even figured out human hearing beyond a fairly basic abstraction). Also worth mentioning: a lot of computer tweaks do have modes that remove the image-display load from the CPU, or even run fully headless from the command line.
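The 8 kHz figure follows directly from the polling interval mentioned above:

```python
polling_interval_s = 125e-6            # the 125 µs polling period from the example
print(round(1 / polling_interval_s))   # 8000 Hz, hence an "8 kHz tizz"
```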

What about the abundance of switching components throughout the motherboard? PC PCB design is generally very high-level stuff (very large multi-layer PCBs), and the power-supply design (regulators, etc.) is extremely sophisticated, especially the stages feeding the CPU. A 12 V supply is regulated in multiple stages to ensure there is enough buffering in place to absorb any disruption that changes in power consumption would bring, and it is generally very low noise, because it has to feed multiple layers in the CPU. Can it be improved by a better power-supply input? Surely yes, and a better input also helps the rest of the PCB, but I have to say these supplies are generally extremely well designed. There has been massive development on this front in the low-power area, and it has been successfully carried over to certain areas of audio: the new Burson Audio amps use an SMPS design that sounds very good. You can afford this many levels of buffering and filtering because it is power (a fixed voltage and current with some transient deviation). But you cannot apply multiple such levels to data, which is a switching sequence of pulses, without losing speed. There are not many ways to fully control the noise on the data lines other than controlling your software.

OK, why not a Raspberry Pi instead? Well, just because something is lower power does not necessarily mean it is lower noise. The priority in most budget SBCs is mass production at a very affordable price, and the components used are unlikely to be of a quality comparable to a high-end motherboard, let alone a server motherboard. In fact, you will likely get worse aberrations even in data integrity (unlikely to be an issue at audio data rates, though), and you will need just as many software changes and usability compromises anyway. As mentioned above, the engineering behind desktop-motherboard components is extremely advanced. One can try to customize everything from the ground up, as many companies making digital transports do, but that gets crazy expensive pretty quickly; or one can leverage all the development that has gone into desktop PCs and just try to control the few aspects they did not optimize for audio and noise (you give up speed and ease of use in that scenario, but a single reboot into another OS gives you back a fully functional PC for any other task).
Listening to wires is far more sophisticated than listening to music, so why make your life harder?
@mahjister,
Have you ever tried blessing or praying over wires or components to see what happens afterwards?
Science isn't science after all.
@manueljenkin, I see you are a pro in digital; lots of specialized information, thanks. I wonder how it can help us. You know, no matter how complicated the situation in hardware and software is, if we play two files with the same checksum on the same computer, they should sound identical. But in our case they don't.

So maybe they are somehow not identical? Can you check the sameness of the files? Your opinion is quite interesting.
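Since the question is whether the files are truly identical, here is one way to check (a minimal sketch; the file names are placeholders): compare the files byte for byte, and also through two independent hashes so that a coincidental collision in a single checksum can be ruled out.

```python
import filecmp
import hashlib

def digest(path, algo="sha256", chunk=1 << 20):
    """Stream the file through a hash so large files need not fit in RAM."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def files_identical(a, b):
    """Byte-for-byte comparison; shallow=False forces reading the contents."""
    return filecmp.cmp(a, b, shallow=False)

# Hypothetical file names for the original and optimized versions:
# files_identical("original.flac", "optimized.flac")
# digest("original.flac", "sha256") == digest("optimized.flac", "sha256")
# digest("original.flac", "md5") == digest("optimized.flac", "md5")
```

If the byte comparison and both hashes all agree, the stored data really is identical, and any audible difference would have to come from something other than the bits themselves.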