Directional cables - what does that really mean?


Some (most) cables do sound different depending on which end is connected to which component. It is asserted that conductor grain orientation determines a preferential current flow. That might well be, but in most (all) cases the audio signal is AC (electrons going back and forth in the cable), with no DC component to justify a directional flow. Wouldn't that mean that, to first order, a polarity (phase) inversion should give the same effect as a cable flip?
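To put the first-order argument in concrete terms, here is a minimal sketch, using an entirely hypothetical direction-dependent resistance (an assumption for illustration, not a model of any real cable). For a memoryless asymmetric element, reversing the cable end-for-end matches inverting the signal's polarity, up to one overall sign:

```python
import math

# Hypothetical "directional" two-terminal element: slightly different
# resistance for the two current directions.  The values are assumptions
# for illustration only, not measurements of any real cable.
R_FWD = 0.10   # ohms when current flows "with the grain" (assumed)
R_REV = 0.11   # ohms when current flows "against the grain" (assumed)

def drop(i):
    """Voltage drop across the element for current i (amps)."""
    return (R_FWD if i >= 0 else R_REV) * i

def drop_flipped(i):
    """Same element with its ends swapped: the current through it is
    reversed, and the measured drop changes sign as well."""
    return -drop(-i)

# A pure AC test current with no DC component.
signal = [math.sin(2 * math.pi * k / 64) for k in range(64)]

normal  = [drop(i) for i in signal]          # cable as installed
flipped = [drop_flipped(i) for i in signal]  # cable reversed
phase   = [drop(-i) for i in signal]         # polarity-inverted signal instead

# The flipped cable reproduces the phase-inverted response exactly,
# up to one overall sign change (absolute polarity):
assert all(abs(f + p) < 1e-12 for f, p in zip(flipped, phase))
# ...and both differ from the as-installed response:
assert any(abs(f - n) > 1e-6 for f, n in zip(flipped, normal))
```

On this toy model, whatever difference a flip makes is carried entirely by the element's asymmetric part, which is the sense in which a polarity flip should mimic a cable flip.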

I'm curious whether there is a different view on this that I have not considered yet.
cbozdog
If this is what you have learned in Part 1, which took four years, I would hate to see what you are going to learn in Part 2 and how long it will take.

….."But we are only interested in the signal moving toward the speakers, the direction that affects the sound. We can forget about the signal when it’s moving in the opposite direction. 🔜 That explains how a wire in an AC circuit can be directional.".....

Color me confused. The above explains that the signal is directional as it travels toward the speaker; it does not explain why the construction of the wire makes it directional, or whether the construction alters the sound produced when hooked up one way or the other relative to the extruding process.
That’s in Part 2, silly goose. 🦆 First things first. Part 1 explains why wire or fuses in AC circuits are directional - assuming a physical non-symmetry of the wire - and establishes what the audio signal actually is and isn’t. By the way, we’ve already shown the theory regarding the drawing of wire through a die has some problems. So if anyone has any brilliant ideas feel free to chime in. 

Pop quiz 🤗 - The skin effect states that very high frequencies travel on or near the surface of the wire. People sometimes claim the skin effect is the reason wires are directional, i.e., that high frequencies are distorted in one direction 🔚 but not the other 🔜. But the “signal” is current and voltage. It’s not (rpt not) audio frequencies traveling down the wire. So, what’s going on?
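For scale, the standard skin-depth formula δ = √(ρ / (π f μ)) puts numbers on the pop quiz. A quick sketch for copper, using common handbook values (the wire-size comparison at the end is an assumption about typical speaker cable):

```python
import math

RHO_CU = 1.68e-8           # resistivity of copper, ohm·m (handbook value)
MU_0 = 4 * math.pi * 1e-7  # magnetic permeability of free space, H/m

def skin_depth_m(f_hz):
    """Skin depth in metres for copper at frequency f_hz."""
    return math.sqrt(RHO_CU / (math.pi * f_hz * MU_0))

# At 20 kHz, the top of the audio band, the skin depth is roughly half a
# millimetre - comparable to the radius of common 16-gauge speaker wire -
# so audio-band current still occupies most of the conductor's
# cross-section, and far more so at lower frequencies.
print(f"{skin_depth_m(20_000) * 1000:.2f} mm at 20 kHz")
print(f"{skin_depth_m(50) * 1000:.2f} mm at 50 Hz")
```

In other words, whatever makes a wire directional, audio-band skin effect is a weak candidate: the depth at 20 kHz already spans most of an ordinary conductor.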
jetter, (and to all)

The human ear is like a diode. In the most serious way. We only hear the leading edges of transients and micro-transients and their interrelations in time and level.

The rest, the ear does not hear, in the literal sense. The cilia of the ear are pushed back by the positive wavefront, and the signal is sent through the nervous system for a certain amount of time. The longer the cilia are pushed back, the greater the loudness perceived. This is also why cilia damaged by extreme sound levels will create the condition of tinnitus.

This is easily observed - the whole positive-transient-only thing - by looking at the output of a horn speaker, where the distortion of the waveform varies between roughly 15% over a narrow bandwidth (at the horn’s best coupling frequency) and an average of 20-40% distortion.

Yet people perceive horns as low distortion, due to how the ear works.

The same point of analysis applies to all aspects of audio signal handling and design criteria.

Basically put, 100% of the human hearing system is tied into intently analyzing signal with the world’s best FFT analyzer and computer (the ear/brain) on approximately 10% of the audio signal - and that part of the signal is exactly where nearly 100% of the distortion introduced by the gear (any gear) takes place.

Basically, if you mapped out the spots in the signal where all gear distorts, all of the distortion would be crammed into the transients and micro-transients.

Engineering makes the HUGE MISTAKE, the FUNDAMENTAL ERROR, of taking 100% of the signal and then mathematically calculating a distortion figure for the given scenario.

Which is fine, if all you are doing, is engineering. But not really, when you think about it.

In this case, you are trying to correlate your measurements to what is HEARD BY HUMAN EARS. If one is not doing this, then the math, the testing, and the regimen are invalid.

If we go back and weight the measurements, committing the distortion analysis to only the distortion which occurs in the transients and micro-transients, just as the human ear does, then the measurements begin to correlate properly with what the ear says it is hearing.
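For what it's worth, that weighting idea can be sketched in a few lines - purely an illustration, with an arbitrary slew-rate threshold standing in for "transient." A hard clipper damages only the slow-moving wave peaks, so it scores zero on the transient-weighted metric while a whole-signal RMS figure still flags it:

```python
import math

def transient_weighted_rms(ref, out, slew_thresh):
    """RMS error counted only at samples where the reference's
    sample-to-sample slew exceeds slew_thresh (an assumed threshold
    standing in for 'transient')."""
    errs = [(out[k] - ref[k]) ** 2
            for k in range(1, len(ref))
            if abs(ref[k] - ref[k - 1]) > slew_thresh]
    return math.sqrt(sum(errs) / len(errs)) if errs else 0.0

# Reference: a pure tone.  "Device" output: the same tone hard-clipped
# at 0.9, which damages only the slow-moving wave peaks.
ref = [math.sin(math.pi * k / 40) for k in range(400)]
out = [max(-0.9, min(0.9, x)) for x in ref]

whole_rms = math.sqrt(sum((o - r) ** 2 for o, r in zip(out, ref)) / len(ref))
transient_rms = transient_weighted_rms(ref, out, slew_thresh=0.05)

assert whole_rms > 0.0        # whole-signal metric sees the clipping
assert transient_rms == 0.0   # transient-weighted metric does not
```

Whether a given slew per sample counts as a micro-transient is exactly the weighting question the post raises; the point here is only that the two metrics can disagree about the same device.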

The ear is not wrong, the people who trust their ears are not wrong.

The engineering tack and methods, as they stand, are totally off in the wrong direction - aimed at a different area and not valid for use in audio.

Now, transient behavior in a cable is a complex matter, where the complex impedance is dynamic and shifting under those transient and micro-transient signal loads.

This is 1000% more true for a fuse. A fuse is designed from the ground up to respond to and clamp down on transients and micro-transients, dynamically shifting its complex impedance upward while dealing with them.

We can hear fuses, we can hear distortion, we can hear cables, and we can hear and differentiate equipment by ear - due to how distortion occurs in all things audio and electrical, and how the ear works.

The problem is the engineering mind and its inability to connect to the understanding of how the measurements and their weighting NEED to take place.

This is due to the engineering minds (that try to argue this subject) not grasping the scope of the whole situation, i.e., they are missing half the equation.

The next "but...yeah..." pouty argument to come out of the engineering mind is that the folks with the good ears can’t possibly hear that stuff, as THEY (the engineering-minded folks, many of them) can’t hear it.

well, well, well....

Here comes the ugly part, for the human ego. Ears are like the mind, like IQ. They vary. From moron to genius level. Some can hear complex signals and decode them without self lies, while another brain ear combination cannot unravel the complexities of the signal at all.

To boot, the latter mind-and-ear type superimposes learned prior signal memory on top of new signals, to simplify listening and speed up signal recognition in the brain (part of ear/brain language decoding in situ and in real time). Deaf, brain dead... and repetitive. Literally. Ouch.

Thankfully for some (or anyone if they put in the effort), the ear, like the mind, is plastic and can learn. But the story of the ear is quite complex and might take something akin to a chapter from a book level of writing for me to sit here and explain it all, and that is not happening.

To witness the ear working in the conscious mind - to see it putting its filters into position and then doing its living, breathing FFT-analysis thing - just put earplugs in at the mall, walk around for a few minutes to half an hour, and then pull the earplugs out.

At first the brain’s filters will be off (they’ve relaxed and aligned to a different orientation), but in the first 2-5 seconds you can hear and note them coming on-line and filtering the giant wash of sound so you can decode out of the whole noisy mess. Pay attention to those first 2-5 seconds; all the live and living brain-ear action happens right then.
Thanks very much for this clear reflection... better said than I was able to say it. My best to you, Teo_audio.
This is closer to reality. As long as we’re talking about it: all that about the cilia of the ear and how the neurons transfer energy is SO silly and old school. Silly cilia! Auditory specialists and neuroscientists are a long, long way from Tipperary. In reality the brain acts like a transceiver - a transceiver that’s tuned to many frequencies, not just audio frequencies, and not only conscious information.

From my website, “How the Clever Little Clock Works.” (Excerpt)

Chronological Memory
Our internal clock controls how memories are stored and retrieved. If we wish to reminisce about a favorite book or movie we can recall the highlights fairly easily, and if we put our minds to it we can remember scenes from movies in remarkable detail - scenes many hundreds of frames or more in length. A movie’s images and sounds are "videotaped" by our eye and brain, then stored in memory chronologically. Furthermore, the movie’s images are integrated, synchronized with the movie’s soundtrack in memory. If our memory were not chronological we wouldn’t be able to recall high-density, multiple-frame scenes from movies and replay them in our mind’s eye. The brain even has a Scene Selection feature similar to a DVD player’s, so we can consciously select specific scenes from a movie and replay them in our mind’s eye - like, say, Remember Sammy Jankis or Memories Can Be Distorted from Memento.

Our internal clock is always running, whether we’re conscious of it or not. Sometimes we awake just before the alarm clock goes off because we know it’s time to get up. All the things we see and hear and do are time-stamped with Present Time coordinates. Thus, the next day, the following week or ten years later we’re able to associate specific times in the past with our experiences. "What were you doing last Tuesday evening?" "Let’s see, at 8 o’clock I was watching Total Recall. At 9 o’clock I watched Altered States."

So, whether we’re aware of it or not, we maintain a continuous record of events, sounds, words and images, including time of occurrence. We can sing a song from memory or play a musical instrument. With a little effort we can remember passages from books and movies like Roy Batty’s death speech in Blade Runner, "I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhauser gate. All these moments ...will be lost... in time... like tears... in rain. ...Time to die."

Time is Relative
The Clever Little Clock addresses an esoteric but fundamental problem that occurs when playing an LP, CD, DVD or any other audio or video media. This problem also occurs when watching taped programs on television or listening to recorded programming on the radio in your car or at home. In all of those cases the observer is confronted - subconsciously - by time coordinates that are different from the Present Time coordinates he’s been using his entire life to time-stamp sensory information. What are these interfering time coordinates, where do they come from and why are they a problem?

The alien time coordinates are contained in the recording (or videotape). The time coordinates (of what was then Present Time) of the recorded performance, millisecond by millisecond, are captured inadvertently along with the acoustic information. When a recording is played, the time coordinates from the recording session (which are now Past Time coordinates) are reproduced by the speakers along with the acoustic signals of the recorded event. Those Past Time signals become entangled, integrated in the listener’s mind with Present Time signals. Because the listener is accustomed to using Present Time signals to synchronize his chronological memory, he subconsciously perceives the confusing, interloping Past Time signals as a threat. This perceived threat produces the fight-or-flight response, which in turn degrades his sensory capabilities. The reason that live television broadcasts, like the Superbowl and the 2010 Olympics, are generally observed to have superior audio and video compared to taped broadcasts is that they don’t contain Past Time signals, only Present Time ones.

The time coordinates on the recording are associated with the 4-dimensional spacetime coordinate system (x, y, z, t), where t ranges between the start time and end time of the recording session. While you could say that t0 of the spacetime coordinate system marks the first instant of the Big Bang, it’s the relative difference between Past Time and Present Time that’s important, not the difference between t0 and Past Time or Present Time.