Holographic imaging


Hi folks, is the so-called holographic imaging heard with many tube amplifiers an artifact? With solid state you only hear holographic imaging if it is in the recording, but with many tube amps you can hear it all the time. So does solid state fail in this department? Or are those tube amps not telling the truth?

Chris
dazzdax
Dazzdax, I am sorry my comment derailed your thread. I had no idea this would happen.

Detlof, I don't share your opinion about who is being logical and who is not.
The topic of distortion perception is a fascinating one. In my opinion, it makes sense to focus on those distortions that are subjectively objectionable, but not worry about those that are of no audible consequence.

At this point I'm completely unconvinced by Roger Paul's claims. I do not believe that frequency can be modulated by modulating intensity in an electronic circuit. And if such modulation is occurring on a low-level scale, I do not believe it is of any audible significance. Based on some of Roger Paul's examples, I think it would be obscured by a well-established characteristic of human hearing called "masking". Briefly, masking refers to the ear's tendency to completely ignore a low-level signal that is close in frequency to a simultaneous high-level signal.
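
To put a rough number on masking, here is a back-of-the-envelope sketch of mine (not Duke's). It uses Zwicker's Bark scale and the Schroeder spreading function; the fixed ~15 dB tonal-masker offset is a ballpark assumption, where real psychoacoustic models use frequency-dependent values:

```python
import math

def bark(f_hz):
    """Zwicker's approximation of the Bark critical-band scale."""
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

def spreading_db(dz):
    """Schroeder et al. spreading function (dB) at a Bark distance dz from the masker."""
    return 15.81 + 7.5 * (dz + 0.474) - 17.5 * math.sqrt(1.0 + (dz + 0.474) ** 2)

def masked_threshold_db(masker_hz, masker_spl, probe_hz, offset_db=15.0):
    """Rough masked threshold near a tonal masker; offset_db is an assumed ballpark."""
    dz = bark(probe_hz) - bark(masker_hz)
    return masker_spl + spreading_db(dz) - offset_db

# An 80 dB SPL tone at 1 kHz vs. a 40 dB SPL tone at 1.1 kHz:
print(f"{masked_threshold_db(1000.0, 80.0, 1100.0):.1f} dB SPL")  # ~63 dB
```

With these numbers the masked threshold near 1.1 kHz comes out around 63 dB SPL, so a simultaneous 40 dB tone there would simply not be heard.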

If Roger Paul is dealing with changes in gain of 1/100th of a dB or less (as he claims), then any hypothetical Doppler-type frequency-bending (which I do not believe takes place) would be completely ignored by the ear.
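
For scale, a quick calculation (mine, just to quantify how small 1/100th of a dB is):

```python
# A 0.01 dB gain change expressed as an amplitude ratio:
ratio = 10 ** (0.01 / 20)
print(f"{ratio:.6f}")  # ~1.001152, i.e. about a 0.12% amplitude change
```

For comparison, the just-noticeable difference in level is usually quoted at around 0.5 to 1 dB, so 0.01 dB sits far below anything the ear registers as a level change.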

Duke
dealer/manufacturer
I think audiophiles have spoken with their wallets and interest.

One misguided owner here does not a successful product make.
I would like to make one more point if I can, and then I think I will pretty much give up. I am not trying to force my opinion or concepts on anyone. I do think that if there were a better way to “explain” it, we would all be on the same page. Take this example:

You are in a room filled with people at a party. You are having a conversation with someone right in front of you. Now you decide to listen in on another conversation (eavesdrop) that is taking place off to one side of you, several feet away. Without turning your head, you pick up everything they are saying, and during this time you miss anything said to you by the person right in front of you. As soon as you turn your “attention” back to your partner, you can no longer eavesdrop on the other conversation. Why is that? And how is it that you can turn your “attention” without turning your head?

Here is how this works. Your ears are fed by streaming acoustic energy that has many properties, but only two are critical. The first is the raw acoustic data: the changes in air pressure that constitute the “sound” made by an object. The second is the differential phase caused by the separate arrival times of this raw data at your two ears. Of course, if you are looking directly at the sound object, the arrival times will be identical (a null). If you turn your head to the right, your left ear is slightly closer to the sound object, so the raw data arrives there first. The difference in arrival times gives the brain a precise fix (like a global positioning system) on the physical location of that sound source. None of this should come as news to anyone who understands the basics of how we hear.
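
These arrival-time differences can be put into numbers with the classic Woodworth spherical-head approximation. This is a sketch of mine, not anything from Roger's circuit; the 8.75 cm head radius and 343 m/s speed of sound are conventional assumed values:

```python
import math

C = 343.0    # m/s, speed of sound at roughly 20 C (assumed)
R = 0.0875   # m, a conventional average head radius (assumed)

def itd_us(azimuth_deg):
    """Woodworth spherical-head estimate of the interaural time difference,
    in microseconds, for a distant source at the given azimuth
    (0 = straight ahead, 90 = fully to one side)."""
    theta = math.radians(azimuth_deg)
    return (R / C) * (theta + math.sin(theta)) * 1e6

for az in (0, 15, 45, 90):
    print(f"{az:3d} deg -> {itd_us(az):6.1f} us")
# 0 deg gives the null described above; 90 deg gives roughly 650 microseconds.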

Here are two ways to hear the “other” conversation. One is to turn your head toward its physical location until you are looking directly at the speakers, at which point your head and your attention are fixed on them. It is also at that point that your brain chooses to lock onto raw data with identical arrival times; raw data with differing arrival times is rejected or filtered out, so other sounds are ignored as unimportant or uninteresting.

The second way of hearing the “other” conversation is to keep your head facing straight ahead, as if you were still involved in your primary conversation, while allowing your brain to “scan” raw data arriving at increasingly longer differential times until what you hear is of interest. At that point you willingly latch onto this specific arrival-time delay as important and reject the arrival times from all other sources, including the one right in front of you. You have become internally phase-locked to an outside physical location.
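
That “scan for increasingly longer differential times” step is essentially what engineers do with a cross-correlation over candidate lags. A minimal NumPy sketch of the idea (my illustration, not H-CAT circuitry):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Scan all candidate lags between the two ear signals and return the one
    with the strongest agreement (positive = right ear hears the sound later)."""
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / fs

# Demo: the same noise burst reaches the right ear 0.5 ms after the left.
fs = 48_000
src = np.random.randn(fs // 10)       # 100 ms of "raw data"
d = int(0.0005 * fs)                  # 0.5 ms = 24 samples at 48 kHz
left = np.concatenate([src, np.zeros(d)])
right = np.concatenate([np.zeros(d), src])
print(f"estimated delay: {estimate_itd(left, right, fs) * 1e3:.2f} ms")  # ~0.50
```

Picking the lag with the maximum correlation is the mechanical analogue of “latching onto” one specific arrival-time difference and rejecting the others.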

Even if you briefly turn your head directly toward them and back to straight ahead, your brain will easily compensate for the rotation, keeping the phase lock in place.

The moment your partner snaps your attention back to him, your lock on the differential times is broken and you default back to identical arrival times. The other conversation is now suppressed by the brain's ability to filter and select what it hears.

It can be seen from this that the ear/brain system is a highly sophisticated mechanism for discerning the physical location of a sound object, with an ability to detect phase/time relationships on a scale so small as to be almost unbelievable.
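
The acuity being pointed at here is real: under controlled headphone conditions, listeners are commonly reported to detect interaural time differences on the order of 10 microseconds, which corresponds to a remarkably small path difference:

```python
# Path-length difference for a ~10 microsecond just-noticeable ITD, at c = 343 m/s:
c, itd = 343.0, 10e-6
print(f"{c * itd * 1000:.1f} mm")  # ~3.4 mm
```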

There is one very important factor that has everything to do with this human gift, and it is something we rely on heavily: sound travels at Mach one through the medium of air. It is a constant; it does not vary. It is the one fixed ingredient in the larger formula the brain uses to accurately pinpoint a sound object. If you tamper with the velocity of the medium, you can confuse the brain about what it thinks is the precise location of an object. In fact the brain is smarter than that: it can recognize the medium's failure to maintain a fixed velocity and immediately “knows” the sound is fake. In real life the speed of sound is virtually written in stone (ignoring long-term changes in temperature, humidity, etc.).

My circuitry is designed to provide a velocity-stabilized amplifier that is more in tune with the kind of stability you expect to find in air. The closer you come to the stability of Mach one, the more your brain accepts the notion that you are in the same room with these people. IOW, if you can feed your brain the raw data at Mach one, the listener will have the same acoustic sensation as someone who was actually present at the original recording. H-CAT is a method of metering the output velocity so as to let the brain work in a familiar, stable environment. The brain begins to trust what it hears as first-hand, and not as a poor attempt to recreate a sound event riddled with phase errors.

The bottom line is that your audio system must play to your brain – not your test equipment.

Roger
The speed of sound in air always varies, depending on air pressure, elevation, temperature, and humidity.
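
For reference, the standard ideal-gas approximation for dry air is c = 331.3 * sqrt(1 + T/273.15) m/s with T in degrees Celsius; temperature is by far the dominant factor, with humidity and pressure contributing fractions of a percent. A quick sketch:

```python
import math

def speed_of_sound(temp_c):
    """Speed of sound in dry air, ideal-gas approximation."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

for t in (0, 20, 35):
    print(f"{t:2d} C: {speed_of_sound(t):6.1f} m/s")
# 0 C: 331.3, 20 C: 343.2, 35 C: 351.9, roughly a 6% spread: hardly "written in stone".
```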