MQA and the "Pre Ring - Post Ring" Hoax


There's been a lot of misinformed babble on various audio forums about impulse response, digital filters, "time errors", "time correction", "time blurring", and similar pseudoscience claptrap meant to convince audiophiles that suddenly, in the year 2018, there's something drastically wrong with digital PCM audio - some 45 years after this landmark technology was developed by Philips Electronics engineers. Newsflash, folks - it's a scam.

First, let's take a close look at what an impulse or discontinuity signal really is. The Wikipedia definition is actually pretty accurate, thanks to a variety of informed contributors from around the globe. It is an infinite, aperiodic summation of sinusoidal waves combined to produce what looks like a spike (typically a voltage spike, for our purposes) in a signal. Does such a thing ever occur in nature or, more importantly in our case, in music? Absolutely not. In fact, the only things close to it are the voltage spikes that occur when a switch contact is thrown or when an amplifier output stage clips because the supply voltage needed to reproduce the incoming signal waveform has been exceeded. So if this freak-of-nature signal doesn't exist in nature or in music, of what good is it in measuring the accuracy of audio equipment? The answer might surprise you.
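The "summation of sinusoids" point can be seen directly in a few lines of numpy (my own illustrative sketch, not anything from the MQA literature): the spectrum of a single-sample spike contains every frequency at equal amplitude.

```python
import numpy as np

N = 1024
impulse = np.zeros(N)
impulse[0] = 1.0  # a single-sample spike

spectrum = np.abs(np.fft.rfft(impulse))
# every frequency bin carries identical amplitude: the spike is the
# sum of all sinusoids in the band, lined up in phase at one instant
print(spectrum.min(), spectrum.max())  # prints 1.0 1.0
```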

In fact, impulse response - an audio system's response to an impulse signal - is one of the most useful and accurate representations in existence of such a system's linearity and precision, or its fidelity to the original signal fed to it. A lot of focus has been placed on the pre and post ringing of these "discontinuity signals", but what you have to understand is that the ripple artifacts are nothing more than an analog system's limitation in attempting to construct the impulse or discontinuity waveform (all electronics is analog - digital is just a special subset of analog). They are a result of the energy storage devices themselves in creating the signal.

To create a large energy peak, you need large storage devices. The larger the capacitor, for example, the longer it takes to absorb and discharge electric field energy. The same is true of inductors - one type stores electric field energy, the other magnetic. Smaller-value capacitors can react to voltage changes very quickly but are limited in the peak energy they can store and dissipate. But if you combine a large number of high-value and low-value devices in a circuit and apply a voltage spike, you wind up with the kind of oscillations you see in an impulse response graph. Small capacitors, for example, rapidly reach their charge capacity and discharge into larger capacitors that are much more slowly building up charge in the transition from no input voltage to full spike value. This "sloshing around", if you will - this oscillation - is what happens in circuits built to provide extreme voltage attenuation. In a linear, time-invariant system, any rapid change in frequency response or time response has these characteristics.
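As a concrete illustration of the claim that ringing is simply what any bandlimited linear system does to a spike, here's a hedged numpy sketch of my own (an idealized brick-wall filter, not a model of any particular DAC):

```python
import numpy as np

N = 1024
x = np.zeros(N)
x[N // 2] = 1.0  # a lone spike in the middle of the record

# ideal "brick-wall" low-pass: discard the top half of the band
X = np.fft.rfft(x)
X[len(X) // 2:] = 0.0
y = np.fft.irfft(X, n=N)

# the bandlimited spike becomes a sinc pulse: symmetric ripple BEFORE
# and after the main peak, purely as a consequence of the steep cutoff
peak = int(np.argmax(np.abs(y)))
pre_ring = np.max(np.abs(y[:peak - 8]))   # energy well before the peak
post_ring = np.max(np.abs(y[peak + 8:]))  # energy well after the peak
print(peak, pre_ring > 0.005, post_ring > 0.005)
```

The ripple appears even though nothing was added to the signal: removing frequencies from a spike necessarily spreads it out in time.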
So effectively the entire debate about ringing in digital audio rests on a false premise - a hoax. The impulse response ripple is not something that happens in real-world sounds or in a properly designed audio reproduction chain. Ever since digital oversampling arrived in consumer products in the early 1980s, there has been no need for steep analog filter circuits with their attendant ringing. The problem very simply DOES NOT EXIST. The ringing generated artificially in an impulse signal is useful in that it provides a very high frequency stimulus to linear audio systems as a means of measuring high frequency and transient response. IT IN NO WAY, BY ITSELF, REPRESENTS THE TIME DOMAIN BEHAVIOR OF THE AUDIO REPRODUCTION CHAIN. An accurate audio reproduction system should fully render the impulse signal in all its pre and post ring glory, without alteration. Any audio system that eliminates or significantly alters the pre/post ringing present in the signal fed to it is not truly "high fidelity" - it is bandwidth limited.
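To illustrate the oversampling point: once the steep filtering requirement is relaxed, the ringing shrinks accordingly. A rough windowed-sinc sketch of my own (the tap counts and cutoff are arbitrary illustrative choices, not any product's actual filter):

```python
import numpy as np

def lowpass_taps(num_taps, cutoff):
    """Windowed-sinc low-pass FIR; cutoff as a fraction of Nyquist."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = cutoff * np.sinc(cutoff * n) * np.hamming(num_taps)
    return h / h.sum()

# more taps => steeper transition band => longer sinc tail
sharp = lowpass_taps(255, 0.90)   # steep cutoff: long, ringing response
gentle = lowpass_taps(31, 0.90)   # relaxed rolloff: short, compact response

def ringing_extent(h, thresh=1e-3):
    """Samples between the first and last significant taps."""
    big = np.nonzero(np.abs(h) > thresh * np.abs(h).max())[0]
    return int(big[-1] - big[0])

print(ringing_extent(sharp) > ringing_extent(gentle))  # prints True
```

Oversampling buys exactly this trade: the analog reconstruction filter can afford a wide, gentle transition band, so its time-domain response stays short.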
cj1965
No idea what this fellow is going on about... but I do have a lot of recording studio experience as a musician. I have A/B'd MQA vs non-MQA countless times on my two high-end audiophile systems at home. Any MQA version sounds to my ears like nothing more than a slightly different studio mix of the previously available version. It never sounds better... or worse.
" The way this expertise is simply thrown into the wind in these  discussions, flooded by arguments that are, put in diplomatical words, two or three floors below the level set by Craven and Stuarts, makes me cringe! " - pegasus

Really? Two or three floors below the level set by Craven?

TRANSLATION:
Just another useless "audiophile" comment aimed at attacking the messenger's credibility without any factual or objective basis whatsoever. This thread is very straightforward and simple - Craven et al. are using a phony argument about impulse response ripple to insinuate that such a phenomenon is present in everyday digital sound recordings. It is very clear from the Stereophile impulse response graphs that MQA is doing nothing more than adding dither noise to hide the pre and post ripple associated with the impulse input signal. Additionally, the "origami fold, unfold, deblur" BS does nothing but add phase delay (distortion) to the primary impulse peak (see the negative-going pulse just after the MQA-enabled DAC response that doesn't exist in the non-MQA Brooklyn DAC response).

If you have anything to say about the technical facts presented here, please direct your comments to those facts - possibly citing some facts of your own. Otherwise, spare us the "Mr. Craven et al. are several levels more brilliant than anyone participating in this thread" routine. Your unsubstantiated insults are not welcome. Play the ball - not the man.

As for critiques of the original Sony/Philips PCM approach with steep cutoff filters 35 years ago - no duh. It was clearly pointed out at the beginning of this thread that oversampling solved the "ringing problem" in digital audio before many of the readers who come here were even born. And no, Mr. Craven's "appetizing" filter (pun intended) doesn't resolve the distortion problems created in those early recordings.

There is no need for any of Mr. Craven's security encryption schemes disguised as sonic improvements. The only real need in the industry is to make the current lossless standard more efficient - some scheme to detect the dynamic envelope of each file to be streamed and apply only the bit depth necessary to transmit that particular file. It's a very simple concept, but because it doesn't involve "protecting the family jewels" and dramatically increasing profit, no one in the recording industry is bothering.
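The envelope-to-bit-depth idea could be sketched like this. Everything here is hypothetical - the `required_bits` helper and the -90 dB noise-floor figure are my own illustrative assumptions, not an existing codec:

```python
import numpy as np

def required_bits(samples, noise_floor_db=-90.0):
    """Hypothetical helper: smallest bit depth whose quantization noise
    stays below an assumed noise floor, given the file's peak level."""
    peak = np.max(np.abs(samples))
    if peak == 0:
        return 1  # digital silence needs essentially nothing
    # each bit buys roughly 6.02 dB of dynamic range below the peak
    peak_db = 20 * np.log10(peak)
    span_db = peak_db - noise_floor_db
    return int(np.ceil(span_db / 6.02))

quiet = 0.01 * np.sin(np.linspace(0, 1000, 48000))  # ~ -40 dBFS tone
loud = 0.99 * np.sin(np.linspace(0, 1000, 48000))   # near full scale

# a quiet file spans fewer dB above the floor, so it needs fewer bits
print(required_bits(quiet), required_bits(loud))
```

A real scheme would of course analyze short blocks rather than whole files and signal the chosen depth in the stream, but the core arithmetic is just this.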


" There is no need for any of Mr. Craven's security encryption schemes disguised as sonic improvements. " - cj1965

I agree (if that were the only aim of the idea).

" Play the ball - not the man. " - cj1965
... and try not to kick balls in a glass house.

Actually, I made several technical comments.
My main point: I wonder why no real, total AD/DA loop measurements are shown anywhere.
The other being measurement with "correct impulse responses", i.e. measuring a DAC with more than a nonexistent, abstract sequence of (one) sample.

Regarding your technical arguments elsewhere: since when is latency a distortion...? It may be a limiting factor for practicability reasons, or a simple inconvenience. But in replay audio it is (AFAIK) of no concern at all.
(It is of concern in live electronics, PAs and filtering.)

I can understand that e.g. a mastering engineer isn't too enthusiastic about an additional treatment of his files for (at the moment) dubious sonic advantages, or dubious advantages in the marketplace.

I asked MQA for MQA treatment of my recordings and got no answer. :-)

My main doubts when SACD and DVD-A came out were that:
a) the differences between DACs were IME considerably more audible than the difference between the SACD and CD layers on a good-quality Sony player (and I didn't prefer SACD in every aspect);
b) the user interface of DVD-A was f*ed up from the beginning - a sad tale. The copyright protection of DVD-A was also of Bob Stuart origin...;
c) there were some simply amazing PCM recordings on DAD (Classic Recordings), i.e. a non-copy-protected standard format. These sounded better (to me) than any DVD-A (or SACD) I have.
And - funny! - they were transferred from analog tapes.

I feel (...) that the stereo vs. multichannel and SACD vs. DVD-A format insecurities helped move audiophile audio into the back yard of consumer electronics interest.
MQA most probably won't help either. I'm sorry about that.

And I agree that sonically optimizing CD replay is much more relevant to 99.9% of the potential public and 99.9% of the recordings, i.e. for the music.
And I agree that there are some mind-boggling high-quality recordings in the CD format. So it's the quality of the recording itself that matters most, i.e. the microphones, electronics, rooms, setup and mastering.

I've read cj1965's original post several times and I'm not sure I understand or agree with his central premise - that "The impulse response ripple is not something that happens in real world sounds or in a properly designed audio reproduction chain."

Regardless, the new RME ADI-2 DAC allows you to select among five different filter settings that change the D/A impulse response. These range from traditional oversampling ringing to NOS, which is essentially perfect as regards impulse response (no ringing at all).

The five alternatives sound audibly different and can be chosen to suit the musical genre: e.g., the "Slow" option opens up orchestral textures and allows more "breathing" room, making the texture more realistic.

The NOS option is extremely accurate, almost painfully clear, and not to my taste at all for longer listening sessions. It does provide a shockingly realistic sound for popular recordings.

BTW your system must be audiophile quality to distinguish between the various options.

Try it and see - and, oh, it will be hard to find an ADI-2 DAC because they are new and so popular. But the same facility is available in the RME ADI-2 Pro, which has been available for several years. Perhaps other DACs on the market have this option - let me know if you know of others.

" Of which my main point: I wonder why no real, total AD/DA loop measurements are shown anywhere.
The other being the measurement with "correct impulse responses", ie. measuring a DAC not only a not existing, abstract sequence of (one) sample. " - pegasus

The above statement clearly demonstrates that you don't yet understand what an impulse response test really is. The folks at MQA have been banking on this misunderstanding to assist them with the smoke screen. Again, read the beginning of this thread. For emphasis (I don't know how to use bold type on this interface):

IN ITS TOTALITY, AN IMPULSE RESPONSE IS THE FULL CHARACTERIZATION OF THE TIME AND FREQUENCY DOMAIN BEHAVIOR OF ANY LINEAR, TIME INVARIANT SYSTEM UNDER TEST.

Please read the above over in your head several times. If any term contained therein is unclear or confusing, please let me know and I will do my best to explain it to you. Audio systems are considered by most engineers who build them to be "linear, time-invariant" systems - or at least, that is the goal.
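The all-caps statement above is the textbook LTI property: once you have the impulse response, the response to any input is determined by convolution. A minimal numpy sketch of my own (the 5-tap FIR is an arbitrary stand-in for "an audio system"):

```python
import numpy as np

rng = np.random.default_rng(0)

# an arbitrary LTI system: here, a simple 5-tap FIR filter
h = np.array([0.2, 0.5, 0.2, 0.08, 0.02])

def system(x):
    return np.convolve(x, h)[:len(x)]

# measure the impulse response once (assuming we know it is 5 taps long)...
impulse = np.zeros(64)
impulse[0] = 1.0
measured_h = system(impulse)[:len(h)]

# ...then predict the response to ANY signal by convolving with it
x = rng.standard_normal(64)
predicted = np.convolve(x, measured_h)[:64]
actual = system(x)

print(np.allclose(predicted, actual))  # prints True
```

That is what "full characterization" means: no second test signal adds information, as long as the system really is linear and time-invariant.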
The impulse response plots posted by Stereophile of the MQA and non-MQA DACs show latency distortion as well as added noise in the MQA file. Whether or not this is audible, or audibly pleasing/objectionable to the average listener, is and likely always will be a matter of endless debate. What is not in debate is that it IS DISTORTION. Any distortion you want to talk about in these kinds of linear system approximations has its origins in energy storage - whether it's a standing wave in a speaker cavity or a simple phase delay in a first-order crossover network. When a signal's voltage and current go out of phase, distortions result and are typically detected in the form of even- and odd-ordered harmonics. The more rapidly and intensely energy is stored, the more harmonics are produced, regardless of the level of damping (resistance/loss) applied between the storage elements. LATENCY = ENERGY STORAGE = DISTORTION. Simple phase delay networks that involve linear phase changes may appear to be "distortion free", but that depends entirely on the "working bandwidth", or frequencies of interest. In a linear, time-invariant system, time and frequency distortions are derived from one another - different representations of the same thing.
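The "linear phase" remark can be made concrete: a symmetric FIR network shifts every frequency by the same number of samples, which is why such networks can look "distortion free" within their working bandwidth. A small numpy sketch of my own (the taps are arbitrary):

```python
import numpy as np

# a symmetric (linear-phase) FIR delays every frequency by the same
# (N-1)/2 samples; its unwrapped phase response is a straight line
h = np.array([0.1, 0.2, 0.4, 0.2, 0.1])  # symmetric, 5 taps

H = np.fft.fft(h, 512)
phase = np.unwrap(np.angle(H[:200]))        # phase over the lower band
freqs = np.arange(200) * 2 * np.pi / 512    # matching frequencies (rad/sample)

# group delay = negative slope of the phase curve
group_delay = -np.gradient(phase, freqs)

print(np.allclose(group_delay, 2.0, atol=1e-6))  # constant 2-sample delay
```

A constant group delay is pure latency: every component arrives late by the same amount, so the waveform shape is preserved in-band.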

So your subsequent statement -
" Since when is latency a distortion...?It may be a limiting factor for practibility reasons, or a simple inconvenience. But in replay audio it is (AFAIK) of no concern at all. "

represents further proof that your knowledge level is lacking. There are plenty of filtering tricks one can apply to reduce undamped oscillation in a circuit - Linkwitz-Riley crossovers come to mind. There is a faint reference to this technique in the original Sound on Sound BS article put out to promote Mr. Craven's "apodizing filters" - essentially cascading buffered linear-phase filters to achieve rapid rolloff without some of the deleterious effects of single-stage steep crossovers. (I found no reference to Linkwitz in the original "Craven's a genius" article, btw.) But if you have actual experience with these types of circuits and have done distortion measurements on them, you will find that total harmonic distortion creeps up as the amplitude of the signal drops off in the transition band of the filter - buffered Linkwitz-Riley or not. There is no free lunch. And it looks like others are waking up to the fact that what Stuart and Craven are offering is less a miraculous "free lunch" than reheated leftover meatloaf.