Please Educate Me


If I can’t find the answer here, I won’t find it anywhere. 

Something I’ve wondered about for a long time: The whole world is digital. Some huge percentage of our lives consists of ones and zeros. 

And with the exception of hi-fi, I don’t know of a single instance in which all of this digitalia isn’t yes/no, black/white, it works or it doesn’t. No one says, “Man, Microsoft Word works great on this machine,” or “The reds in that copy of Grand Theft Auto are a tad bright.” The very nature of digital information precludes such questions. 

Not so when it comes to hi-fi. I’m extremely skeptical about much that goes on in high-end audio, but I’ve obviously heard the difference among digital sources. Just because something is on CD or 96/24 FLAC doesn’t mean that it’s going to sound the same on different players or streamers. 

Conceptually, logically, I don’t know why it doesn’t. I know about audiophile-type concerns like timing and flutter. But those don’t get to the underlying science of my question. 

I feel like I’m asking about ABCs but I was held back in kindergarten and the computerized world isn’t doing me any favors. Now, if you’ll excuse me, I have some work to do. I’ll be using Photoshop and I’ve got it dialed in just right. 
paul6001
Our DACs are processing digital information the same as our microwaves, yet we judge them very differently. The difference, I believe, is that music is an aesthetic pursuit - thus the method of delivery comes under more scrutiny.

... thus the SONIC RESULT comes under more scrutiny


@paul6001 - typing a letter on a keyboard is DIGITAL to start. The English language is made of letters which, when keyed, are digital, represented by 1s and 0s in the disk's memory. If you're talking about handwriting interpretation, that is another matter; there is some analysis and conversion taking place.

Stick to the arts, please.
Digital information is discrete; analogue information is essentially continuous. The pressure waves that reach your ears are continuous. How you convert the discrete information into continuous information by interpolation is one of the main issues.

Draw a set of axes and put some dots on them that form a sloping straight line, or a sine curve if you want. That's your digital information: exact, and the same for everyone. Now draw a continuous line that goes through those points. As long as it goes through those points, it matches the digital data. It's how smooth you are between those points that changes the final graph. Did you join each point up with a straight line, or did you "wiggle" a little between the points? Any time you convert from digital, which is discrete, to continuous, which is analogue, you have to fill in the missing line between the points. How you choose to interpret those missing pieces will change the "sound" of the output waveform. Differences in the execution of those algorithms, in terms of weighting, speed and so on, change the outcome of the waveform. Once the waveform is analogue and in wires, it is then open to corruption and interference from other sources. Ultimately, somewhere in a system the conversion from discrete to continuous MUST occur. It's all about how it occurs.
peteraudio21
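For anyone who wants to see that "join the dots" picture in numbers, here is a rough sketch (assuming Python with NumPy and SciPy, neither of which comes from this thread, and purely illustrative sample rates). It samples a tone, then rebuilds it two ways that both pass through every sample point exactly but differ in between:

```python
# Rough sketch of the "join the dots" idea: two curves through the same samples.
import numpy as np
from scipy.interpolate import CubicSpline

fs = 8_000                      # sample rate in Hz (illustrative values only)
f = 1_000                       # tone frequency in Hz
t_samples = np.arange(32) / fs  # 32 sample instants, about 4 ms of signal
samples = np.sin(2 * np.pi * f * t_samples)   # the discrete "dots"

# A dense time grid on which to "draw the continuous line"
t_dense = np.linspace(t_samples[0], t_samples[-1], 2_000)

linear = np.interp(t_dense, t_samples, samples)    # straight lines between dots
smooth = CubicSpline(t_samples, samples)(t_dense)  # a gentle "wiggle" between dots

# Both curves hit every sample exactly, yet they are not the same waveform.
print("max difference between the two curves:", np.max(np.abs(linear - smooth)))
```

Whether a real converter actually has that freedom between the points is exactly what the next reply takes issue with.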
Digital information is discrete; analogue information is essentially continuous. The pressure waves that reach your ears are continuous. How you convert the discrete information into continuous information by interpolation is one of the main issues.
You are mistaken. We know from the Fourier transform that a continuous analog wave can be represented using digital data, and the Nyquist-Shannon sampling theorem (it's a theorem, not a theory) proves that the reconstructed wave will be a perfect analog of the original, within the bandwidth of the system. There is no interpolation involved, with the possible exception of when an error is encountered and can't otherwise be corrected. That's actually pretty rare.
... Any time you convert from digital, which is discrete, to continuous, which is analogue, you have to fill in the missing line between the points ...
No, that's not how digital audio works. If you want to understand why, you might want to watch this video.
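To make that point concrete, here is a minimal sketch (again assuming Python with NumPy, with arbitrary illustrative frequencies) of the textbook Whittaker-Shannon reconstruction: for a tone below half the sample rate, summing sinc functions over the samples gives back the original waveform, with no freedom to "wiggle" between the points:

```python
# Minimal sketch: ideal (sinc) reconstruction of a bandlimited tone from its samples.
import numpy as np

fs = 8_000                        # sample rate (Hz)
f = 1_000                         # tone well below Nyquist (fs / 2 = 4 kHz)
n = np.arange(-200, 200)          # samples on both sides to limit truncation error
t_samples = n / fs
samples = np.sin(2 * np.pi * f * t_samples)

# Whittaker-Shannon: x(t) = sum over n of x[n] * sinc((t - n/fs) * fs)
t_dense = np.linspace(-0.002, 0.002, 1_000)
recon = np.array([np.sum(samples * np.sinc((t - t_samples) * fs)) for t in t_dense])

original = np.sin(2 * np.pi * f * t_dense)
print("max reconstruction error:", np.max(np.abs(recon - original)))
# The error is small and shrinks as more samples are included in the sum;
# it is truncation error, not a different "interpretation" of the waveform.
```

Real converters approximate this ideal with finite digital and analog filters rather than an infinite sum, which is where implementation differences can enter.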
Our DACs are processing digital information the same as our microwaves, yet we judge them very differently.
Our microwave is NOT interconnected with other gear... We don't use our ears to listen to microwaves...

Bits are always bits and waves... This is not and never can be argued against... Fourier theory, as cleeds says, is rigorous maths... Wave = digital... But there exist MANY types of DAC implementations, with differences...

And our ears listen to the results of multiple gear interactions (cables, amplifier, DAC, speakers, power conditioner, etc.) in their THREE working dimensions: mechanical, electrical and acoustical...

We don't listen to pure bits but to differently processed and sampled bits, and we listen to the end resulting sound of many interacting factors in their 3 working dimensions, in a specific system, room and house, with specific ears...

Why in the world would we expect the same results, whether they come from different DACs or perhaps from the same DAC family but in very different working conditions?

Because the implementation of these system working dimensions in our specific home is NEVER the same... I call these working dimensions the system's embedding controls...