@cleeds First, I learned something useful from the presentation you referenced: that step functions are no longer used in DACs. Thanks for that. However, I still have concerns, possibly arising from misunderstanding, which I submit for comment and correction.
It follows that some form of interpolation is used to convert the discrete sample values, taken at discrete time intervals, into an analogue signal. The alternative, a smooth, perfect fit to the data, appears from the presentation to require the SW to know which frequency it is dealing with (although it might be able to guess, for example by running an FFT on previous segments to inform that interpolation). This matters because the talk goes on to treat this result (a perfect, smooth waveform) as proven, which I do not grant, and it also appears to assume mathematically perfect observation (otherwise, how could there be a unique waveform that fits the data?).
There seem to me to be only two alternatives: (1) stick with safe linear interpolation, or (2) guess. But with a guess, the SW is sometimes going to guess wrong (perhaps on transients?), and then the output is going to be far more distorted than simple linear interpolation would produce.
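For concreteness, here is a small Python sketch of my own (not from the presentation) showing what the two candidates look like on a steady high-frequency tone. Everything in it - the tone, the sample rate, the record length - is an arbitrary illustrative choice, and the finite sinc sum is only a rough stand-in for a real reconstruction filter:

```python
import numpy as np

fs = 44100.0                  # sample rate (Hz), arbitrary illustrative choice
f0 = 10000.0                  # test tone (Hz), deliberately high so there are few samples per cycle
n = np.arange(256)            # 256 ideal, noise-free samples
x = np.sin(2 * np.pi * f0 * n / fs)

t = np.linspace(0, (len(n) - 1) / fs, 20000)   # dense grid for comparison

# Alternative (1): linear interpolation between the samples
linear = np.interp(t, n / fs, x)

# Band-limited (Whittaker-Shannon) reconstruction: a sum of sinc pulses, one per
# sample. Note it uses only the sample values and the sample rate, not f0.
sinc_recon = np.zeros_like(t)
for k, xk in zip(n, x):
    sinc_recon += xk * np.sinc((t - k / fs) * fs)

ideal = np.sin(2 * np.pi * f0 * t)
interior = (t > 100 / fs) & (t < 156 / fs)     # stay away from the record's edges
print("max |linear - ideal| :", np.abs(linear - ideal)[interior].max())
print("max |sinc   - ideal| :", np.abs(sinc_recon - ideal)[interior].max())
```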
Therein lies information loss. What is known comprises the samples and the intervals - the rest is processing. I hypothesize that the success of one processing algorithm over another represents digital's progress. Is this correct, Cleeds?
Oddly enough, I was just reviewing uniqueness theorems concerning representations of ordered semi-groups, which, assuming perfect information, is pretty much what we are dealing with here. A few points occur to me: (1) samples are taken over finite time, and are therefore averages of some kind; (2) samples are taken at intervals of finite precision, so there is temporal smearing; (3) samples are taken with finite amplitude precision, hence further uncertainty is built into each (averaged) sample.
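To put rough numbers on those three effects, here is a toy Python model of my own; the aperture, jitter, and word length are invented values, not measurements of any real converter:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 44100.0                 # nominal sample rate (Hz)
f0 = 1000.0                  # test tone (Hz)
bits = 16                    # quantizer word length
aperture = 1e-6              # (1) finite sampling window: average over 1 microsecond
jitter_rms = 1e-9            # (2) timing uncertainty: 1 ns RMS deviation from the grid
n = np.arange(4096)
t_nominal = n / fs

def signal(t):
    return np.sin(2 * np.pi * f0 * t)

# (2) temporal smearing: the actual sample instants wander around the nominal grid
t_actual = t_nominal + rng.normal(0.0, jitter_rms, size=n.shape)

# (1) finite-time sampling: each "sample" is an average over a short window
window = np.linspace(-aperture / 2, aperture / 2, 32)
averaged = signal(t_actual[:, None] + window[None, :]).mean(axis=1)

# (3) finite amplitude precision: round to the nearest quantizer step
step = 2.0 / 2 ** bits
recorded = np.round(averaged / step) * step

ideal = signal(t_nominal)
print("one quantizer step       :", step)
print("RMS deviation from ideal :", np.sqrt(np.mean((recorded - ideal) ** 2)))
```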
In physics, data are always presented with error bars in one or more dimensions. That leads one to ask: why does this engineer think he has points? Is he conflating this problem with talk of S/N ratio?
These considerations lead us, contrary to the presentation, to the conclusion that we do not have lollipop graphs of points; we have regularly spaced blobs of uncertainty, which are being idealized. However, this also shows that, regardless of the time allowed for sampling and reconstruction, there is an infinity of curves that fit the actual, imperfect data - not a unique curve by any means.
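To make the "blobs of uncertainty" point concrete, here is a sketch of my own construction: it builds two band-limited curves whose sample values both sit within half a quantizer step of the same recorded data, and which therefore cannot be told apart from the record, yet differ in between. Again, the frequencies and word length are arbitrary:

```python
import numpy as np

fs = 44100.0
f0 = 1000.0
bits = 16
step = 2.0 / 2 ** bits
n = np.arange(128)
t_samples = n / fs

# The "observed" record: a 1 kHz sine quantized to 16-bit steps
q = np.round(np.sin(2 * np.pi * f0 * t_samples) / step) * step

# An in-band perturbation smaller than half a step at every sample instant
d = 0.49 * step * np.sin(2 * np.pi * 5000.0 * t_samples)

def sinc_reconstruct(samples, t):
    """Band-limited curve passing through the given sample values (finite sinc sum)."""
    out = np.zeros_like(t)
    for k, s in zip(n, samples):
        out += s * np.sinc((t - k / fs) * fs)
    return out

t_dense = np.linspace(0, (len(n) - 1) / fs, 8000)
curve_a = sinc_reconstruct(q, t_dense)       # passes exactly through q
curve_b = sinc_reconstruct(q + d, t_dense)   # samples still within +/- step/2 of q

print("both sets of samples lie within half a step of q:", bool(np.all(np.abs(d) < step / 2)))
print("maximum difference between the two curves       :", np.abs(curve_a - curve_b).max())
```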
Again, I have Cleeds to thank for refining my understanding of digital. I agree that we can't discuss digital intelligently unless we understand how it works and how it doesn't. Please correct that which you find to be in error.