The current type of measurements, as practiced at ASR and similar sources, is simply not predictive of how a product will sound. This is patently obvious; there is no correlation with subsequent sound quality at all.
How can you possibly state this with such confidence without proving that you, or anyone, can reliably detect differences between two components after they have been tested by a site like ASR or an equivalent?
For amplifiers, ASR typically runs:
- THD+N from 100 mW to max power at 20 Hz, 500 Hz, and 1, 5, 10, and 15 kHz (see the THD+N sketch after this list)
- Power versus distortion at a single frequency, from 10 mW to max power
- Frequency response into 4 Ω, from which output resistance can be extracted (see the second sketch below)
- 32-tone intermodulation tests, which are meant to represent real music (a generation sketch follows the list)
- I also saw a 2 Ω test on a recent amp, from 50 mW and up
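To make the THD+N item concrete, here is a minimal sketch of the underlying math: capture a sine, notch the fundamental out of the spectrum, and compare what remains to the fundamental. The 20 Hz notch width and the synthetic test signal are my own assumptions, not ASR's actual analyzer settings.

```python
import numpy as np

def thd_n(signal, fs, f0, notch_width=20.0):
    """THD+N: RMS of everything left after notching out the fundamental,
    relative to the fundamental's RMS."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    fund = np.abs(freqs - f0) <= notch_width           # bins of the fundamental
    fund_rms = np.sqrt(np.sum(spectrum[fund] ** 2))
    resid_rms = np.sqrt(np.sum(spectrum[~fund] ** 2))  # harmonics + noise
    return 20 * np.log10(resid_rms / fund_rms)         # in dB (more negative is better)

# Synthetic check: 1 kHz sine with a -60 dB third harmonic plus a little noise
fs, f0 = 96_000, 1_000
t = np.arange(fs) / fs
x = np.sin(2*np.pi*f0*t) + 1e-3*np.sin(2*np.pi*3*f0*t) + 1e-5*np.random.randn(fs)
print(f"THD+N = {thd_n(x, fs, f0):.1f} dB")
```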
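The output-resistance extraction follows from modeling the amp as an ideal source behind a series resistance and comparing the same output level into two different loads; the 8 Ω / 4 Ω pair below is an assumption for illustration.

```python
def output_resistance(v_r1, v_r2, r1=8.0, r2=4.0):
    """Solve V_load = V_src * R_load / (R_load + R_out) for R_out,
    given the same output measured into two different loads."""
    k = v_r1 / v_r2                          # level ratio between the two loads
    return r1 * r2 * (1 - k) / (k * r2 - r1)

# Sanity check: a 0.2-ohm source sags more into the heavier 4-ohm load
r_out = 0.2
v8, v4 = 8 / (8 + r_out), 4 / (4 + r_out)    # relative output levels
print(output_resistance(v8, v4))             # -> 0.2 (ohms)
```

Applied per frequency point, this also yields the damping factor (nominal load impedance divided by R_out).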
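And a sketch of how a 32-tone stimulus might be built: log-spaced tones snapped to exact FFT bins, with randomized phases. The spacing, duration, and seed here are arbitrary choices on my part, not ASR's specific multitone.

```python
import numpy as np

def multitone(n_tones=32, fs=48_000, dur=1.0, f_lo=20.0, f_hi=20_000.0, seed=0):
    """Log-spaced tones snapped to exact FFT bins, random phases,
    peak-normalized so the composite stays within full scale."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_tones)
    freqs = np.round(freqs * dur) / dur      # integer cycles per capture
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    x = np.sin(2*np.pi*freqs[:, None]*t + phases[:, None]).sum(axis=0)
    return x / np.max(np.abs(x))

stimulus = multitone()
```

On analysis, anything that appears between the tone bins is distortion or noise, which is what makes this a reasonable stand-in for music.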
For DACs, add:
- Frequency response at various input sample rates and with each of the filters the DAC offers
- Jitter tests (see the J-test sketch after this list)
- All of the input types are usually tested, but not consistently
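For reference, the standard jitter stimulus is Julian Dunn's J-test: a tone at fs/4 with a 1-LSB square wave at fs/192 underneath. A rough sketch, assuming 48 kHz/16-bit; exact levels vary between implementations.

```python
import numpy as np

def j_test(fs=48_000, bits=16, dur=1.0):
    """J-test-style stimulus: a sine at fs/4 plus a 1-LSB square wave at
    fs/192, to provoke data-dependent (deterministic) jitter."""
    n = int(fs * dur)
    lsb = 1.0 / 2 ** (bits - 1)
    tone = (1 - 2 * lsb) * np.sin(2 * np.pi * (fs / 4) * np.arange(n) / fs)
    square = lsb * (1 - 2 * ((np.arange(n) // 96) % 2))  # 192-sample period = fs/192
    return tone + square

stimulus = j_test()
```

Jitter then shows up as sidebands around the fs/4 tone in the spectrum of the DAC's analog output.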
I am aware of some videos highlighting potential corner cases (at least with DACs) that ASR does not test for, but those may also be a non-issue with real music. Which brings me back to my first paragraph: with real music, can you detect these issues?
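One way to make that question answerable rather than rhetorical is a blind ABX run scored against chance with a one-sided binomial test; a minimal sketch (the 12-of-16 example numbers are mine):

```python
from math import comb

def abx_pvalue(correct, trials):
    """P(scoring >= `correct` by guessing): one-sided binomial tail at p = 0.5."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

print(abx_pvalue(12, 16))   # ~0.038 -- clears the usual p < 0.05 bar
```

By the usual p < 0.05 convention, 12 correct out of 16 is the lowest score that beats guessing.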