And of course the listeners were unaware of file type and metadata status for each trial, correct? And the trials were randomized to prevent observer bias? And the results were corroborated across multiple listeners, over multiple trials? And all listeners were unaware of the nature of the changes? And dummy A/B trials with identical files were mixed in randomly during testing to establish a baseline?
No? Not all of that was done? Not any of it? Because absent that minimal level of basic care in the testing protocol, this 'study' has zero credibility. It's just poor experimental design. Worse, the authors propose no causal mechanism attributable to metadata, no 'how' to account for any statistically meaningful differences.
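To be concrete about what that minimal level of care would look like, here is a rough sketch (Python, with made-up file names and trial counts) of how a randomized, blinded trial schedule with dummy identical-file controls might be generated. It is purely illustrative of the kind of protocol described above, not anything the authors actually did.

```python
# Hypothetical sketch of a randomized, blinded trial plan that mixes real
# A/B comparisons with dummy (identical-file) control trials.
# File names and trial counts are invented for illustration.
import random

FILE_WITH_META = "track_with_metadata.flac"   # assumed file names
FILE_NO_META = "track_stripped.flac"

def build_trial_plan(n_real=20, n_dummy=10, seed=None):
    """Return a shuffled list of (kind, file_a, file_b) trials.

    Real trials pair the metadata and stripped versions in random A/B order;
    dummy trials present the same file twice to establish a guessing baseline.
    The listener sees only 'A' and 'B', never the file names.
    """
    rng = random.Random(seed)
    trials = []
    for _ in range(n_real):
        pair = [FILE_WITH_META, FILE_NO_META]
        rng.shuffle(pair)                      # randomize which file is A vs. B
        trials.append(("real", pair[0], pair[1]))
    for _ in range(n_dummy):
        same = rng.choice([FILE_WITH_META, FILE_NO_META])
        trials.append(("dummy", same, same))   # identical files: baseline control
    rng.shuffle(trials)                        # interleave real and dummy trials
    return trials

if __name__ == "__main__":
    for i, (kind, a, b) in enumerate(build_trial_plan(seed=42), 1):
        # This is the administrator's key; the listener would only see the trial number.
        print(f"Trial {i:2d}: A={a}  B={b}  ({kind})")
```

The dummy trials matter because any listener who reports 'hearing a difference' between identical files at a meaningful rate tells you exactly how much of the reported effect is noise.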