In other words, although every comparison provides an answer, the real issue is: what was the question? In this case, I wanted to extract the best possible performance from each of the two different machines, and the answer I got was that they’re close enough in terms of musical accomplishment for external factors (required functionality, system context, room and programme material) to have a significant impact on any listener’s preference. This is no slam dunk. There is no ‘best’ – unless, of course, you want a streamer into the bargain, in which case the Studio Player is the only option here. Let’s not lose sight of the fact that this is a comparison of disc replay, not a comparison of two (non-equivalent) products – that first and still important major variable.
But back in the real world of people ranking or choosing audio equipment, how many of the comparisons on which such judgements are made involve even a fraction of the care, attention and time outlined above? How many variables, known and unknown, come into play? In a customer context, with a given system, some or all of those variables will be nullified – but then so too will any general conclusion to be drawn from that specific experience. Trying to gather second-hand opinion and reportage in order to construct some kind of absolute ranking, cobbling together conclusions from what you’ve heard at shows, what you’ve heard in different systems or what others describe, is a fool’s errand. It’s alarming not just how many online comments and commentators blithely ignore that, but how many reviewers and show reporters (who really should know better) ignore it too.
“In a time of deceit telling the truth is a revolutionary act.” George Orwell
If you want me to tell you which is best, the CH Precision D1.5 or the Wadax Studio Player, I can’t do it – and I’ve probably spent considerably more effort on the comparison than almost anybody else. Can I tell you which I prefer and in what context? Yes, I can – but that’s for the review, rather than this article. Likewise, if you want me to tell you which is likely to offer the best results in a given system and situation, with clearly defined listening and programme preferences, I can attempt that too.
To put that in context, let’s assume for a moment that I’d used an Audio Research Ref 10 line-stage in place of the L1/X1: it’s safe to assume that the voluminous, soft and warm-sounding ARC would have elicited quite a different reaction to these two players, the Wadax potentially becoming altogether too much of a good thing and the quick, clean and agile D1.5 offering the perfect antidote, restoring a degree of balance overall. In many ways that’s what makes the analysis of a product’s nature (and its optimisation) the most useful part of any review, rather than spurious attempts to ascribe a ranking in terms of absolute quality. The real goal of any review should be to decide whether the product being reviewed is a triangle, a square or a circle – so that readers can judge how well it will fit the hole in their system jigsaw. What this article questions is the validity of absolutist judgements, and what it underlines are their limitations: why there really is no best product – only the best choice for a given listener and situation.