Drawing Conclusions…

The awkward reality of close comparison.

By Roy Gregory

Comparing different equipment is the very lifeblood of audio, from manufacturers developing products, to retailers demonstrating them and customers choosing them. It’s the gold standard for reviewing and for ‘rating’ different products, whether the listener intends to buy them or not… With the rise of the internet forum, it seems that you are nobody without an opinion – and you know what they say about opinions. The problem is that it’s becoming increasingly difficult to gauge the value of any opinion, from the prejudice(s) of the holder to the basis of (and pressures on) its formation.

When it comes to audio, these days it almost seems that the firmer the opinion, the flimsier its foundations. Too many conclusions are drawn from second-hand experience or fleeting acquaintance: what you heard at a show or in a store, in a friend’s system or from a video on the internet. Drawing firm conclusions about any individual product is a lot more difficult than most people imagine; it takes time and considerable diligence, and the results will still demand qualification. If all you are interested in is stirring up noise on the interweb, the value of the opinion expressed matters not: in fact, if all you want to do is make noise, then the more outrageous the proposition, the better. But if you actually want to exchange useful information, then you need to be significantly more circumspect. Indeed, I question the whole notion of ‘absolute’ judgement. In doing so, I’m also undermining the whole basis of individual product reviews, questioning their value and the weight placed on them by the industry as a whole, from manufacturers to end users.

The central problem here is simple: except in extremely rare cases, it’s actually impossible to listen to a product – you can only listen to a system. That introduces a whole raft of interactions and methodological challenges that, whilst not insoluble, undermine the entire process because, for the most part, they’re either ignored or haven’t been considered. Depending on where the component fits into the system, the way that it performs is prey to the products around it, the products with which it interfaces directly and the room in which it finds itself. Then you need to consider the ‘gating’ of information. In any system situation, is one component limiting the potential or appreciation of the performance of others? In other words, is the system you are using actually capable of revealing the full performance envelope of the device being tested? And that’s before you get to environmental considerations, which in our case include ambient noise levels, the quality and consistency of the power supply and the characteristics of the room itself.

A world of “unknown unknowns”…

Given the complexity of the situation, perhaps it’s more productive to start from the other end of the telescope. There’s a fundamental principle we’d do well to remember: the value of any experimental (or observational/experiential) data depends absolutely on the methodology that generates it. In other words, if you’re looking for an answer, start by figuring out how to find it. The problem is, it ain’t always as straightforward as it seems. Not only do you have to consider all of the aspects of a given situation that might affect the outcome; there’s almost certainly a whole slew of stuff you haven’t thought of, or whose importance you may not even appreciate. It’s exactly the situation demonstrated by a previous blog (https://gy8.eu/blog/think-piece/singled-out/), and that’s something that really matters. When it comes to audio, we know a lot less than we think we do. How much of what we accept as standard practice now wasn’t even known about ten or twenty years ago? How might such yet-to-be-discovered issues affect our current judgements? Recent experience has demonstrated just how involved direct comparisons are – and how much you need to qualify the results. It simply isn’t as straightforward as pretty much everybody assumes…