Auroville 7
by Srajan Ebaen

Your boss sleeps with his secretary. The guy isn’t married, so the ethical ramifications don’t overly trouble you. When you first found out, it came as a surprise; the affair had been conducted very discreetly. But of late, love and business have conflicted in overt ways. There’s the matter of the extravagant, out-of-season raise. Detractors who mysteriously don’t return to work. Then there’s her new office: it’s three times the size it needs to be, while you and your co-workers have been clamoring for better accommodations for years. And what’s up with the excessive sick leave? It wouldn’t be connected to the mysteriously escalating level of her wardrobe, jewelry, and suntan, would it?

You never used to have this problem. What your boss did in his private life was none of your business. Live and let live. Now, however, his private and public lives intersect in ways that are becoming increasingly hard to tolerate. If you weren’t so hooked on your position and pay scale, you’d peruse the classifieds for a new gig. Sound familiar? Don’t say no too quickly. If you read a certain audio magazine that publishes measurements, you could find yourself in the midst of a murky mess where measurements and politics intersect.

There are measurements, and there are interpretations of what those measurements signify. Presumably signify, one hastens to add, since closer inspection reveals that these interpretations are weighted randomly, or not so randomly, and that’s where the politics come in. What do you say, for example, when you find that a speaker whose maker publishes 89dB sensitivity measures 94dB, while in the same issue another manufacturer’s speaker, also published as 89dB, measures 83dB? High sensitivity in a speaker is an admirable trait, as it demands less work from the partnering amplifier. Any firm with a two-way tower of unusually high sensitivity tends to promote that efficiency openly as a feature, but this one doesn’t. If the figure were true, don’t you think they would foam at the mouth and nail their success to your forehead with a sledgehammer-sized website and ad campaign?
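The arithmetic behind those numbers is worth spelling out, because sensitivity is logarithmic: every 3dB of lost sensitivity roughly doubles the amplifier power needed to reach the same loudness. A minimal sketch of the calculation (the function name and the 100dB target level are my own illustration, assuming the conventional 1-watt/1-meter sensitivity rating):

```python
def watts_for_spl(sensitivity_db, target_spl_db):
    """Amplifier power (watts) needed to reach a target SPL at 1 meter,
    given a speaker's sensitivity in dB SPL for 1 watt at 1 meter.
    (Hypothetical helper for illustration; ignores room gain and distance.)"""
    return 10 ** ((target_spl_db - sensitivity_db) / 10)

# The two speakers from the text, both published at 89dB, aiming for 100dB:
print(round(watts_for_spl(94, 100), 1))  # measured 94dB -> ~4.0 watts
print(round(watts_for_spl(83, 100), 1))  # measured 83dB -> ~50.1 watts
```

An 11dB spread between two speakers carrying the same published rating is thus not a rounding error; it is more than a twelvefold difference in the amplifier power required.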

Say your test gear measured optimistically. Why didn’t it apply the same bonus "headroom" to the other speaker, the one that measured more poorly than it should have? That the first maker is an established player whose speaker graces the cover, while the second is a relative newcomer, wouldn’t, perchance, have anything to do with it? Why even indulge in such ugly conjecture? Isn’t this just more tired conspiracy theory? No, because there are clear indications of undue flexibility between so-called hard measurements on the one hand (which may be less hard than the optimistic brigade have a right to insist upon) and their interpretation on the other.

A high-profile amplifier’s time domain behavior doesn’t look as good as its claimed superiority would predict. The commentary waves it away as insignificant, not audible and hence of no concern. This raises the question: if time domain behavior were of merely academic interest, why spend money and time measuring it in the first place? When you encounter a veritable maze of zigzags whereby the commentary accompanying other time domain measurements pronounces them inaudible and academic in some instances, but audible and thus troublesome in others, shouldn’t you be excused for wrinkling your brow?

You find highly lauded loudspeakers with waterfall plots so terrible that anyone knowing how to read them would call the associated product a ringing mess. Surprisingly, the commentary doesn’t mention it. Instead, those areas in the measurements that conform to expectations and standards (and perhaps conveniently correlate with the independent reviewer’s written findings) are emphasized. There are speakers with obviously flawed impulse responses, yet the commentary applauds their coherence. The list goes on, but by and large it remains invisible to the audience.

After all, most readers do not know how to interpret the published graphs. Commentary is genuinely useful, even necessary, to bridge the gap between engineering savvy and hobbyist enthusiasm, provided it is applied from a great and uninvolved distance, rigorously, coldly, and by a single set of standards. But once there are indications of a sliding scale of standards (I believe "double standard" is the operative term), the highly prized halo of objectivity inherent in measurements becomes uncomfortably tainted.

Once this occurs, the impersonal blade that cuts sharply, cleanly, and predictably becomes a far more dubious weapon, one that inflicts pain or doles out choice cuts of prime rib depending on the motivations of the hand wielding it.

Take one or two years’ worth of back issues and focus on one category of review: loudspeakers, amps, it doesn’t matter. Note the discrepancies between the minor or major spin the commentary applies to the ultimate conclusion vis-à-vis what the graphs mean when read sans commentary. Note the power of, say, "A highly commendable set of measurements for a very affordable product" versus "I’m deeply troubled by the amount of higher-order distortion in the upper midrange band." If these disparate conclusions were affixed to the same measurements, wouldn’t you begin to question their usefulness if objectivity was the professed aim? If you think I’m tossing words for the fun of verbal juggling, try systematically studying multiple issues. Can’t correlate the commentary to the graphs because you don’t know how to read them? You can still appreciate how selective focus and a canny use of emphasis shift meaning. What was bad can become good. What was marginal can assume gravitas.

I hear from more and more manufacturers who view this subject with a great deal of concern. They grow weary of submitting gear for review. They know how it measures in their facility, and are increasingly uncertain about how it will measure elsewhere. Worse, they no longer feel assured that the measurements will be allowed to speak for themselves. They’re beginning to suspect that the interpretations accompanying their graphs won’t be neutral, but charged with personal polarity.

Evidence amassed over years shows that measurements often do not correlate with the reviewer’s findings. Some writers have acquired the habit of building cautionary self-protection into their articles: "While I predict that the test bench will find much to complain about, I still stand by my findings." How useful does that make the measurements? Rather, it allows the reviewer to stand tall on his objectivity soapbox, whence he can belittle subjectivist reviewers and cultivate an air of infallibility.

Is there a lurking preference for good measurements even though evidence suggests they may be rather meaningless? Should manufacturers tweak submitted components to ensure they measure better and thus earn positive, encouraging commentary? From the manufacturer’s perspective, this becomes an uncomfortably valid concern. What if he knows his gear won’t make the grade on the test bench even though sales and owner commentary suggest it doesn’t matter? Why would he want to risk public humiliation, or have a reviewer’s highly positive findings called into question?

The long and short of it? If you cannot decipher the graphs for yourself, you’re better off abstaining from the accompanying commentary as well. Stick with the ears, not the eyes. That is to say, stick with what the guy who did the actual listening had to say, not the one applying load resistors or making microphone placement decisions. If you don’t, you could get caught on the firing line where measurements and politics intersect. Since you won’t see the bullets coming (for that, you’d need to compare the graphs against their interpretations), you’d be defenseless, vulnerable, and impressionable. Why put yourself in harm’s way?