Your boss sleeps with his secretary. The guy isn't married, hence the ethical ramifications don't overly trouble you. When you first found out, it came as a surprise, as the affair had been conducted very discreetly, but of late, love and business have conflicted in overt ways. There's the matter of the extravagant, out-of-season raise. Detractors who mysteriously don't return to work. Then there's her new office: it's three times the size it needs to be, while you and your co-workers have been clamoring for better accommodations for years. And what's up with the excessive sick leave? It wouldn't be connected to the mysteriously escalating level of her wardrobe, jewelry, and suntan, would it?
You didn't use to have this problem. What your boss did in his private life was none of your business. Live and let live. Now, however, his private and public lives intersect in ways that are becoming increasingly hard to tolerate. If you weren't so hooked on your position and pay scale, you'd peruse the classifieds for a new gig. Sound familiar? Don't say no too quickly. If you read a certain audio magazine that publishes measurements, you could find yourself in the midst of a murky mess in which measurements and politics intersect.
There are measurements, and there are interpretations of what those measurements signify; presumably signify, one hastens to add, when closer inspection reveals that these interpretations are randomly, or not so randomly, weighted, and that's where the politics come in. What do you say, for example, when you find that a speaker whose maker publishes 89dB sensitivity measures 94dB, while in the same issue another manufacturer's speaker, also published as 89dB, measures 83? High sensitivity in speakers is an admirable trait, as it demands less work from the partnering amplifier. Any firm with a two-way tower of unusually high sensitivity tends to promote high efficiency openly as a feature, but this one doesn't. If it were true, don't you think they would foam at the mouth and nail their success to your forehead with a sledgehammer-sized website and ad campaign?
Say your test gear measured optimistically. Why didn't it apply the same bonus "headroom" to the other speaker, the one that measured more poorly than it should have? That the first maker is an established player (and their speaker is on the cover) while the second is a relative newcomer wouldn't, perchance, have anything to do with it? Why even indulge in such ugly conjecture? Isn't this just more tired conspiracy theory? No, because there are clear indications of undue flexibility between so-called hard measurements on the one hand (which may be less hard than the optimistic brigade have a right to insist upon) and their interpretation on the other.
A high-profile amplifier's time-domain behavior doesn't look as good as its claimed superiority would predict. The commentary waves it away as insignificant, not audible and hence of no concern. This raises the question: if time-domain behavior were of merely academic interest, why spend money and time measuring it in the first place? When you encounter a veritable maze of zigzags whereby the commentary accompanying other time-domain measurements pronounces them inaudible and academic in some instances, but audible and thus troublesome in others, shouldn't you be excused for wrinkling your brow?
You find highly lauded loudspeakers with waterfall plots so terrible that anyone who knows how to read them would call the associated product a ringing mess. Surprisingly, the commentary doesn't mention it. Instead, those areas in the measurements that conform to expectations and standards (and perhaps conveniently correlate with the independent reviewer's written findings) are emphasized. There are speakers with obviously flawed impulse responses, yet the commentary applauds their coherence. The list goes on, but by and large it remains invisible to the audience.
After all, most readers do not know how to interpret the published graphs. A commentary is truly useful, and necessary to bridge the gap between engineering savvy and hobbyist enthusiasm, if it is applied rigorously and coldly, from a great and uninvolved distance, with a single set of standards. But once there are indications of a sliding scale of standards (I believe "double standard" is the operative term), the highly prized halo of objectivity inherent in measurements becomes uncomfortably tainted.
Once this occurs, the impersonal blade that cuts sharply, cleanly, and predictably becomes a far more dubious weapon by which to inflict pain or dole out choice cuts of prime rib, depending on the motivations behind the hand wielding said blade. Take one or two years' worth of back issues and focus on one category of review; loudspeakers, amps, it doesn't matter. Note the discrepancies between the commentary's minor or major spins applied to the ultimate conclusion, vis-à-vis looking at the graphs to extrapolate what they mean sans commentary. Note the power of, say, "A highly commendable set of measurements for a very affordable product" versus "I'm deeply troubled by the amount of higher-order distortion in the upper midrange band." If these disparate conclusions were affixed to the same measurements, wouldn't you begin to question their usefulness if objectivity were the professed aim? If you think I'm tossing words for the fun of verbal juggling, try systematically studying multiple issues. Can't correlate the commentary to the graphs because you don't know how to read them? You can still appreciate how a selective focus and canny use of emphasis shift meaning. What was bad can become good. What was marginal can assume gravitas.
I hear from more and more manufacturers who view this subject with a great deal of concern. They grow weary of submitting gear for review. They know how it measures in their facility, and are increasingly uncertain about how it will measure elsewhere. Worse, they no longer feel assured that the measurements will be allowed to speak for themselves. They're beginning to suspect that the interpretations accompanying their graphs won't be neutral, but charged with personal polarity.
Evidence amassed over the years shows that measurements often do not correlate with the reviewer's findings. Some writers have acquired the habit of building cautionary self-protection into their articles: "While I predict that the test bench will find much to complain about, I still stand by my findings." How useful does that make the measurements? Rather, it allows the reviewer to stand tall on his objectivity soapbox, whence he can belittle subjectivist reviewers and garner support for greater infallibility.
Is there a lurking preference for good measurements even though evidence suggests they may be rather meaningless? Should manufacturers tweak submitted components to ensure they measure better and thus earn positive and encouraging commentary? From the manufacturer's perspective, this becomes an uncomfortably valid concern. What if he knows his gear won't make the grade on the test bench even though sales and owner commentary suggest it doesn't matter? Why would he want to risk public humiliation, or have a reviewer's highly positive findings put into question?
The long and short of it? If you cannot decipher graphs for yourself, you're better off abstaining from the accompanying commentary as well. Stick with the ears, not the eyes. That is to say, stick with what the guy who did the actual listening had to say, not the one applying load resistors or microphone placement decisions. If you don't, you could become caught on the firing line where measurements and politics intersect. Seeing that you won't see the bullets coming (for that you'd need to compare the graphs against their interpretations), you'd be defenseless, vulnerable, and impressionable. Why put yourself in harm's way?