Quantitative analysis, useful though it is, is frequently problematic. Grades are an excellent example.
Imagine an airline, XYZ Airlines, advertising "Fly XYZ, where 9 out of 10 planes land safely!" Ninety percent: an A minus! Pretty good, no? Or how about this: "Eat at Joe's, where only 1 in 10 people get sick." Now imagine your favorite music. What would happen if the musicians played one of every 100 notes incorrectly? Get those bozos off the stage!
Suppose you buy a toaster. When you get it home, will you accept that it "mostly" makes toast? Or "eventually" makes toast? Probably not; if it doesn't make perfect toast on demand, you'll be back at the store with attitude. And when you're there, will you care that they worked really hard on it? Or that they had a difficult week? I think not. That is the problem with clinging to grades, to numbers; yet people do.
That is, as long as it suits them. When a management consulting team moves into the workplace and purports to measure performance, the typical employee response is "You can't really measure what we do." As Kenneth Blanchard puts it, "Well, if we can't measure any difference, then why don't we just eliminate it?" Suddenly, people get very interested in measuring performance.
On the other hand, all too often a "survey" or "study" is reduced to a questionnaire. The data is easy to collate, yes, but such approaches, all too common, amount to little more than leading questions. The form's authors assume the positions they will find, and thereby dictate them. Thus, what should have been a learning experience becomes nothing more than self-affirmation, not merely ignoring but actively preventing all unforeseen views, along with the messy, time-consuming process of collecting that information.
What’s the point of setting out to learn what you think you already know?
Excellence and its measurement don’t match.