Why We Make Mistakes, Joseph T. Hallinan

P. 2

To Err Is 90 Percent Human

…. We all know the cliché “To err is human.” And this is true enough. When something goes wrong, the cause is overwhelmingly attributed to human error: airplane crashes (70 percent), car wrecks (90 percent), workplace accidents (also 90 percent). You name it, and humans are usually to blame. And once a human is blamed, the inquiry usually stops there. But it shouldn’t—at least not if we want to eliminate the error.

In many cases, our mistakes are not our fault, at least not entirely. For we are all afflicted with certain systemic biases in the way we see, remember, and perceive the world around us, and these biases make us prone to commit certain kinds of errors.

P. 5

The misattribution of blame is one reason we make the same mistakes over and over again. We learn so little from experience because we often blame the wrong cause. When something goes wrong, especially something big, the natural tendency is to lay blame. But it isn’t always easy to figure out where the fault lies. If the mistake is big enough, it will be analyzed by investigators who are presumed to be impartial. But they are plagued by a bias of their own: they know what happened. And knowing what happened alters our perception of why it happened—often in dramatic ways. Researchers call this effect hindsight bias. With hindsight, things appear obvious after the fact that weren’t obvious before the fact. This is why so many of our mistakes appear—in hindsight—to be so dunderheaded. (“What do you mean you locked yourself out of the house again?”) It’s also why so many of the “fixes” for those mistakes are equally dunderheaded. If our multitasking driver wrecks the car while fiddling with the GPS device on the dashboard, the driver will be blamed for the accident. But if you want to reduce those kinds of accidents, the solution lies not in retooling the driver but in retooling the car.


P. 25

Meaning Matters; Details Don’t
Why should we remember faces, but not the names that go with them? Part of the answer is that when it comes to memory, meaning is king.

28

Names, it turns out….don’t mean much, and as a consequence we tend to forget or confuse them.

34
Yet, as overloaded as we are with things to remember, we often persist in picking hiding places we are doomed to forget. In one survey, more than four hundred adults were asked whether they had recently found an object that they had lost or misplaced. Of those who had recalled such a recent episode, 38 percent reported finding the item in a place that was not “logical.” Why would such a high percentage of lost items be found in illogical places? Researchers concluded that people mistakenly believe that the more unusual a hiding place is, the more memorable it will be. But the opposite turns out to be true: unusualness doesn’t make a hiding place more memorable—it makes it more forgettable.


P. 64

Hindsight Isn’t Twenty-Twenty

…. In fact, one of the most significant sources of human error is hindsight bias. Basically, hindsight bias comes down to this: knowing how things turned out profoundly influences the way we perceive and remember past events. This is true no matter how trivial they may be. …knowing how the event turned out alters our recollection of it.

Even historians are prone to this error. It is much easier after an event—whether it is the Battle of Gettysburg or the bombing of Pearl Harbor—to sort the relevant factors from the irrelevant. Those who write of these events will almost invariably give the outcome an appearance of inevitability. But this type of compelling narrative is achieved by suppressing some facts in favor of others—a process known as creeping determinism.

Near the conclusion of her influential history of the attack on Pearl Harbor, for instance, the noted military analyst Roberta Wohlstetter had this to say: “After the event, of course, a signal is always crystal clear; we can now see what disaster it was signaling, since the disaster has occurred. But before the event it is obscure and pregnant with conflicting meanings.”


P. 78

“Multitasking” is a term cribbed from the computer world; it describes a technique by which a computer can split up its work into many processes or tasks. This allows us to, say, run Microsoft Word while downloading something from the Internet. Most of us think our brains can work in the same way. Indeed, multitasking has become the hallmark of the modern workplace. Gloria Mark, a professor at the University of California, Irvine, who studies multitasking in the workplace, recently conducted a field study of employees at an investment management company on the West Coast. She and a colleague watched as the workers went about daily tasks in their cubicles; they noted every time the workers switched from one activity to another—say from reading an e-mail that popped up in their inbox to making a phone call to jotting something down on a Post-it note. They found that the workers were frequently interrupted—on average, about twenty times an hour. This means the employees were, on average, able to focus on one task for no more than about three minutes.

But multitasking is one of the great myths of the modern age. Although we think we are focusing on several activities at once, our attention is actually jumping back and forth between the tasks. Not even a computer, by the way, can multitask; it actually switches back and forth between tasks several thousand times per second, thus giving the illusion that everything is happening simultaneously.*

*Some modern computers do have multiple processors, and these truly do allow a computer to perform multiple tasks at the same time; like a person with two or more heads, each processor can work (or perform) independently. But in the old days, when the term “multitasking” was coined, computers had just a single processor.
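As an aside (my own toy sketch, not anything from the book), the time-slicing the footnote describes can be mimicked in a few lines of Python: a round-robin loop gives each task one small slice of work at a time, so the two tasks appear to run together even though only one is ever running at any instant.

# A toy single-core "scheduler": two tasks appear to run simultaneously,
# but the loop below only ever advances one of them at a time.
def count_task(name, n):
    # A task that does n small units of work, yielding after each one.
    for i in range(1, n + 1):
        yield f"{name}: step {i}"

def run_round_robin(tasks):
    # Interleave the tasks one step at a time, round-robin fashion.
    queue = list(tasks)
    while queue:
        task = queue.pop(0)          # take the next task in line
        try:
            print(next(task))        # let it run for one time slice
            queue.append(task)       # send it to the back of the line
        except StopIteration:
            pass                     # task finished; drop it

run_round_robin([count_task("word processor", 3), count_task("download", 3)])
# Output alternates: word processor step 1, download step 1, word processor step 2, ...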

P. 79

Our minds provide us with the same illusion, but not, unfortunately, the same results. There is no such thing as dividing attention between two conscious activities. Under certain conditions we can be consciously aware of two things at the same time, but we never make two conscious decisions at the same time—no matter how simple they are. Sure, you can walk and chew gum at the same time. And you can drive and talk to a passenger at the same time, too—but only after so much practice that the underlying activity (walking or driving) becomes almost automatic. But we don’t practice most of our day-to-day activities nearly enough for them to become automatic. The next time you’re at a restaurant, for instance, try carrying on a conversation with your dinner guests while trying to figure the tip on the bill.

Multitasking = Forgetting

Indeed, the gains we think we make by multitasking are often illusory. That’s because the brain slows down when it has to juggle tasks. We gain nothing, for instance, by ascending the stairs two steps at a time if the additional effort slows us down so much that we end up taking as long to climb them as we would if we had taken them just one step at a time. In essence, this is what often happens when we try to perform two mental tasks simultaneously. In one experiment, researchers asked students to identify two images: colored crosses and geometric shapes, like triangles. Seems simple enough, right? When the students saw colored crosses and shapes at the same time, they needed almost a full second of reaction time to press a button—and even then they often made mistakes. But if the students were asked to identify the images one at a time—that is, the crosses first, then the forms—the process went almost twice as quickly.

Switching from task to task also creates other problems. One of them is that we forget what we were doing—or planned to do. That to-do list in our brains is known as working memory; and it keeps (80) track of all the short-term stuff we need to remember, like the e-mail address someone just mentioned to us. But the contents of our working memory can evaporate like water in a desert; after only about two seconds, things begin to disappear. And within fifteen seconds of considering a new problem, researchers have shown, we will have forgotten the old problem. In some cases, the forgetting rate can be as high as 40 percent. This obviously presents the potential for big mistakes…..

Another cost is downtime. When we’re working on one thing and are interrupted to do another thing, it takes us a while to refocus on what we were originally working on. Workplace studies have found that it takes up to fifteen minutes for us to regain a deep state of concentration after a distraction such as a phone call. These findings square with what researchers found when they looked at the work habits of employees at Microsoft. In that study, a group of Microsoft workers took, on average, fifteen minutes to return to serious mental tasks, like writing reports or computer code, after responding to incoming e-mails. Why so long? They typically strayed off to reply to other messages or browse news, sports, or entertainment Web sites.

So long as such distractions are confined to our cubicles, most of us are probably safe. But in the real world, researchers are discovering, (81) multitasking can be quite dangerous. Take something as simple as talking on your cell phone while driving. In 1999, the U.S. Army studied what effect this has on driving ability. Its conclusion? “All forms of cellular phone usage lead to significant decreases in abilities to respond to highway traffic situations.”

This was especially true, the Army noted, for older drivers. Age, it found, plays a significant role in the distracting effect of cellular phone conversations. The older we are, the harder it becomes to screen out distractions. And you don’t have to be that old before this ability declines: the dropoff is noticeable after the age of forty.

Bridge? What Bridge?

Even more worrisome, divided attention can produce a dangerous condition known as inattentional blindness. In this condition it is possible for a person to look directly at something and still not see it. The effect was noted by researchers in the early 1990s; in separate experiments, they found that a surprising number of participants were completely unaware of certain objects presented to them in visual tests. This tendency held true not only when the presented objects were small but when they were large and, presumably, quite obvious.


P. 92

How We Frame Issues

….A great many day-to-day errors come about because we frame, or look at, an issue in the wrong way.

93

Through a series of experiments they [Daniel Kahneman and Amos Tversky] demonstrated that how we frame an issue can greatly affect our response to it.

P. 95

Holding On to a Sure Thing

Kahneman and Tversky’s findings point to what seems to be a consistent pattern in our decision making. In situations where we expect a loss, we are prone to take risks. When the disease example above, for instance, is framed in terms of deaths, we choose the risky alternative where there is at least some prospect of saving everyone. But when we are considering gains, we become more conservative; we simply want to hold on to a sure thing.
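The disease example the passage refers to is Kahneman and Tversky’s well-known scenario in which 600 people face a deadly outbreak; a quick expected-value check (my own sketch of the arithmetic, not text from the book) shows that the sure thing and the gamble are numerically identical in both frames, so only the wording differs.

# Gain frame:  Program A saves 200 for sure; Program B saves all 600 with
#              probability 1/3 and saves no one with probability 2/3.
# Loss frame:  Program C means 400 die for sure (200 saved); Program D means
#              nobody dies with probability 1/3 and all 600 die with probability 2/3.
def expected_saved(outcomes):
    # Expected number of people saved, given (probability, number saved) pairs.
    return sum(p * saved for p, saved in outcomes)

programs = {
    "A": [(1.0, 200)],
    "B": [(1 / 3, 600), (2 / 3, 0)],
    "C": [(1.0, 200)],
    "D": [(1 / 3, 600), (2 / 3, 0)],
}
for name, outcomes in programs.items():
    print(name, expected_saved(outcomes))   # every program works out to 200.0

Most people nonetheless pick the sure thing (A) when the choice is framed as lives saved and the gamble (D) when it is framed as deaths, which is exactly the pattern described above.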

This pattern seems to stem in part from the human approach to risk perception.

“There are two systems for analyzing risk: an automatic, intuitive system and a more thoughtful analysis,” says Paul Slovic, professor of psychology at the University of Oregon. “Our perception of risk lives largely in our feelings, so most of the time we’re operating on system No. 1.”

P. 97

Framing and Money

….most of us don’t get out a calculator and tally the risk of various options in mathematical terms. We rely, as Paul Slovic put it, on system No. 1 — we want to know how risky an investment seems. And that assessment, in turn, often depends on how our potential investment is framed.

P. 98

How Time Affects Our Decisions

Many factors can affect the way we frame our decisions. One of the least obvious is time. When the consequences of our decisions are far-off, we are prone to take bigger gambles; but when consequences are more immediate, we often become more conservative….

Time horizons have been shown to affect our decisions in other ways. After the terrorist attacks of September 11, 2001, for instance, time horizons for many people in the United States shortened. People, especially those in big cities like New York, increasingly adopted a “live for the day” attitude. Activities with long-term benefits, like diet and exercise, were out; treating oneself well in the here and now was in. One result: the diet chain Jenny Craig reported “a huge wave of cancellations.”

Timing even affects our choices about the food we eat, the clothes we buy, and the movies we watch.

P. 110

We Skim

Overlooked mistakes are so common…they are called “proofreader’s errors.” …these humdrum errors reveal some interesting quirks about the way human perception works. Perception, above all, is economical; we notice some things and not others. This means that our attention is not distributed as evenly as we might think….

Indeed, this tendency is found often enough that it suggests a second, closely related principle: we skim. And the better we are at (111) something, the more likely we are to skim.

This tendency has profound implications for understanding why we don’t detect many of our errors: as something becomes familiar, we tend to notice less, not more. We see things not as they are but as (we assume) they ought to be.

P. 114

The Importance of Context

…we rely on context to guide our perception of everyday events. Context is the great crutch: we lean on it much more than we know….

Encountering something or someone out of context makes recognition far more difficult; it becomes much harder to place a face.


P. 130

How Much of What You Say Is True?

The most common alteration was omitting important details, which was reported in 36 percent of the stories. Exaggeration and minimization occurred about equally, appearing in 26 percent and 25 percent of the stories. And 13 percent of the stories contained outright fabrication— information that was not part of the original event. Moreover, students tailored their stories not only to their audience but, importantly, to their purpose. With stories told to convey information, for instance, students tended not to exaggerate—though they did tend to minimize and omit important details. With stories told to entertain, on the other hand, the students tended to do the opposite: they would exaggerate and add details, but not minimize or omit important information.

Lying—or “Impression Management”?

What could account for so much fibbing?

Part of the explanation, Tversky believes, lies in the assumptions we make about the purpose of the stories we hear.

p. 131

“We have this Anglo-Saxon idea that talk is about information,” Tversky told me one autumn afternoon as she strolled through New York’s Central Park. But it’s not—at least not all of the time. Instead, she said, think of conversation not as a means of truth telling but as a form of behavior designed to achieve a particular end.

We think conversation is about imparting information—but it’s not. Sometimes it’s a form of impression management.

“If you think about talk as a behavior, we behave in ways that will make people think certain things or act in certain ways toward me—to like me, or to think that I’m a smart person or a strong person or whatever.”

In this sense, she said, the purpose of conversation isn’t to convey the truth—it’s to create an impression. So accuracy tends to take a backseat to impression management. 


P. 138

Many studies over the years have shown that men and women perceive and remember aspects of their lives in different ways— often from a very young age—and that the roots of some of our mistakes can be traced back, at least in part, to these differences in perception and memory. Take, for instance, the way men and women perceive risk. Across a variety of areas, women have been shown to be more risk averse than men—a finding that appears to be reflected in the Army’s friendly-fire study. When the female soldiers were confronted with a risky situation—shoot or don’t shoot—they typically chose the more risk-averse option: don’t shoot.

P. 139

In particular, they have focused on five types of risks:

1. Financial
2. Health and safety
3. Recreational
4. Ethics
5. Social

A few years ago they gave questionnaires to more than five hundred men and women, from teenagers to people in their mid-forties. For each category of risk, the people in the study were asked roughly twenty questions…. They were asked to answer each question by assigning it a risk rating on a scale from 1 to 5, with 1 being “not at all risky” and 5 being “extremely risky.”

In four of the five areas examined, Weber found that women appeared to be significantly more risk averse than men. (The one exception was the area of social risk.) Men were also significantly more likely to engage in the most risky behaviors than were women (again, with the exception of social risk).

The interesting question, of course, is: Why? To find out, Weber and her colleagues asked their subjects, in effect, to provide a cost-benefit analysis of each type of activity. How much risk did they perceive to be involved? And how much benefit did they think that amount of risk would bring them? When she analyzed the answers, Weber found something surprising: men weren’t necessarily more risk seeking; they just valued the benefits of that risk more (140) than the women did (the one exception, again, being the social category).

…. But the perceived benefits of such an activity, she found, can be quite different, and this difference in perception can often explain why women won’t take some chances that men will: they think they’re not worth the risk.

Lying and Lottery Tickets

Men and women not only perceive some aspects of the world differently; they often perceive themselves differently. When it comes to making mistakes, for instance, women appear to be harder on themselves than men are. For example, studies have shown that men tend to forget their mistakes more readily than women do. And mistakes appear to dog women in ways that do not bother men. In interviews, for instance, women indicate that situations involving failure affect their self-esteem more than do situations involving success; no such difference has been reported for men.

For many traits women have also been found to be less optimistic (or perhaps more realistic) than men….(141) Even when they tell lies, men and women have been shown to lie in different ways. College men tell more lies about themselves…tending to exaggerate their plans and achievements…. College women, on the other hand, tend to lie to enhance another person.

P. 141

A Computer Error

Other types of gender-related errors are less obvious. Take, for instance, the way we use computers. Like math and war, the computer world is male dominated. After peaking in 1985 at 37 percent, the share of bachelor’s degrees in computer science awarded to women has steadily fallen. Today, women receive just over 22 percent of them, or about one out of every five.

This gap intrigues the Microsoft employee Laura Beckwith, who herself recently obtained a Ph.D. in computer science. Beckwith specializes in studying the way people use computers to solve everyday problems. A few years ago she noticed that men were more likely than women to use advanced software features, especially ones that help users find and fix errors. This process of (142) fixing errors is known as debugging, and it’s a crucial step in building software programs that work.

Beckwith thought this gap could be explained not so much by a difference in ability as by a difference in confidence. When it comes to solving problems, a lack of confidence has been shown to affect not only the outcome we achieve but the approach we take. This is a subtle difference, but an important one. Among other things, self-doubters are slower to abandon faulty strategies and less likely to come up with alternatives: they stay the course.

So Beckwith, with the help of colleagues, devised a test of her own. First, she tested the confidence levels of a group of men and women by asking them whether they thought they could find and fix errors in spreadsheets filled with formulas. Then she sat them down in front of computers and had them do exactly that, working against the clock.

The key to success was using the debugging feature of the spreadsheet software. But Beckwith found that only those women who believed they could do the task successfully—that is, only those with high confidence—used the automated debugging tools. The women with lower confidence, on the other hand, relied on what they knew, which was editing the formulas one by one. This approach actually ended up leaving more bugs in the spreadsheets than there were when they started.

This was puzzling. Beckwith knew from questionnaires handed out after the test that the women understood how the debugging tools were supposed to work—yet many of the women chose not to use them. Why? Once again the answer comes down to the ways men and women perceive risk. When the women in Beckwith’s study did their own private cost-benefit analysis, (143) many of them concluded that the risk of making a mistake by using the debugging tools was not worth the potential reward of fixing the bugs.

p. 149

We All Think We’re Above Average

Not long ago a Princeton University research team asked people to estimate how susceptible they and “the average person” were to a long list of judgmental biases. Most of the people claimed to be less biased than most people. Which should come as no surprise: most of us hate to think of ourselves as average—or, God forbid, below average. So we walk around with the private conceit that we are above average, and in that conceit lies the seed of many mistakes.

“Overconfidence is, we think, a very general feature of human psychology,” says Stefano Della Vigna, a professor of economics at the University of California, Berkeley…. And his research has led him to a general conclusion: “Almost everyone is overconfident—except the people who are depressed, and they tend to be realists.”


P. 162

Information Overload

What might explain the persistence of such an illusion? Part of the answer lies in the beguiling power of information. The more we read (or see or hear, for that matter), the more we think we know. But, as has long been observed, that isn’t necessarily so. Often what happens is that we don’t grow more informed; we just grow more confident.

Summaries of information, for instance, often work as well as — and sometimes even better than — longer versions of the same material.


p. 180

One-Trick Ponies

Another problem with the bushwhack approach is that people tend to be one-trick ponies. If we learn to do something a certain way, we tend to stick with it. Psychologists refer to this mental brittleness as “functional fixity.”

181

…tried out the…problems on a fresh group of subjects…. Nearly all of them figured out the simpler way…. The conclusion from these experiments was obvious: People in the initial experiments had become so set in their ways that they were blinded to the newer, simpler solution. But to those who came to the problem fresh, the simpler solution was obvious.

p. 183

We Don’t Constrain Ourselves

One way to reduce errors is by introducing constraints. What are constraints? Essentially, they’re simple mental aids that keep us on the right track by limiting our alternatives. I like to think of them as “bumpers” that nudge us back on course. But another way to think of them is as error blockers.
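The same idea carries over to software; as a loose illustration (my own example, not the book’s), constraining an input to a fixed set of alternatives blocks a whole class of typo errors at the point of entry instead of letting them slip downstream.

from enum import Enum

class ShippingMethod(Enum):
    GROUND = "ground"
    AIR = "air"
    OVERNIGHT = "overnight"

def ship(order_id, method_name):
    # Converting the free-form string to the Enum is the constraint: any value
    # outside the three allowed options raises ValueError right here, instead
    # of silently propagating a typo through the rest of the system.
    method = ShippingMethod(method_name)
    return f"Order {order_id} shipped via {method.value}"

print(ship(42, "air"))       # fine
try:
    ship(43, "ovrnight")     # the typo is blocked immediately
except ValueError as err:
    print("blocked:", err)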


p. 189

The lesson here should be obvious: simplify where you can, and build in constraints to block errors.

Looking for Root Causes

…mistakes attributed to human error often have deeper roots elsewhere. This is one reason why we so often fail to learn from our mistakes: we haven’t understood their root causes.

190
In the case of human error, root cause analysis requires a deep understanding of human motivation. As we have seen … we believe we will act in one way, but often act in another—even in ways that would appear to be against our own self-interest. Even worse, many of us don’t know when we’re being biased. Our judgments may be distorted by overconfidence or by hindsight or by any of the other tendencies we’ve talked about.

191
Knowing Where to Look

Identifying the source of an error also requires knowing where and how to look. After something goes wrong, we tend to look down — that is, we look for the last person involved in the chain of events and blame him or her for the outcome. But this approach, satisfying though it may be, usually doesn’t stop an error from being repeated….and by separate people. If multiple people make the same mistake, then that should tell us something about the nature of the mistake being made: its cause probably isn’t individual but systemic. And systemic errors have their roots at a level above the individual. Which is why, when looking for the source of errors, it pays to look up, not down.
