
Jumping to Conclusions, the Fundamental Attribution Error, and Thinking, Fast and Slow

  • Writer: Joe
  • Apr 20, 2020
  • 5 min read

If you’ve recently heard my elevator pitch, you’ll know I enjoy thinking about decisions. In the recent past, that’s been from the quantitative lens of analytics. But if you take a look at the books I’ve read, you’ll see a significant presence from the softer side of decision-making. For my spring semester at Sloan, I’m exploring this softer side in my classes. As part of this coursework, I’ve moved through a good number of business review articles discussing some behavioral aspect of organizations. Beyond the (awkward) choice to break up text into as many skinny columns as possible, these articles tend to draw inspiration from a common source: the research of Amos Tversky and Daniel Kahneman.


If you want a really in-depth look at their work, you should read Thinking, Fast and Slow. As I’ve written in the past, it’s a seminal book that will change the way you view the world.

For now, here’s my quick and dirty summary. Humans generally think in two ways. “System 1” is quick, intuitive, and builds on our cumulative experience to help us recognize things. When we see a red light, System 1 tells us to stop. “System 2” is more deliberate and powerful, but quite lazy. It’s the side of our brain that analyzes, questions assumptions, and “thinks through” things. The book goes through a host of examples (with plenty of empirical evidence) to support this understanding, painting a clear picture of how we often misuse System 1 techniques to solve System 2 problems.

We humans like to think that we naturally examine evidence before coming to conclusions. When we don’t, we “jump” to conclusions. In this blog post, I’m going to apply the Thinking, Fast and Slow framework to explore the structure of that jump. I’ll explain my thoughts on the Fundamental Attribution Error and how it’s a specific but illuminating example of this thought process at work. Along the way, I hope to develop a compelling argument that most people (including you and me) underestimate how often they jump to conclusions.


What happens when we jump to conclusions? To the listener, it looks like the speaker reaches a conclusion before establishing all of the facts, often very quickly and intuitively. Jumping to conclusions is more of a reflex than a process, and it falls squarely in System 1.

The simple way to test whether someone is jumping to conclusions is to have them justify the conclusion. In a world where facts are easy to verify, this simple test works well. All it requires is for the concluder to dip into System 2 and present a sound chain of reasoning, built on known and verified facts, that either terminates in their conclusion or doesn’t.
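To make the shape of that test concrete, here’s a minimal sketch in Python. The facts, claims, and function names are all invented for illustration: the idea is simply that a conclusion counts as justified only if every link in the chain is a verified fact or follows from links that were themselves justified earlier.

```python
# Minimal sketch (all facts and claims invented for illustration): a "System 2"
# audit of a chain of reasoning. A claim is justified only if it is a verified
# fact or follows from claims that were themselves justified earlier.

VERIFIED_FACTS = {
    "the light was red",
    "the car did not stop",
}

# Each step is (claim, premises the claim is said to follow from).
CHAIN = [
    ("the light was red", []),
    ("the car did not stop", []),
    ("the driver ran a red light",
     ["the light was red", "the car did not stop"]),
]

def chain_is_justified(chain, facts):
    """Return True if every step rests on facts or earlier justified steps."""
    justified = set(facts)
    for claim, premises in chain:
        if claim in justified:
            continue                                   # a verified fact
        if premises and all(p in justified for p in premises):
            justified.add(claim)                       # a sound link
        else:
            return False                               # an assumed link: a "jump"
    return True

print(chain_is_justified(CHAIN, VERIFIED_FACTS))       # True
# Skip the intermediate facts and the same conclusion becomes a jump:
print(chain_is_justified([("the driver ran a red light", [])],
                         VERIFIED_FACTS))              # False
```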

This is the mechanism behind math proofs. Mathematicians begin with a conclusion (a theorem), which they seek to prove or disprove. Often, the prover has some reason to believe the theorem is true even before they have a solid proof of it. The proof is then a backwards-looking way of developing a coherent, fully valid chain of reasoning for that conclusion. The prover must systematically consider the truth of each link, either by building on previously proven results or by proving them as they go.
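For a taste of what this looks like when it’s fully formal, here’s a tiny Lean 4 sketch (a standard arithmetic fact, not anything from this post): the final theorem is justified entirely by citing lemmas the library has already proven.

```lean
-- Tiny Lean 4 sketch: every link in the proof cites an already-proven
-- lemma from Lean's library, so nothing in the chain is merely assumed.
theorem zero_add_eq (a : Nat) : 0 + a = a := by
  rw [Nat.add_comm 0 a]   -- rewrite 0 + a as a + 0, citing commutativity
  exact Nat.add_zero a    -- close the goal with the proven lemma a + 0 = a
```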


The world outside of math is unfortunately far less convenient. In the world we live in, even verification of “simple” facts can be difficult. When you think about more nuanced relationships, it gets even fuzzier. This kind of fuzziness presents a perfect opportunity for a System 1 heuristic. Where the mathematician proves (or references proofs of) intermediate steps in reasoning, the average person in an average conversation just assumes their truth to be a natural consequence of their convenience. Now, this probably isn’t always the same heuristic at play. A quick glance at Wikipedia’s list of cognitive biases yields at least a few that could play this role (regressive bias, egocentric bias, and the von Restorff effect immediately stick out from that list, though there are certainly more at play).

Here’s the synthesis: we jump to conclusions via cognitive biases and heuristics. And once we reach the conclusion, we fill in the intermediate steps with more heuristic-driven logical links. This leaves decision-makers in a dangerous place, perched atop a heap of biased and unjustified understandings.


These heuristics are most attractive when we’re making quick decisions. In some cases, the speed is imposed by external factors, like the requirements of our decision process. But there’s another, more insidious trigger: strong (negative) emotions. Many of these cases arise from our interactions with other people. When we feel slighted or disrespected by someone’s actions, we attribute those actions to their personality or character, not to the surrounding circumstances. This is the Fundamental Attribution Error. The next domino to fall is our own impulsive actions. When we’re making decisions about someone who we feel has recently (and intentionally) slighted us, the action our impulses suggest is driven by heuristics that put us in an adversarial state. As a general rule, then, emotionally charged decisions will push you to vengefully undermine the interests of others, even at the expense of your own.


[Image: a toddler who’s (probably) about to make an impulsive decision]

Take even a cursory look at your own past experiences and you’ll probably find examples of this. Maybe there was a time you spent or sacrificed extra resources to get back at someone who had wronged you, when you could have let it go. Perhaps there was a time you fought so hard for something you thought was owed to you that the cost exceeded the value of the outcome. Or the time on a recent commute when you sacrificed safety and refused to let someone pass because they’d cut you off a couple of minutes earlier.

There’s a lot of research suggesting that killing bad projects already underway is much harder than stopping them before they begin. As my friend Shawn noted the other day, this is largely driven by emotional attachment to something you’ve been close to for a while. But that attachment doesn’t change whether it’s objectively good or bad to pursue the project.

Emotions are everywhere. The implication is that you cannot trust your first instincts when you’re angry. Negative emotions push you into decisions built on flimsy foundations. With greater gravity come stronger emotions, which in turn exacerbate the risk of worse consequences. Not only will your first instinct likely be wrong, it may set off a chain of events entirely at odds with your goals.

The effects of this show up all over popular culture. It’s why we tell people to “take a lap” or “step back.” It’s why we admire Abraham Lincoln’s unsent angry letters.

This doesn’t end with anger. It extends to our other negative emotions. Anxiety. Worry. Fear. All of them push us toward rash, poor decisions. In complicated situations, they offer overly simplistic solutions. Decision-makers, then, need to be constantly vigilant against these emotions, not in spite of the importance of the decision but because of it.

As the leader of an organization, it’s also your job to keep these emotions from running away with important decisions. The processes you put in place could be as simple as training programs that help your team members recognize impulsive emotional decisions, or as developed as access restrictions that safeguard your most important decisions. This should also inform your other processes, like how you make collaborative decisions (hint: adversarial processes are not the way to go) or how you decide which projects to pursue and which to abandon.
