If you've ever had the embarrassing experience of suddenly realizing that you don't know what you're talking about as you were trying to explain something to someone else, you might have first-hand knowledge of what psychologists call the illusion of explanatory depth. I say might because there are many ways to not know what you're talking about. The illusion of explanatory depth is just one of those ways. It occurs only with respect to explanatory knowledge — the kind of knowledge that involves causal patterns.
Taking care not to fall victim to this illusion is important in the modern workplace, where so many of us must explain to others why we do so much of what we do. To manage that risk, we must understand when the illusion is most likely to form and how the different kinds of explanatory knowledge are affected.
When the illusion is most likely to appear
The illusion has been observed only in self-assessment with respect to "knowing why" (explanatory knowledge). For example, most of us know that hundreds of human-made satellites orbit the Earth. But few of us can explain why they don't all immediately fall into the oceans or crash into the land.
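The short answer, by the way, is that satellites are falling; they're simply moving sideways so fast that the ground curves away beneath them as they fall. A back-of-the-envelope sketch in Python shows the speed required (the physical constants are standard values; the 400 km altitude is just an illustrative choice for low Earth orbit):

```python
import math

# Standard physical constants
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def orbital_speed(altitude_m: float) -> float:
    """Speed of a circular orbit: gravity supplies exactly the
    centripetal acceleration, so v = sqrt(G * M / r)."""
    r = R_EARTH + altitude_m
    return math.sqrt(G * M_EARTH / r)

# At a typical low-Earth-orbit altitude of about 400 km:
print(f"{orbital_speed(400e3):,.0f} m/s")  # about 7,700 m/s
```

At that speed, the satellite's curved fall matches the curvature of the Earth, so it never gets any closer to the ground.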
The illusion hasn't been observed experimentally with respect to all kinds of knowledge. For example, the illusion doesn't occur with respect to procedural knowledge. Procedural knowledge is the kind of knowledge that pertains to how we perform a particular task, such as administering a COVID-19 vaccination to a patient, or deleting a file from a computer, or gaining approval, in your organization, for a capital purchase of more than $50,000.
Nor have we observed the illusion of explanatory depth with respect to descriptive knowledge, which is knowledge of specific facts or propositions. Descriptive knowledge includes, for example, the names of the bones of the human hand, or where to find the Sort command on the ribbon of Microsoft Word, or the names of the signers of the U.S. Declaration of Independence.
To say that the illusion of explanatory depth hasn't been observed with respect to procedural knowledge or descriptive knowledge isn't to imply that humans are at ease with acquiring or retaining those kinds of knowledge. It does mean that we are less likely to be mistaken in self-assessment with respect to knowledge that consists of "knowing how" (procedural knowledge), or "knowing that" (descriptive knowledge), than we are with respect to "knowing why" (explanatory knowledge).
Kinds of explanatory knowledge
Researchers have identified four categories of explanatory knowledge.
Knowledge relating to causal patterns
Explanatory knowledge of the first category relates to causal patterns among the entities whose behavior is being explained. There are four types of causal patterns: common cause, common effect, linear causal chains, and causal homeostasis. Common-cause patterns appear frequently in diagnosing the misbehavior of systems. Debugging code is a fine example, in which multiple forms of misbehavior can often be traced to a single cause.
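To make the common-cause pattern concrete, here's a minimal sketch in Python. The scenario and names are invented for illustration: a single off-by-one defect produces two symptoms that, at first, look unrelated.

```python
# Hypothetical example: one defect, two seemingly unrelated symptoms.
def monthly_total(daily_sales: list[float]) -> float:
    total = 0.0
    # Bug: the range stops one short, so the last day is always dropped.
    for i in range(len(daily_sales) - 1):  # should be range(len(daily_sales))
        total += daily_sales[i]
    return total

# Symptom 1: month-end totals are short by one day's sales.
print(monthly_total([100.0, 250.0, 75.0]))  # 350.0, not 425.0
# Symptom 2: a single-day report shows no sales at all.
print(monthly_total([40.0]))                # 0.0, not 40.0
# Both symptoms trace back to the same cause: the off-by-one in the loop.
```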
Common-effect explanations appear when we try to explain the behavior of complex systems in which multiple causes converge on a single effect. For example, the causes of the Chernobyl nuclear accident include both human error and a reactor design that made the reactor inherently difficult to manage under low-power conditions.
Linear causal chains combine the common-cause and common-effect forms: a single cause leads to a single effect through a chain of intermediate causes. An example is the explosion of the Space Shuttle Challenger, in which one might identify a linear causal chain that includes the design of the O-rings, the failure to notice O-ring erosion in previous launches, and the decision to launch in cold weather. [Rogers 1986]
Causal homeostatic explanations focus on reasons why a system state, or a given set of system attributes, might persist over time. For example, if a system software module is repeatedly implicated in system failures, even when those failures are otherwise unrelated, a causal homeostatic explanation might point to the general disorganized state of the code, or its lack of a modular design.
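As a hypothetical sketch of how that pattern persists (the module and names are invented), consider a "utility" module built around shared mutable state. Because every feature touches it and nothing about it is modular, otherwise unrelated failures keep tracing back to it:

```python
# Hypothetical sketch of a failure-prone, non-modular "utility" module.
_cache: dict[str, object] = {}  # global state: mutated by many callers, owned by none

def remember(key: str, value: object) -> None:
    _cache[key] = value

def recall(key: str) -> object:
    # Any feature that reads a key before some other feature writes it
    # fails here -- so billing, reporting, and login failures, though
    # otherwise unrelated, all implicate this one module.
    return _cache[key]
```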
Awareness of these four types of causal patterns can be enormously useful as a framework for seeking causal patterns in new explanations.
Knowledge relating to explanatory stances
Keil surveys the literature on another way of categorizing explanations, in terms of what he refers to as stances or modes. [Keil 2006] Three stances are the mechanical stance, the design stance, and the intentional stance. In the mechanical stance, we focus on how mechanical objects interact. For example, in the game of tennis, two keys for imparting topspin to the ball are keeping the racket face slightly closed, and brushing up on the back of the ball.
In the design stance, our explanations focus on purpose. For example, the counterweight of an elevator reduces the torque required of the motor that lifts the elevator cab from the first floor to the second.
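A rough calculation shows how much the counterweight helps. All of the numbers below are invented, merely plausible values, not data about any particular elevator:

```python
# Illustrative only: torque the motor must supply at the drive sheave.
g = 9.81                 # gravitational acceleration, m/s^2
sheave_radius = 0.3      # m
cab_mass = 1000.0        # kg, empty cab
load_mass = 400.0        # kg, passengers
counterweight = 1200.0   # kg, often cab mass plus ~40-50% of rated load

# The motor must overcome the net imbalance of weight across the sheave.
torque_without = (cab_mass + load_mass) * g * sheave_radius
torque_with = abs(cab_mass + load_mass - counterweight) * g * sheave_radius

print(f"without counterweight: {torque_without:,.0f} N*m")  # about 4,120 N*m
print(f"with counterweight:    {torque_with:,.0f} N*m")     # about 590 N*m
```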
In the intentional stance, we attribute beliefs and desires to the (usually inanimate) entities whose behavior we're explaining. For example, the reason why we cannot load into Microsoft Excel two workbooks with the same name is that Excel uses the filename to distinguish the workbooks. If two workbooks had the same name, Excel would get confused.
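Setting the anthropomorphism aside, a minimal sketch suggests the kind of mechanism such an explanation glosses over. This is not Excel's actual implementation, just a hypothetical illustration of an application that keys its open documents by filename alone:

```python
# Hypothetical illustration, not Excel's actual code.
open_workbooks: dict[str, str] = {}  # filename -> full path

def open_workbook(path: str) -> None:
    filename = path.rsplit("/", 1)[-1]
    if filename in open_workbooks:
        raise ValueError(f"A workbook named {filename!r} is already open.")
    open_workbooks[filename] = path

open_workbook("/reports/q1/budget.xlsx")
open_workbook("/reports/q2/budget.xlsx")  # raises ValueError: same filename
```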
Any given explanation might have properties of more than one stance. But to my mind, choosing a stance and adhering to it offers the best chance of achieving clarity.
Knowledge relating to domains of phenomena
The causal patterns that are relevant for a given domain of phenomena vary with the domain. For example, when explaining why people might not respond truthfully to workplace surveys, we must understand what kinds of survey questions are likely to be affected by the prevailing degree of psychological safety. But psychological safety is unrelated to how a bicycle works.
When it comes to explanations, different domains of phenomena require different knowledge. And knowing what knowledge is relevant can be the hard part of constructing the explanation. For example, in assessing the progress of an Agile Transformation by fielding a survey, expertise in Agile processes can be less important than expertise in psychological safety.
Knowledge relating to value-laden or emotion-laden situations
Explaining the behavior of others can involve attributing it to complex networks of values, social norms, and emotions. These factors can shift the "threshold for acceptance" of such explanations: what counts as an acceptable explanation depends in part on the prevailing norms and emotions. [Rozenblit 2002]
For example, in explaining why a team member felt insulted when omitted from a special meeting, we would need to invoke an understanding of the team norms about invitation lists for meetings. An explanation that fails to invoke that understanding would likely be unacceptable to many team members.
Last words
Watch carefully for examples of this illusion in action. One way to learn recovery techniques is by watching how other people recover from realizing they don't actually know as much as they thought they did.
Footnotes
[Rogers 1986] Report of the Presidential Commission on the Space Shuttle Challenger Accident (the Rogers Commission report). Washington, DC, 1986.
[Keil 2006] Frank C. Keil. "Explanation and Understanding," Annual Review of Psychology 57 (2006).
[Rozenblit 2002] Leonid Rozenblit and Frank Keil. "The misunderstood limits of folk science: an illusion of explanatory depth," Cognitive Science 26:5 (2002).