Impediments

Effect of self-justification

(Tavris & Aronson)



People, particularly those who take risks, view themselves as good people who strive to do well. As a motivating force this belief gives strength to an organization, but as a reductive belief, that is, a belief that collapses ideas into one narrow belief, the idea of one's goodness can supersede other beliefs about oneself, leading to the exclusion of one's imperfections. The reductive belief that one is good, and thus can do only good things, interferes with self-criticism and with criticism from others. It also interferes with the interpretation of events regarding the consequences of one's actions. If we cannot do wrong, then we attribute wrong to outside forces or to other people.

In uncertainty or the unexpected, we may do everything correctly yet fail. If we are willing to change our beliefs and actions, even when they were correct, we can work toward a solution and become reliable. We can adapt to changing events or changes in the environment. This is not a dualist approach in which there is only a right way or a wrong way, but an approach that reflects a spectrum from less right to more wrong. One can also be right in multiple ways, particularly with respect to effectiveness at a specific time with specific resources.

Keeping to the correct process but reaching the wrong or undesired outcome creates a dissonance in how we view ourselves. 
David Burns (1999) observed that when we succeed we have high self-esteem, but this can come with a corollary: when we fail, we are not worthwhile. Some respond to this failure not by letting their esteem drop but by justifying their actions: "mistakes were made (but not by me)," as Tavris and Aronson (2007) put it in their book of that name.

Justification of one's actions, especially the wrong ones, comes from the drive to reduce the dissonance between one's sense of self and one's wrong decisions. Cognitive dissonance describes the tension between two conflicting cognitions (ideas, attitudes, beliefs, or opinions) that are psychologically inconsistent. To make sense of these contradictory thoughts, some will self-justify their actions and place responsibility on others or on outside influences.

Most people will justify a belief or action even when faced with proof that they are wrong. This is not because they are bad people, or because they lie or make excuses, but because they see themselves as good and their intentions as good. This self-justification convinces them that they did their best, that it was the right thing to do, or that they reached the best outcome possible. Self-justification can be more dangerous than the lie or the excuse. At some level, people hold two beliefs, ideas, opinions, or values that psychologically conflict with each other. "I do not lie" and "I do not hurt people" can conflict: a young friend asks what you think of their new haircut, which you find awful. Cognitive dissonance causes the tension you feel at that moment. Cognitive dissonance drives self-justification.

Self-justification drives one to justify a decision or action by adding benefits after the decision is made. The individual will also search for information that confirms the decision and ignore information that proves it wrong: confirmation bias.

Self-justification facilitates the dichotomy between "us" and "them" (us good guys vs. those bad guys) that disrupts information flow and eases blame toward the "others." It produces a blind spot: the belief that one holds no prejudice, only good judgment or bitter experience. If something goes wrong, we do not hold ourselves responsible; we place responsibility on others. This fails, because we can take responsibility only for our own behaviors, for what we can control.

Self-justification blocks the ability to learn from one's own mistakes. Learning requires going past "I made the mistake, not you; my mistake was in trusting you, it is my fault" to see how one's own behavior contributed to the failure. Self-justification creates a blind spot for finding one's own errors or those made by one's respected colleagues.

Self-justification is the primary enemy of reliability. 

Tavris, Carol, and Elliot Aronson. Mistakes Were Made (But Not by Me). Orlando, FL: Harcourt, Inc., 2007.

Burns, David. The Feeling Good Handbook. New York, NY: Penguin Putnam, Inc., 1999.

Reductionism/The Logic of Failure

The actions that lead to failure appear logical at the time

Dietrich Dorner



Small mistakes, when they do not produce noticeable consequences, can accumulate into bad habits over time. Thus, when catastrophic failure occurs, it comes less from a sudden turn of events than from unperceived forces that have chipped away at the supports necessary for a favorable outcome.

Failure can occur from the inability to envision consequences. Examples include using an abstract concept in a concrete manner, over-reliance on models, and maintaining a hypothesis by ignoring or discounting information contrary to it.

People may reduce their hypothesis for problem solving to a single element to make complex situations easier to manage. This leads to the belief that if they can only correct that element, or a specific person, then things will become right. The reductive nature of this method, reducing things to the need for only one intervention, also interferes with the identification of those elements necessary for a change in the system. As a result, the organization has incomplete knowledge of the system's structure.

We can acquire knowledge of structure by probing with actions and evaluating positive and negative feedback. Positive feedback occurs when a change in one element enhances the desired element. Negative feedback occurs when a change in one element diminishes the desired element. The system may cycle back and forth, with negative feedback used to make mid-course corrections, which can look like error. Is it really error or failure if we do not reach the desired goal? Confusing intention, what we would like, with motivation, what drives us, interferes with the actions necessary for success and with the clarification of our goals.
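
As a toy illustration of probing with actions, consider the minimal Python sketch below. It is hypothetical, not from Dorner's book; the one-line coupling between elements is invented purely for illustration. An actor nudges one element of an opaque system and classifies the resulting feedback as positive or negative by what happens to the desired element.

# Hypothetical sketch, not from Dorner: probe one element of an opaque
# system and classify the feedback by what happens to the desired element.
def probe(state, element, nudge, desired="output"):
    before = state[desired]
    state[element] += nudge  # act on the system
    # Invented coupling standing in for the unknown system structure.
    state[desired] += 0.5 * state[element] - 0.2 * before
    if state[desired] > before:
        return "positive feedback: the change enhanced the desired element"
    return "negative feedback: the change diminished the desired element"

state = {"staffing": 1.0, "output": 2.0}
print(probe(state, "staffing", nudge=0.5))   # enhances: positive feedback
print(probe(state, "staffing", nudge=-2.0))  # diminishes: negative feedback

Repeated probes of this kind, attending to both kinds of feedback, are how knowledge of the system's structure accumulates.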

Dorner, Dietrich.  The Logic of Failure: Recognizing and Avoiding Error in Complex Situations. New York: Metropolitan Books, 1996. 

Deductive Reasoning

Deductive problem solving preferred to inductive problem solving

It is often more straightforward to teach using principles organized coherently and following a logical pattern. When testing students, the exam fairest to the student asks for answers in a structured manner that follows known principles. Rules structured around known principles reduce the fuzzy boundaries around a rule and limit the gray areas between rules that are difficult to teach or test. One can be quite clear and organized when moving from a general principle to a specific situation. Deductive problem solving gives these benefits.

In life situations, where information constantly changes and windows of opportunity continually pass, time to grasp the principle at work may not exist. While focusing on the specific situation, recalling only what one can use from working memory, and using past experience, the individual brings some control to the situation and begins solving the problem. The individual learns what works through action. 

This approach, learning through action, appears disorganized as the individual moves from the specific situation to a general principle. However indirect or inefficient these actions appear, they are effective when dealing with uncertainty. These are the benefits of inductive problem solving. 

Viewing the situation as a puzzle vs. a mystery

(Dennis Kowal, Wolfberg)

A puzzle has a set number of pieces that fit together in only one way. If we know the characteristics of the pieces and the form of the puzzle, we can assemble it given the necessary time. Looking at problems and situations as a puzzle drives the person to search for the proper piece, or to place pieces together, confident that there is an endpoint.

Complex systems have numerous possible paths to the best answer. One may find redundant pieces (information) that wrongly gain importance, missing pieces that may not be needed, or places for a puzzle piece that does not exist. One may find people who are helping move toward a solution while others interfere, and those who appear to interfere may in fact be helping.
 
The dynamic, complex problem acts like a mystery rather than a puzzle.

Untested Self-confidence

Some people demonstrate a strong self-confidence, almost to a melodramatic degree. When working with them we learn that they have never experienced the vulnerability of near death or colossal failure. It also sometimes appears that the most tested or proven individuals have the least self-confidence. Do not confuse this with timidity. As George Orwell wrote, "the only check on it is that sooner or later a false belief bumps up against reality, usually on the battlefield." People who have felt their vulnerability in the immediacy of a threat may appear less self-confident than those who have always prevailed, yet they are more likely to be trustworthy. Those with the most outgoing sense of self-confidence may hold the public's trust until their "false belief bumps up against reality."

Self-preservation

Watching out for others and protecting others improves situation awareness and collaboration. It appears to make sense to take care of oneself first and, to an extent, that is important. But in the paradox of crisis, the one who looks after himself or herself at the expense of others tends to have the poorest outcome. You can watch out for your team's safety more easily than for your own, and they can watch out for your safety more easily than for their own. In the paradox of danger, you are better off watching out for the safety of others, because they can watch your safety better than you can.

Morality of Error

Error is preventable

How we view error can predict whether error is used to improve the system or suppressed, where it will accumulate and contribute to catastrophic failure. When error becomes a moral issue or an indicator of poor judgment, people tend to go along with the group or hide their mistakes.

Confusing error with failure moves the reason we failed from multiple causes to a single point. Error can be identified, captured, and managed. Failure may occur because the circumstances prevented success from happening.

Error can be predictable, in which case it is an error of the system and the system would benefit from evaluation. Error can be unpredictable, in which case it serves as negative feedback providing information on what not to do.

Morality of Disagreement

Disagreement becomes a moral issue

Multiple points of view contribute to mindfulness. Peter Sarna, of the Oakland (CA) Police Department, stressed that in a complex situation the organization should have the requisite variety of staff to bring expertise from various and appropriate disciplines into the decision process. When this occurs, members must have the ability to disagree and present other points of view. Treating disagreement as a moral issue, or invoking the melodrama of the crisis ("Don't you know this is an emergency?"), obstructs the flow of information. Bad decisions come not so much from a lack of professionalism as from bad information.

Centralized Authority

Central decision making

Culturally, it makes sense for the one in charge or with authority to make the decisions. But when information moves to a central decision maker, information is lost; when decisions move from the center back to the actors, the decisions are blunted. The greater the distance between decisions and results, the more the feedback loops necessary for maneuverability and resilience are dampened.

Decision making has an ecological component in dynamic and complex states. The closer the decision maker works to the point of action, the more likely one is to have current information and to experience rapid feedback loops. Also, the closer decision making occurs to the point of effecting action and to knowledge of resources, the more effective the actions will be when they are carried out. While it appears paradoxical, to make decisions centrally with inadequate information in dynamic, complex states is to court mistakes.

Diffusion of Responsibility

The stubborn problem: whose responsibility is it?

Frustration easily sets in with the stubborn problem. This can lead to a sense that the problem cannot be solved, and one may learn to be helpless. This learned helplessness, identified by Seligman, can lead to attrition of good people in the highly reliable organization.

Happiness

Happiness or Pleasure Supersede Satisfaction

Some members, particularly novices, confuse happiness or pleasure with satisfaction, or treat satisfaction as a subsidiary goal. Happiness and pleasure are passive; they occur because something happened to us. Satisfaction comes from active processes such as solving a problem or adapting to something new. Satisfaction is more likely to result in productive behavior.

Boxed Thinking

Use of Slang, Jargon, and Clichés

Slang consists of the informal, nonstandard words people use; it changes over time more rapidly than conventional language. Jargon is slang peculiar to a social or professional group. Both slang and jargon work as a shorthand to express ideas quickly and often colorfully. Clichés communicate quickly, but their overuse lends them easily to improper application. Slang, jargon, and clichés carry cultural and personal significance that the person using them may not be aware of.

Used out of context, slang, jargon, and clichés stop thinking and impede information flow.

Expectations

Expectations derive from the interests or desires of oneself or of others. Because they are not an integral part of the problem, they interfere with problem solving. When people place expectations above the demands of the system or the problem, they begin working with extraneous material while the problem escalates.

Conspiracy of Inaction

The success of failure

When a system fails but the failure is not recognized, or wrong approaches are used without consequence, people attribute their success to those actions. Over time this deviation becomes accepted as normal operating procedure.

Strategy vs. Operations

Safety: Strategic vs. Tactical

Roger Resar, M.D., Senior Fellow, Institute for Healthcare Improvement


In discussing whether safety is strategic, where defenses are planned and made compulsory in preparation to prevent events, or tactical, where the organizational response is devised and created during the response to a threat, I have become convinced by the work I have done on risk resilience in the last couple of years that the answer to safety is multi-pronged. While the approach of using checklists or other pre-designed defenses is not wrong, it also is not a total solution. If one looks at safety as purely linear, then all adverse events can be traced back to a defect, and if that defect is fixed, all future events of that nature can be prevented.
Dietrich Dorner, in his book The Logic of Failure, calls this the reductive hypothesis. I discuss it on the website under "Obstacles." The problem with the reductionist approach is that it will work sometimes, but for many threats and subsequent events it will not, because the elements leading to many adverse events will never align themselves in the same manner again.

For that reason many of our root cause analysis (RCA) evaluations are almost useless in their capacity to prevent future adverse events. They waste resources and, even worse, by increasing the rigidity of the organization with rarely used rules, policies, regulations, checklists, and other pre-designed defenses, they decrease the overall safety of the organization by failing to recognize the importance of flexibility in responding to threats.

I think the answer is a balance of both: strategic (pre-designed defenses) and tactical (the ability to respond to threats). The idea of subjecting certain issues that are especially egregious and important to various rules and policies is very credible and important. For the most part, however, I believe we also need to teach organizations how to be aware of the hazards in the field of medicine. Those hazards need to be formulated in a way that asks "what do I not want to happen?" and we need to train and compliment staff on the ability to prevent escalation by recognizing hazard and the development of hazard.

It is only through constant tending to "mindfulness" that we will move to the point where safety depends on the combination of certain rules, policies, and regulations that must be followed, along with a constant understanding of the hazards we need to be aware of. That awareness will prevent minor events from escalating into major events, and it will help prevent those events we could not possibly have predicted with a rule or policy.

The key to the safety question, I believe, is to look at the problem as both linear (for some issues) and non-linear (for many others). The same solution will not work for both.

Governance

Group Sense Making and Tradition

Marc Flitter, van Stralen, Kelly


Analysis of two instances of medical staff elimination of established high reliability (HR) programs at separate hospitals suggests that prominent features of current medical staff governance codify and reinforce physician sense making that is resistant to the adoption of HR processes.

Physician sense making profiles of the two medical staffs were developed by first identifying the HR theoretical tenets driving the processes of each program, one an HR pediatric intensive care unit and the second a peer review committee that replaced standard of care determinations with cause analysis. The modifications of physician workflow (PW) required to implement these processes, delineated through comparison with the PW employed in the prevalent hierarchical model of patient care, provided distinct components of requisite HR-PW change. Elaborating the behavioral and cognitive characteristics of these HR-PW components within the context of the seven properties of sense making resulted in a hypothetical sense making profile (SP) consistent with the observed actions of the two medical staffs. This SP appeared to be validated by several recently published studies of physician sense making within hospital settings.

Traditional medical staff bylaws were reviewed to identify the extent to which they address aspects of the SP, including physician autonomy and the components of classic peer review that reinforce a provider-centric analysis of adverse events. The CMS interpretation of guidelines, which specifies that a hospital's medical staff shall be responsible for the quality of patient care, and the dependence of credentials committees on peer review grading of physicians constitute two specific characteristics of hospital governance seen to perpetuate a physician SP that is resistant to the adoption and implementation of HR processes.

The necessary components for achieving high reliability in a health care organization have been identified as robust processes of improvement, a pervasive culture of safety, and collective mindfulness. We believe that the physician SP describing the two medical staffs that rejected high reliability outright offers insight into the specifics of physician resistance to change when confronted with the challenges of implementing a culture of safety and participating in collective mindfulness. Paradoxically, it is these same specifics, when seen in relation to the seven properties of sense making, that offer strategies for overcoming what has proven to date to be a pervasive inertia of medical staffs. Beginning with the necessary changes in medical staff governance, which currently institutionalizes physician sense making to the detriment of quality and safety, the manner in which physicians perceive the ongoing flow of events, the nature of the cues they extract, the characteristics of their initial responses that affect the plausible interpretations of their perceptions, and the manner in which their identities, sense of self-esteem, and standing among their colleagues are reinforced can all become drivers for achieving high reliability.
