FAQs

FAQs for HRO

1. What is the definition of a High Reliability Organization?


Based on previous research by Perrow (1984), Weick and Roberts (1993), and other work by Dr. Karlene Roberts in the 1990s, HROs can be defined as organizations that have fewer than normal accidents. This decrease in accidents occurs through a change in culture. Technology has some influence, but not in isolation, nor without a change in the organization's culture. Roberts and her colleagues have defined the characteristics of organizations that have fewer failures than expected and their implications for organizational design.

In 1984, Perrow investigated "normal accidents." He concluded that while all organizations would eventually have accidents because of their complexity and interdependence, some organizations were remarkably adept at avoiding them. The question Roberts sought to answer in her stream of research is why some organizations have fewer failures than others.

From this question grew the definition and characteristics of HROs. At this point in its development, the research has identified some key characteristics of HROs. These include organizational factors (i.e., rewards and systems that recognize the costs of failures and the benefits of reliability), managerial factors (i.e., communicate the big picture), and adaptive factors (i.e., become a learning organization) (Grabowski & Roberts, 2000). More specifically, HROs actively seek to know what they don't know, design systems to make all knowledge that relates to a problem available to everyone in the organization, learn in a quick and efficient manner, aggressively avoid organizational hubris, train organizational staff to recognize and respond to system abnormalities, empower staff to act, and design redundant systems to catch problems early (Roberts & Bea, 2001).

In other words, an HRO expects that its organization and its sub-systems will fail. It works very hard to avoid failure while preparing for the inevitable, so that it can minimize the impact of failure when it occurs.

2. Is HRO a new thing?


The roots of HRO theory were built in a stream of theoretical advances by Karlene H. Roberts (UC Berkeley) (read her article "HRO Has A Prominent History"), Herbert Simon, James March, and Karl Weick (University of Michigan), who shifted attention away from organizations as rational machines toward organizations as arenas in which complex organizational processes occur. The Cuban Missile Crisis was analyzed through this emerging lens by Graham Allison in his 1971 book, Essence of Decision: Explaining the Cuban Missile Crisis.

Other important HRO roots, because they explore the phenomenon of deviation-amplifying loops, include Cohen, March, and Olsen's study of garbage-can decision-making processes, Barry Turner's work on catastrophes, and Barry Staw, Lance Sandelands, and Jane Dutton's research on "threat-rigidity cycles."

An initial conference at the University of Texas in April 1987 brought researchers together to focus attention on HROs. Researchers at the University of California, Berkeley, the University of Michigan, the George Washington University and many other universities around the world began to look at organizations in high-risk industries.
At Berkeley, initial research on HROs was done within the Federal Aviation Administration’s Air Traffic Control Center, a commercial nuclear power plant, and naval aircraft carriers.

3. Why HRO principles instead of step-by-step processes?


Researchers have found that successful organizations in high-risk industries continually reinvent themselves. For example, when an incident command team realizes that what they thought was a garage fire has changed into a hazardous material incident, they completely restructure their response organization. HRO teams are comfortable with, and adept at, quickly building creative responses to failure. Failure happens, and HRO teams lean on their training, experience, and imagination as a reliable means to recover from it.

Drs. Weick and Sutcliffe have identified five principles of High Reliability Organizing as responsible for the "mindfulness" that keeps HROs working well when facing unexpected situations:

1. Preoccupation with failure
2. Reluctance to simplify interpretations
3. Sensitivity to operations
4. Commitment to resilience
5. Deference to expertise

Reference: Weick, K. E., & Sutcliffe, K. M. (2007). Managing the Unexpected: Resilient Performance in an Age of Uncertainty.

4. What is a practical HRO industry example?


Practitioners in High Reliability Organizing (HRO) work in recognized high-risk occupations and wildfire operations environments. Wildfires create complex and very dynamic mega-crisis situations across the globe every year. U.S. wildland firefighters, often organized using the Incident Command System into flexible inter-agency incident management teams, are not only called upon to "bring order to chaos" in today's huge mega-fires; they are also requested for "all-hazard events" like hurricanes, floods, and earthquakes. The U.S. Wildland Fire Lessons Learned Center has been providing education and training to the wildland fire community on High Reliability Organizing since 2002. Firefighters have learned that existing HRO behaviors can be recognized and further developed into high-functioning skills of anticipation and resilience. Learning organizations that strive for high performance in things they can plan for can become highly reliable organizations that are better able to manage events that are totally unexpected and, by definition, could never be planned for.

The wildland fire community has been working with Karl Weick since the mid-1990s, and more recently with his co-author Kathleen Sutcliffe, in several national Managing The Unexpected Workshops sponsored by the Lessons Learned Center.

5. Does HRO ensure safety?


No, this is not another Zero Defect Program. The best HROs know that people will eventually make mistakes and that their systems can become vulnerable. HRO principles help organizations build, sustain, and thrive in healthy safety cultures.

There are those who argue that tight control and micro-managing, with a more authoritarian approach, will keep decisions in the hands of experts and thereby ensure safety. However, the research supports the opposite view: decision migration, safety culture, and cooperative approaches are what develop a high reliability organization.

As technology becomes more complex and pervasive, we can now address uncommon situations in a productive manner. However, the same technology can itself cause uncommon, catastrophic situations, such as the Exxon Valdez oil spill, the Three Mile Island nuclear power plant incident, and the Challenger shuttle tragedy. There are evidence-based processes that one can use before, during, and after an evolving disaster to fill many of the gaps left where policies, rules, and other incomplete barriers to error never anticipated that a vulnerability could develop.
