
National Patient Safety Foundation

PATIENT SAFETY CURRICULUM


Module 2: Advancing Patient Safety through Systems Thinking and Design
Doug Bonacum, MBA, BS
Vice President, Safety Management, Kaiser Permanente

Transcript of recorded lecture © National Patient Safety Foundation, Boston, MA

Hello. Welcome to Module 2: Advancing Patient Safety through Systems Thinking and Design. My name
is Doug Bonacum. I’m the Vice President for Safety Management at Kaiser Permanente’s program office
in Oakland, California. Prior to joining Kaiser Permanente in 1994, I worked for several years in a
manufacturing environment, and prior to that was a submariner in the US Navy. Systems thinking and
design is an integral component of high-reliability organizations, such as our nuclear powered submarine
fleet, and their counterparts above the water, like aircraft carriers. Looking back on my experience, I think
it was the integrity and robustness of our systems that enabled and encouraged our people to be self-conscious about their beliefs and their behaviors, to question what was going on in the moment, and to have greater foresight of what was to come. This latter state of being is sometimes called "mindfulness,"
and when coupled with high-performing systems enables organizations to operate in highly complex
environments for prolonged periods of time without failure. That’s exactly what we owe our patients who
trust us with their lives, and that’s exactly what we owe our practitioners who come to work every day to
help patients heal, and above all else, to do no harm. This particular module examines contemporary
approaches to patient safety through an exploration of systems thinking, error theory, and reliable design.

You will be introduced to James Reason’s Swiss Cheese Model of organizational accidents, which has
been recognized as the defining model for describing accident causation, and will learn strategies for
applying this model to identify the underlying causes of medical errors. With a focus on reliability and the
principles of reliable design, we will also examine the discipline of human factors engineering – and
evaluate performance improvement models such as Lean, Six Sigma, and the Model for Improvement.
And, finally, we will take a look at how application of these methods can lead to improved workflow
processes, the reliable delivery of evidence-based care, and a reduction in preventable error.

Before we begin, let’s review the learning objectives for this module. When we are done, you should be
able to: One, outline major principles for systems thinking and reliable design; two, describe human error
categories and explain ways in which human factors engineering impacts safe process design; and
three, differentiate improvement models that have been utilized outside and inside of the health care
industry. To start, it may be helpful to ground ourselves in a couple of quick case examples to help
underscore the complexity of our operations and begin to think about what systems-related issues
contribute to patient harm.

In the first case, a 35-year-old male was seen for his first visit with a new provider and to seek a referral to
ophthalmology. At the end of the visit, the provider was reviewing lab results for a different patient, which
showed positive fecal blood and anemia, suggestive of an ulcer or cancer, and he mistakenly entered a GI referral for a colonoscopy on this patient. The referral stated that the patient had an occult-positive stool and mild anemia. The patient took off work and prepared himself for the procedure. The day
after his colonoscopy, the patient called the provider who made the referral and asked why he was tested
for blood in his stool and if he really needed the procedure in the first place. It was at this time that it was
identified that the patient had undergone an unnecessary procedure, and that it had been ordered on the
wrong patient.

In the second case, a 70-year-old female underwent an outpatient surgery for stress incontinence, failing
bladder, and a prolapsed rectum. Vaginal packing was placed during surgery, and accurately accounted
for in the operating room. The patient went to the recovery area, and was to be discharged directly home
from there. Her doctor wrote an order to remove the vaginal packing prior to her discharge, but he did not
communicate that to the patient or her daughter who was waiting. The nurse assigned to the patient was
a male. He was working with a female nurse, who offered to remove the patient's foley catheter and vaginal
packing. She removed the foley, but got interrupted, and forgot to remove the packing. The male nurse
assigned to the patient assumed that his colleague had removed the packing and did not double check
this. The patient followed up with her primary care provider seven days later because of vaginal pain and
gauze protruding from her vagina.

And, finally, in the third case, at a midsize hospital with a nearby primary care office, a 52-year-old with a
history of GI bleeds and ulcers, as a result of taking non-steroidal anti-inflammatory drugs, or NSAIDs,
was mistakenly prescribed an NSAID, in this case, ibuprofen, developed a GI bleed, and had to be
admitted to the hospital.

All three of these cases, although quite different, have a lot in common. They involve patients and family
members who trusted that the health care delivery system would perform flawlessly. They involve
practitioners with the best of intentions and a commitment to do no harm, and all can and do occur far
more frequently than anyone in health care cares to admit. To change the outcome of such systems
requires us to think differently about the causes; arguably all of these cases could have been prevented
with a more diligent practitioner who was preoccupied with failure, was reluctant to assume or simplify,
and was sensitive to the operations and context around them. This mindset has taken commercial
aviation, our nuclear powered navy, and other high-reliability organizations quite far in terms of safety, but
their performance is as strongly tied to systems design as it is to managing the unexpected. If we are to
dramatically move the dial on patient safety in health care, we have to improve the foundation upon
which mindful practices can thrive. We have to advance patient safety through systems thinking and
design. As a patient safety leader, systems thinking and design is an essential skillset in your
performance improvement tool box.

To help us all get on the same page with some of the terms that I have been using, let’s define a system
as a composite of people, procedures, and equipment that are integrated to perform a specific operational
task or function. Components in a system are interconnected in such a manner as to perform their task or
function when input is provided from a source such as another component or human operator. We might
think of the reliability of such a system as its ability to perform its function over time under commonly
occurring conditions. The reliability of any system can be degraded by hazards.
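
To make the notion of reliability more concrete, here is a minimal, hypothetical sketch (in Python, and not part of the lecture) of a care process made up of sequential steps that must all succeed. The step names and the 99 percent figures are invented for illustration, and the steps are assumed to be independent.

```python
# Hypothetical illustration: reliability of a multi-step care process.
# Assuming independent steps that must all succeed, the overall reliability
# is the product of the per-step reliabilities.
from math import prod

steps = {
    "order entered correctly": 0.99,
    "order transcribed correctly": 0.99,
    "medication dispensed correctly": 0.99,
    "medication administered correctly": 0.99,
}

overall = prod(steps.values())
print(f"Overall reliability: {overall:.3f}")  # ~0.961

# Even with every step performing at 99 percent, roughly 4 of every 100
# patients experience at least one failure somewhere in the chain, which is
# why simplification and redundancy figure so prominently in reliable design.
```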

Hazards are simply activities or conditions in systems that pose threat of harm. Preventable injuries to
patients typically happen when a hazardous condition inherent in the system is activated, or an unsafe act
is committed. The condition or unsafe act may set off its own sequence of untoward or undesirable
events, or it may directly cause the injury itself. In the aftermath of a preventable adverse outcome, safety
leaders focus on the set of events that occurred and led to the harm, looking to identify the hazard or hazards and keep them from recurring. Of course, there are very sound moral, ethical, reputational, legal, and even financial reasons for minimizing the risk of adverse outcomes before they occur. In addition, there are personal ones as well. All of us and our loved ones have already been or will be recipients of medical care. While it may be unfair for us to expect a health care system to be 100 percent safe, that is, completely free of all hazards and potential harm, we should expect safety to be an inherent characteristic of our systems that allows them to function as intended under reasonably anticipated conditions with an acceptable, minimal level of risk.

Systems safety is a formal, disciplined approach to hazard identification, evaluation, and control. While
most patient safety leaders won’t be systems safety engineers, they can and must learn how to apply a
variety of technical, social, and managerial methods to identify, analyze, and control hazards both before
and after adverse outcomes. Now that we have a shared understanding for what we mean by “systems”
in the context of this coursework, as well as a notion of hazards as an activity or condition that poses a
threat of harm, it is useful to dive deeper into what we mean by “error” and see how human error in
medicine is often related to the system our practitioners operate within.

As noted in Module 1 of this curriculum, in 1999 the Institute of Medicine, or IOM, reported that between
44,000 and 98,000 patients went into US hospitals each year and they didn’t come out alive, and it wasn’t
because they were sick. Their death was a result of medical error. While this report was initially met with
some skepticism, it is now widely recognized and accepted. Unfortunately, there was plenty of evidence
of this epidemic before the IOM report, and there has been plenty since, and yet the industry has been
very slow to make dramatic improvements in patient safety.

In an industry filled with professionals who are drawn to helping others, who go through intense medical training, and who are carefully screened for their positions, why is health care so dangerous? The first thing we must
appreciate is how complex the practice of medicine has become. Even with all the medical research that
has been done over the past 100 years, there is still not a high degree of agreement on what constitutes
best practice. Diagnosis and treatment are often performed under some degree of uncertainty, and
medication monitoring, particularly in the outpatient setting, is quite challenging. For our frontline
practitioner, there are always new medications, new technologies, new procedures, and new research
findings to assimilate. Patients are becoming increasingly complex, and the diversity of the workforce
grows at an increasing rate. Providing safe, reliable care has never been more challenging than today.

One thing that is clear to all of us working on this issue today is that we can’t solve our patient safety
problems by using the same kind of thinking that created them in the first place. To make things right for
every patient every day will require a new way of thinking about error in medicine and a new approach to
preventing harm. This new way begins with a deeper appreciation for error causation and prevention.
Most preventable harm to patients receiving health care today is caused by unsafe acts of the very
practitioners who are trying to help them.

Unsafe acts may be categorized as either human error, such as slips, lapses, and mistakes, or procedural
violations, such as the slow and insidious drift from safe practice over time or a blatant disregard for
important safety rules. Of note, the very same set of mental processes that leads us to make
mistakes outside the workplace, often with little or no consequence to anyone or anything, can be deadly
in health care. For that reason, it’s critical not only to become familiar with the terms describing human
error, but more importantly, to understand and control its sources.

As described by James Reason in his book entitled Human Error, errors occur because of one of two
main types of failures – either actions do not go as intended, or the intended action is the wrong one. The
former situation is a so-called error of execution, and can be further described as either a slip if the
action is observable, or a lapse if it is not. An example of a slip is the inadvertent selection of the wrong
medication from a dropdown menu in an automated medical record. An example of a lapse is forgetting to
implement a physician order in the desired time window. A mistake, on the other hand, is a failure of
planning, i.e., the plan is wrong. This can be either rule-based, because the wrong rule is applied, or knowledge-based, because the clinician does not know the correct course of action. An example of a rule-
based mistake might be getting a diagnosis wrong and so embarking on an inappropriate treatment plan,
based upon a set of rules on how to proceed. Knowledge-based mistakes tend to occur when clinicians
are confronted with what for them might be a new clinical situation. Slips, lapses, and mistakes are all
serious and could potentially harm patients, though again it all depends on the context in which the error
occurs.

James Reason is also clear to point out that the term “error” does not capture all of the ways in which
human beings contribute to major accidents. If to err is human, then one may argue, so too is to violate.
Just observe your own driving habits and those of the other cars on the road, and this notion becomes
very clear. Driving above the speed limit, talking or texting on a cell phone, rolling through stop signs or
lights on red, and tailgating are all violations, and all are alarmingly common. Reason notes that while errors may be defined in relation to the cognitive processes of the individual, violations, or deliberate deviations from safe practice, are defined with regard to a social context in which behavior is governed by things like operating procedures, codes of conduct, values, rules, norms, and the like. Both errors and violations can be and often are present in the same action sequence. I
conceptualize this as the perfect storm of preventable patient harm. Or, if you were still stuck on my
driving example, the perfect storm of automobile accidents.

The major contribution that Reason has given to patient safety, however, is not his categorization of error
types, nor his calling us to study systems in adverse outcomes through an integrated framework of both
cognitive and social psychology, but rather his compelling framework for accident causation in complex
systems, known as the Swiss Cheese Model, and his conceptualization of the three principal stages
involved in development of an organizational accident – that is, organizational factors, local workplace
factors, and finally unsafe acts.

First, we’ll reflect on accident causation and complex systems. Let’s consider the delivery of medicine to
be the inherent hazard of our industry. Ingesting a medication, having surgery, being placed on bypass,
undergoing dialysis, or receiving radiation, for example, all come with significant health benefits, but they
are not risk free. In addition to the hazards inherent to medicine, accident analysis has revealed that the human contribution to adverse outcomes predominates. So, to protect the patient from both the intrinsic
hazard of receiving medical care and the human condition, which includes our propensity to err and to
violate, safeguards must be established to help minimize the risk of preventable and foreseeable injury.
Each of the slices of Swiss cheese in this model might be thought of as embodying those safeguards.
Taking a step back, Reason asserts that the setup for an accident to occur in a system begins with fallible
decisions made by its top-level leaders. These decisions are then transmitted via line management, and
then ultimately to the point of production, or the “point of care” in our industry, where so-called
preconditions or qualities of human behavior and production co-exist, including attributes like level of skill
and knowledge of the workforce, work schedules, technology, equipment, and maintenance programs,
along with the individual and collective attitudes and motivation of the workforce which create its culture.

Ultimately, the commission of unsafe acts, which includes human error, violations, and most often both,
can be prevented or mitigated with a variety of defenses, but often, these defenses have defects in them,
too. In summary, Reason conceptualized the trajectory of accident opportunity as one which begins with what he calls latent failures at managerial levels, proceeds with complex interactions as the impact of management's decisions gets closer and closer to the point of care, and is neither stopped nor mitigated
by one or more levels of defense that were designed to reduce the risk of harm associated with unsafe
acts in the first place. This is the Swiss Cheese Model of accident causation. The Swiss cheese analogy
here is that all of the holes or defects in the various levels of the system align to turn a managerial
deficiency into an adverse outcome, and the challenge for safety professionals is that because that
alignment occurs so infrequently, it’s difficult to get the organization to attend to the risks -- those that are
active and present every day, such as not washing one’s hands, and those that are latent or lie dormant
in the system, such as the choice of where to place hand hygiene dispensers in the first place.
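
To see why layered defenses are so effective, and why the holes so rarely line up, a back-of-the-envelope calculation helps. The sketch below is purely illustrative and is not part of Reason's model itself: the layer names, their failure probabilities, and the assumption that the layers fail independently are all invented for the example.

```python
# Illustrative arithmetic behind the Swiss Cheese Model (assumed numbers).
# If each defensive layer independently misses a hazard with some probability,
# harm requires every layer's "holes" to line up at once.

layers = {
    "prescriber double-check": 0.05,   # assumed probability the layer misses
    "pharmacist review": 0.05,
    "barcode scan at the bedside": 0.02,
    "nurse's independent check": 0.10,
}

p_all_holes_align = 1.0
for name, p_miss in layers.items():
    p_all_holes_align *= p_miss

print(f"Chance every layer misses: {p_all_holes_align:.6f}")  # 0.000005
# About 5 in a million hazards slip through, which is rare enough that, as
# noted above, organizations struggle to keep attending to the risk.
```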

In addition to Reason’s contribution to safety science with the Swiss cheese model of accident causation,
he used a simple triangular shape to present the principal stages involved in the development of an
organizational accident. This model has three levels or sections. At the top of the triangle is the person or
unsafe act at what he called the sharp end of care – it’s where the rubber meets the road. The local
workplace or error- and violation-producing conditions that impact the worker’s behaviors are in the
middle of the model, and the organization itself is represented as the base or foundation upon which all of
this rests. It is sometimes called the blunt end. This simple structure links the various contributing elements
of an adverse outcome into a coherent sequence that runs bottom up in causation and top down in
investigation.

Other modules in this program will talk more about other hazard and incident investigation methods, such
as failure modes and effect analysis, or FMEA, and root cause analysis, or RCA, as well as review how
you might better leverage culture, teamwork, and communications to improve performance. The
remainder of this module will examine the discipline of human factors engineering and evaluate
performance improvement models, such as the Model for Improvement, LEAN, and Six Sigma, for the
purposes of applying these principles to improving the safety and reliability of care.

In his landmark article “Error in medicine,” published in JAMA in 1994, Dr. Lucian Leape concluded that
errors were common, their causes were relatively well known, and while many errors were caused by processes that relied on weak aspects of cognition, system failures were at the root of most medical errors.
The way that I’ve interpreted this statement is that errors are events whose causes can be identified and mitigated. That means when you review a close call or a preventable adverse outcome and you conclude
that the cause of the event was human error, you have failed the patient and family as much as the
individual involved in the event. The error wasn’t the cause. The error was an event whose causes can be
identified and mitigated. Furthermore, once an error occurs and reaches the patient, reliable systems have methods in place to detect and further mitigate it before it causes significant harm. That
means our plan for building reliable systems includes a plan for handling how they would predictably and
reasonably fail and what our systems are going to do in response.

A cornerstone of building safer, more reliable systems is embedded in an understanding of human factors
engineering. Human factors engineering focuses on human beings and their interaction with each other,
products, equipment, procedures, and the environment. It leverages what we know about human
behavior, ability, limitations, and other characteristics to ensure safer, more reliable outcomes. So what do
we know about human behavior, ability, and limitations that we must account for to make our systems
safer and more reliable?

Well, we know that inadequate sleep is a major factor in human error. For example,
employees are more likely to commit human error between 12:00 AM and 6:00 AM even when they have
slept during the day, and studies have shown that 24 hours of sleep deprivation impacts human
performance as much as a blood alcohol level of 0.1 might. So you can deliver a baby, make a difficult
diagnosis, or perform a surgical procedure after being up for 24 hours straight, but if you were driving a
car with a blood alcohol level of 0.1, you’d be legally drunk. We know that while stress may improve our
performance in some areas, it degrades it in others. Studies have shown that stress usually has the
largest impact on knowledge workers, which would, of course, be our health care workers. And health
care workers today are seemingly under a lot of stress.

We know that in well-practiced, highly automatic tasks, unexpected interruptions are frequently
associated with omission errors, either because some unrelated action is unconsciously counted in as
part of a task sequence, or because the interruption causes the individual to lose her place on the
resumption of her task. And, as you know from your own personal experience, errors of omission, such as forgetting your grocery list when you go to the store, are much harder to detect and often are not caught until it’s too late. We know that we often remind ourselves or tell our staff to pay more attention, to slow down, and to be
more vigilant, but we also know that research on vigilance or the ability to sustain attention and react to
stimuli over time has shown that performance declines within only a short time after monitoring begins.
And, oh, by the way, we’ve known that since the 1940s.

And, finally, we’ve known for decades that memory is a very weak aspect of human cognition, and in
particular working or short-term memory is extremely limited in its capacity. And so problem solvers are
prone to lose items in what’s called their stack. Cognitive psychologists have shown that the number of
discrete items of information human beings can hold in short-term memory is about seven, plus or minus
two. Humans compensate for the limited size of their working set by selective ignorance and what’s called
chunking – that may be efficient and may help us get to the top of the food chain, but it’s clearly not safe.
And yet there’s very little a human can do about it. It requires a system.

As a patient safety leader, you need to recognize what human factors engineers know. We can’t change
the human condition, but we can change the conditions under which humans operate. The good news is
that you don’t have to be a human factors engineer to help create a safer and more reliable system. You
do, however, have to have an appreciation of the following error-reducing principles that are specifically
designed to address human factors challenges, and some familiarity with integrating them into care
process improvement models, such as the Toyota production system or LEAN methodology, the Model
for Improvement, and Six Sigma.

Let’s talk first about the error-reduction principles that patient safety leaders must become comfortable
with. This list of 11 is not meant to be all inclusive, but it’s clearly a good start. Number one, simplification.
This is simplifying the structure of tasks to help minimize the load on global cognitive processes like
working memory and to reduce waste. Number two, standardization. This is about creating standard work, equipment, and materials to better assess and improve outcomes, orient and train new staff, and improve ease of use. Standardization also requires us to clarify roles and responsibilities regarding both routine and infrequent tasks so that everyone is absolutely clear about who is accountable for what, and how, where, and when we expect those accountabilities to be carried out. If it’s good for nuclear power plant operations,
commercial aviation, and other high-reliability organizations, it’s good for us.

Number three, avoiding reliance on memory to minimize the risk of slips and lapses that we spoke about
earlier in this module. Number four, improving access to information and intelligent decision support to
improve efficiencies, reduce reliance on memory, and minimize mistakes by improving decision making.
Number five, taking advantage of habits and patterns to make it easy to do the right thing and to minimize
the risk associated with shortcuts, workarounds, and drift from safe practice. Number six, exploiting the
power of constraints and forcing functions to make it hard to do the wrong thing (a brief illustration of this principle follows the list).

Number seven, using visual controls to shape desired behavior. Number eight, promoting effective team
functioning through enhanced strategies, like simulation and development of communication skills to
better leverage existing resources, minimize errors, and recover from harm. High-performing teams also
hold each other accountable and are more capable in confronting each other’s violations and drift from
safe practice in the interest of patient safety. Number nine, deploying redundancies where appropriate, to
capture and mitigate an error before it has the chance to do harm. Number ten, eliminating environmental
factors that degrade performance, such as clutter, inadequate lighting, excessive noise, and needless interruptions and distractions. And finally, number eleven, creating systems that are better able to tolerate the occurrence of errors and contain their damaging effects when they do occur – this includes making the actions and outcomes of practitioners more visible through strategies like redundancies, and using substitution and other mitigating strategies to lessen the impact of error.
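
Principle six, constraints and forcing functions, lends itself to a concrete sketch. The example below is hypothetical and not taken from the lecture; it simply shows how restricting an order to a fixed set of allowed values keeps an invalid entry from ever reaching the downstream process, making it hard to do the wrong thing rather than relying on vigilance to catch it.

```python
# Hypothetical forcing-function sketch: constrain the input so that invalid
# choices cannot be expressed in the first place.
from enum import Enum

class Route(Enum):
    ORAL = "oral"
    IV = "intravenous"
    IM = "intramuscular"

def order_medication(drug: str, dose_mg: float, route: Route) -> str:
    # The restricted Route type is the forcing function: a free-text route
    # or a typo never makes it into the order.
    if not isinstance(route, Route):
        raise TypeError("Route must be one of the predefined Route values.")
    if dose_mg <= 0:
        raise ValueError("Dose must be a positive number of milligrams.")
    return f"{drug} {dose_mg} mg via {route.value}"

print(order_medication("ondansetron", 4, Route.IV))
```

A physical analogue is the dedicated connector that only fits the correct gas line on an anesthesia machine; the principle is the same whether it is implemented in hardware, software, or workflow.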

When each practitioner takes personal accountability for what we called in the Navy “their own fitness for duty,” or their mental, emotional, and physical readiness to do their job, and their organization shifts the
focus from blaming individuals for past errors to preventing future errors, systems can be made safer and
better patient outcomes will follow. This does not mean that individuals can be careless. People must still
be mindful and held responsible for the choices they make. But when an error occurs, blaming an
individual does little to make the system safer and prevent someone else from committing the same error
again.

So how do we make systems safer? Well, I believe it’s a combination of three things – the first is
recruiting, developing, and retaining the best and brightest providers and creating a learning system
that continually sharpens their technical and cognitive skills. The second is through creating a culture of
safety that includes placing a premium on teamwork, communication, and adherence to safe practice,
and the third is through the design and sustainment of standard work and reliable work flows. Any
organization that leads with a core value of patient- and family-centered care and has these three
elements in motion is an organization that is making systems safer. Our focus in this module continues to be on the element of design and sustainment of standard work and reliable work flows. We will focus the
remaining time together on the importance of integrating the error-reducing principles we just reviewed,
with several care process improvement models.

Armed with an understanding of human error and deviation from safe practice, theories of accident
causation, management accountability, and error-reduction principles, you now better appreciate the
challenge of measurably improving the safety and reliability of an existing system. Taking a system as is
and making it better requires, interestingly enough, a systemic approach, and as we began this module
with the notion of systems, that is where we will end. While there are many ways to structure the
improvement process, the final portion of this module will review three systemic approaches. The first is
the Model for Improvement. The second is the LEAN methodology. The third is Six Sigma.

The Model for Improvement was developed by Associates in Process Improvement and is recommended by the Institute for Healthcare Improvement, or IHI. It is a straightforward and effective tool for accelerating change and consists of two main components. The first component is framed by asking three simple
questions: What are we trying to accomplish? How will we know a change is an improvement? And what
changes can we make that will result in an improvement? Another way of saying this is: What is our aim,
what will we measure, and what will we do to improve performance?

The second component of the Model for Improvement is integration of the plan-do-study-act cycle to test
and implement the identified changes. It is utilization of the PDSA cycle that guides the change process
and integrates the measurement piece to know whether or not our changes are an improvement. Now,
linking this model back to what we previously talked about in this module, the changes that are most likely
to lead to an improvement are those that are tightly coupled to the causes of the current state of
performance, and those that are strong by their very nature. The strongest changes illuminate the
identified hazards completely. Moderate changes control the hazard in some way, making it easy to do
the right thing and hard to do the wrong thing. And the weakest changes rely on things like education,
vigilance, the intent of the workforce, or changes in policy without redesigning the workflow.

Changes can be tested on a small scale, using the PDSA cycle. The notion here is that you spend less
time meeting and talking about what will work or not and more time experimenting, evaluating, and
learning what does work. When you’ve done a good job identifying the underlying causes of a problem,
and you link strong corrective action directly to those causes, so-called small tests inherent to PDSA
cycles will allow you to see what will work or not in operation. If this process is used after a preventable
adverse outcome, the measures you develop must give management a good sense that the risk of
repeating a similar type of event has been significantly reduced. If your measurement can’t do that,
management should be reluctant to sign off on the corrective action plan. After testing a change on a
small scale, and refining it through several PDSA cycles, the team can implement the change on a
broader scale, including spreading it to other areas where applicable.
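
For readers who like to see the logic written out, here is a minimal, hypothetical sketch of the Model for Improvement in miniature: an aim, a measure, and small PDSA tests whose study step compares what was observed against the baseline. The scenario, the baseline, and the numbers are invented for illustration and are not from the lecture.

```python
# Illustrative PDSA loop (invented scenario and data).
# Aim: raise hand-hygiene compliance on one unit. Measure: observed compliance.

baseline = 0.62  # assumed baseline compliance before any change

def pdsa_cycle(change: str, predicted: float, observed: float) -> bool:
    """Study step: compare the observed measure with the baseline."""
    improved = observed > baseline
    verdict = "adopt or adapt" if improved else "abandon"
    print(f"{change}: predicted {predicted:.0%}, observed {observed:.0%} -> {verdict}")
    return improved

# Plan/Do: test one small change at a time, on a small scale.
pdsa_cycle("move gel dispensers to the doorway", predicted=0.75, observed=0.78)
pdsa_cycle("add reminder posters only", predicted=0.70, observed=0.64)
# Consistent with the hierarchy above: the workflow redesign (a stronger
# change) outperformed the change that relied on vigilance alone.
```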

The second model we will look at is the Toyota Production System or LEAN management method, which
provides a range of techniques that create a more efficient workplace. The intent is to create smooth
patient flow and workflow and eliminate waste in time, effort, or resources to provide services the customer
finds of value. In addition, the LEAN management method values continuous improvement in operational
processes, and promotes a productive culture across an enterprise. I believe the LEAN methodology can
be an effective approach for advancing patient safety through process design because it is grounded in
standard work, which is a cornerstone of all high-reliability organizations; mitigation, which recognizes that errors will continue to occur but that harm can nevertheless be dramatically reduced; and continuous improvement, which is aligned with any organization whose core value includes patient- and family-centered care, that is, a commitment to the notion that we can always do better for our patients and their families.

For organizations such as mine that are not committed to any one particular model, we find that the LEAN
method is particularly effective when a system lacks standardization and there are obvious opportunities
to reduce waste. Like the Model for Improvement, LEAN utilizes quick, iterative tests of change or
experiments that can allow health care providers to improve processes, eliminate waste, reduce
ambiguity, and solve problems in a relatively rapid fashion. When these are the underlying causes of
human error, LEAN offers a useful approach to improving performance by addressing the activities or
conditions that pose a threat of harm. Another module in this coursework will describe in detail how the
LEAN management method has transformed the Virginia Mason Healthcare Delivery System in Seattle,
Washington.

The third and final structured approach to improving performance discussed in this module is Six Sigma.
Six Sigma is a multifaceted performance improvement strategy that focuses on making every step in a
process as reliable as it can be by identifying and reducing defects. You might think of a defect as
anything that leads to customer dissatisfaction. To eliminate defects, staff follow a particular sequence of
steps to achieve and measure a desired result. This sequence is called DMAIC, which stands for Define,
Measure, Analyze, Improve, and Control. Due to the complexity of this process, the effort is often led by
an expert or “black belt” who is trained and certified in Six Sigma’s operational excellence ideas. From a
patient safety perspective, Six Sigma may be most helpful when high levels of reliability are critical to
patient safety and the margin for error is small. One of the things that I find very helpful about Six Sigma
is its emphasis on defining the problem. All too often in health care, our workforce is off and running
solving a problem it really doesn’t understand. Patient safety is complex and problem definition is key.
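
Six Sigma’s focus on defects can be made tangible with the standard defects-per-million-opportunities, or DPMO, calculation. The sketch below uses invented counts; only the DPMO formula and the commonly quoted benchmark of roughly 3.4 DPMO for Six Sigma performance are standard.

```python
# Illustrative DPMO (defects per million opportunities) calculation.
# Assumed numbers: 10,000 medication orders audited, 3 defect opportunities
# per order (wrong drug, wrong dose, wrong patient), 87 defects found.

orders = 10_000
opportunities_per_order = 3
defects = 87

dpmo = defects / (orders * opportunities_per_order) * 1_000_000
print(f"DPMO: {dpmo:,.0f}")  # 2,900 defects per million opportunities

# For comparison, "Six Sigma" performance is conventionally about 3.4 DPMO,
# which is why the approach suits steps where the margin for error is small.
```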

While each of the aforementioned models has its differences, what you should remember is that safe,
reliable systems typically share the following core attributes: well-defined workflows with clear roles and responsibilities; mistake-proofing principles to include standardization and simplification, and, when
needed, redundancies; a measurement strategy to communicate performance over time; and finally, team
members who share collective responsibility for one thing – safe, reliable care.

Let’s return to the three cases that we used to kick this module off. As you are listening to them again,
remember that we noted while all three of these cases were quite different, they also had a lot in
common. This time, what I want you to think about is whether one of the things they have in common is that they are missing one or more of the core attributes of safe, reliable systems we just outlined.

In the first case, a 35-year-old male was seen for his first visit with a new provider and to seek a referral to
ophthalmology. At the end of the visit, the provider was reviewing lab results for a different patient, which
showed positive fecal blood and anemia, suggestive of an ulcer or cancer, and he mistakenly entered a
GI referral for a colonoscopy on this patient. The referral stated that the patient had an occult-positive
stool and mild anemia. The patient took off work and prepared himself for the procedure. The day after
his colonoscopy, the patient called the provider who made the referral and asked why he was tested for
blood in his stool and if he really needed the procedure in the first place. It was at this time identified that
the patient had undergone an unnecessary procedure, and that it had in fact been ordered on the wrong
patient.

In the second case, a 70-year-old female underwent outpatient surgery for stress incontinence, failing
bladder, and a prolapsed rectum. Vaginal packing was placed during the surgery and accurately
accounted for in the operating room. The patient went to the recovery area and was to be discharged
directly home from there. Her doctor wrote an order to remove the vaginal packing prior to her discharge,
but he did not communicate to the patient or her daughter who was waiting. The RN assigned to the
patient was a male. He was working with a female nurse who offered to remove the patient’s foley
catheter and vaginal packing. She removed the foley but got interrupted and forgot to remove the
packing. The male nurse assigned to the patient assumed that his colleague had removed the packing
and did not double check this. The patient followed up with her primary care provider seven days later
because of vaginal pain and gauze protruding from her vagina.

And, finally, in the third case, at a mid-sized hospital with a nearby primary care office, a 52-year-old with
a history of GI bleeds and ulcers as a result of taking NSAIDs was mistakenly prescribed an NSAID,
developed a GI bleed, and had to be admitted.

While more information about each case would be helpful, I’m guessing that you might suspect that what was common in these cases is that one or more of the following attributes was missing: well-defined workflows
with clear roles and responsibilities; mistake-proofing principles to include standardization and
simplification and, when needed, redundancies; a measurement strategy to communicate performance
over time; and team members who shared collective responsibility for one thing – safe, reliable care.

It’s been said that every system is perfectly designed to achieve the results it gets, and ours are no
different. When the types of adverse outcomes that we just reviewed occur, it’s not necessarily by
accident, it’s by design. In this case, poor design. With that assertion in mind, I’d like to leave you with
one final thought: your role as a leader in patient safety includes improving outcomes through systems
thinking and design. You don’t have to be a human factors engineer or a systems safety expert, but you
do have to believe that to err is human, that vigilance and hard work are really important but not sufficient,
and that by applying your knowledge of safety science to practice, your organization can and will create a
safer, more reliable care experience for your patients.

This module provided you a foundation from which to do exactly that. With an understanding of the
materials and concepts in the other modules in this course, you should be well-positioned to help improve
the safety and reliability of care everywhere, every day, for every patient. On behalf of the National
Patient Safety Foundation, thank you for participating in this course, and for the job you do to improve
patient care. It is incredibly important and meaningful work. And while you may never get a thanks from a
patient for helping to prevent something that didn’t occur, I hope that it brings you joy knowing that your
contribution makes a meaningful difference both in their lives and the lives of the practitioners who care
for them.
