Health Care Quality Management Tools and Applications

Background

The pursuit of enhanced quality and safety is a fundamental imperative across the entire healthcare spectrum.1, 2 High-quality healthcare is characterized by “the extent to which health services for individuals and populations improve the likelihood of desired health outcomes and align with current professional understanding”3 (p. 1161). The pivotal Institute of Medicine (IOM) report, To Err Is Human,4 highlighted that most medical errors stem from systemic flaws rather than individual negligence. Inefficiencies, process variations, evolving patient demographics, insurance complexities, diverse provider backgrounds, and numerous other elements contribute to the intricate nature of healthcare. The IOM underscored that the healthcare industry operates below its potential and outlined six core aims for healthcare improvement: effectiveness, safety, patient-centeredness, timeliness, efficiency, and equity.2 Effectiveness and safety are specifically addressed through process-of-care measures, which evaluate whether healthcare providers utilize procedures proven to achieve desired outcomes and avoid harmful practices. The core objectives of healthcare quality measurement are to ascertain the impact of healthcare on intended results and to evaluate adherence to evidence-based processes or professional consensus, while respecting patient preferences.

Systemic or process failures are the root causes of errors.5 Therefore, adopting process-improvement methodologies is crucial for pinpointing inefficiencies, ineffective care delivery, and preventable errors, ultimately driving system-level changes. These methodologies invariably involve performance assessment and the application of findings to inform improvement. This chapter will explore various quality improvement strategies and tools—including Failure Mode and Effects Analysis (FMEA), Plan-Do-Study-Act (PDSA), Six Sigma, Lean, and Root Cause Analysis (RCA)—that have been instrumental in enhancing healthcare quality and safety.

Measures and Benchmarks

Measuring improvement efforts is essential to validate their effectiveness. This measurement should demonstrate: “(1) progress towards desired primary outcomes, (2) any unintended consequences in other system areas, and (3) the need for further adjustments to maintain processes within acceptable limits”6 (p. 735). The fundamental principle behind quality improvement measurement is that superior performance reflects high-quality practice, and performance comparisons among providers and organizations foster a drive for enhancement. Recent years have witnessed a significant increase in the measurement and reporting of healthcare system and process performance.1, 7–9 Public reporting of quality performance can pinpoint areas needing improvement and establish national, state, or other benchmarks.10, 11 However, some providers express concerns about publicly available comparative performance data.12 Consumers, another target audience for these reports, often struggle to interpret the data effectively, limiting the reports’ intended impact on informed decisions for higher-quality care.13–15

The inherent complexity of healthcare systems and service delivery, the unpredictable nature of health conditions, and the specialized and interdependent roles of clinicians and systems16–19 make quality measurement a challenging endeavor. One key challenge is accounting for the variability in cognitive reasoning, discretionary decision-making, problem-solving, and experiential knowledge inherent in healthcare practice.20–22 Another measurement hurdle is discerning whether a near miss had the potential for harm, or whether an adverse event was an isolated incident or indicative of a recurring risk.23

Leading organizations like the Agency for Healthcare Research and Quality (AHRQ), the National Quality Forum, and The Joint Commission advocate for using reliable and valid quality and patient safety measures to drive healthcare improvement. Resources such as AHRQ’s National Quality Measures Clearinghouse (http://www.qualitymeasures.ahrq.gov) and the National Quality Forum’s website (http://www.qualityforum.org) offer numerous measures applicable across diverse care settings and processes. These measures are typically developed through a rigorous process involving evidence assessment from peer-reviewed literature, validity and reliability evaluations, optimal usage determination (e.g., risk adjustment needs), and practical testing.24, 25

Quality and safety measures are instrumental in tracking the progress of improvement initiatives against external benchmarks. Healthcare benchmarking is defined as the ongoing, collaborative process of measuring and comparing key work process outcomes against top performers26 to evaluate organizational performance. Two primary benchmarking types are relevant: internal and external. Internal benchmarking identifies best practices within an organization, compares practices across different departments, and tracks performance trends over time. Data can be visualized using control charts with statistically defined limits. However, internal benchmarking alone may not reflect external best practices. Competitive or external benchmarking involves using comparative data from other organizations to assess performance and identify successful improvement strategies implemented elsewhere. Comparative data is available from national sources like AHRQ’s National Health Care Quality Report1 and National Healthcare Disparities Report,9 and proprietary benchmarking entities such as the American Nurses Association’s National Database of Nursing Quality Indicators.
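To make the internal-benchmarking idea concrete, the sketch below computes a centerline and control limits for a tracked indicator. This is a simplified illustration in Python: it uses the sample standard deviation rather than the moving-range method of a formal statistical process control (XmR) chart, and the indicator and data are hypothetical.

```python
import statistics

def control_limits(samples, sigma_level=3):
    """Centerline and control limits for a simple individuals-style chart.

    `samples` is a list of periodic measurements of a quality indicator,
    e.g., monthly fall rates per 1,000 patient-days.
    """
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return {
        "centerline": mean,
        "upper_control_limit": mean + sigma_level * sd,
        "lower_control_limit": max(0.0, mean - sigma_level * sd),  # rates cannot fall below zero
    }

# Hypothetical example: monthly medication-error rates per 1,000 doses
rates = [3.1, 2.8, 3.5, 2.9, 3.3, 4.0, 2.7, 3.2]
print(control_limits(rates))  # points outside the limits signal special-cause variation
```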

Quality Improvement Strategies

Over four decades ago, Donabedian27 proposed a framework for measuring healthcare quality by assessing structure, processes, and outcomes. Structure measures evaluate resource accessibility, availability, and quality, encompassing elements like health insurance coverage, hospital bed capacity, and the proportion of nurses with advanced certifications. Process measures assess the delivery of healthcare services by clinicians and providers, such as adherence to clinical guidelines for diabetes management. Outcome measures reflect the ultimate impact of healthcare, influenced by environmental and behavioral factors, and include metrics like mortality rates, patient satisfaction levels, and overall health status improvement.

Two decades later, healthcare leaders drew inspiration from Deming’s work28 in revitalizing Japanese manufacturing post-World War II. Deming, the originator of Total Quality Management (TQM), championed “constancy of purpose” alongside systematic analysis and measurement of process steps in relation to capacity or outcomes. TQM is an organizational philosophy emphasizing organizational management, teamwork, defined processes, systems thinking, and a culture of change to cultivate an environment conducive to continuous improvement. This approach emphasizes that organizational-wide commitment to quality and improvement is essential for optimal results.29

In healthcare, continuous quality improvement (CQI) is often used interchangeably with TQM. CQI is a methodology for refining clinical practice30 rooted in the principle that every process and every interaction offers opportunities for enhancement.31 Many hospital quality assurance (QA) programs typically focus on issues highlighted by regulatory or accreditation bodies, such as documentation reviews, oversight committee evaluations, and credentialing process audits.32 Clinical practice improvement (CPI), as described by Horn and colleagues, is another strategy, defined as a “multidimensional outcomes methodology directly applicable to individual patient clinical management”33 (p. 160). CPI, a clinician-led approach, aims for a holistic understanding of healthcare delivery complexities, employing teams, defining objectives, gathering data, evaluating findings, and translating insights into practice modifications. Management and clinician dedication and engagement are consistently identified as crucial for successful change implementation across these models.34–36 Other quality improvement strategies emphasize the need for management to demonstrate project confidence, communicate purpose clearly, and empower staff.37

Over the past two decades, quality improvement methods have emphasized “identifying processes with suboptimal outcomes, measuring key performance indicators, employing rigorous analysis to devise new approaches, integrating redesigned processes, and reassessing performance to confirm improvement success”38 (p. 9). Beyond TQM, other quality improvement frameworks have emerged, including ISO 9000, Zero Defects, Six Sigma, Baldrige, and Toyota Production System/Lean Production.6, 39, 40

Quality improvement is defined as “systematic, data-driven activities designed to achieve immediate enhancements in healthcare delivery within specific settings”41 (p. 667). A quality improvement strategy is “any intervention aimed at narrowing the quality gap for patient populations representative of routine clinical practice”38 (p. 13). Shojania and colleagues38 developed a taxonomy of quality improvement strategies (refer to Table 1), suggesting that the selection of a specific strategy and methodology should be tailored to the unique requirements of each quality improvement project. A wealth of additional quality improvement tools and strategies are accessible through AHRQ’s quality tools website (www.qualitytools.ahrq.gov) and patient safety website (www.patientsafety.gov).

Quality improvement projects and strategies differ from research endeavors. While research seeks to evaluate and address problems to generate broadly applicable results, quality improvement projects may involve smaller sample sizes, frequent intervention adjustments, and the adoption of promising strategies as they emerge.6 Reinhardt and Ray’s literature review on the distinctions between quality improvement and research42 proposed four differentiating criteria: (1) quality improvement applies research findings to practice, whereas research develops new interventions; (2) quality improvement poses minimal risk to participants, while research may involve participant risks; (3) quality improvement primarily benefits the organization, with potentially limited generalizability, while research aims for broad applicability across similar organizations; and (4) quality improvement data is organization-specific, while research data originates from multiple organizations.

Limited scientific literature on health services has hindered the widespread acceptance of quality improvement methods in healthcare,43, 44 but rigorous studies are increasingly emerging. A quality improvement project can be considered more research-like when it involves practice changes, affects patient outcomes, employs randomization or blinding, and exposes patients to additional risks or burdens—all in pursuit of generalizable knowledge.45–47 Regardless of its classification, any project involving human subjects must prioritize participant protection through respect, informed consent, and scientific rigor.41, 46, 48

Plan-Do-Study-Act (PDSA)

The Plan-Do-Study-Act (PDSA) model is a valuable framework for quality improvement projects and studies aimed at driving positive changes in healthcare processes and achieving favorable outcomes. This methodology, extensively utilized by the Institute for Healthcare Improvement for rapid cycle improvement,31, 49 is characterized by its cyclical approach to implementing and evaluating change. PDSA is most effective when applied in small, frequent cycles rather than large, infrequent ones,50 allowing for iterative refinement before system-wide implementation.31, 51

PDSA-driven quality improvement seeks to establish a functional or causal link between process modifications (specifically behaviors and capabilities) and desired outcomes. Langley and colleagues51 recommend asking three key questions before initiating PDSA cycles: (1) What is the project’s aim? (2) How will success be measured? and (3) What actions will be taken to achieve the aim? The PDSA cycle begins with defining the problem’s nature and scope, identifying potential and necessary changes, outlining a specific change plan, determining stakeholder involvement, selecting metrics to assess change impact, and defining the target implementation area. Change is then implemented, and data and information are collected. The implementation study results are analyzed and interpreted by reviewing key success or failure indicators. Finally, action is taken based on the results – either fully implementing the change or restarting the cycle for further refinement.51
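As an illustration only, the following Python sketch models the iterative PDSA logic described above: small cycles of implementing a change, measuring the result, and either adopting the change or refining it in another cycle. All names and the hand-hygiene scenario are hypothetical.

```python
import random
from dataclasses import dataclass
from typing import Callable

@dataclass
class PDSACycle:
    aim: str                      # (1) What is the project's aim?
    measure: Callable[[], float]  # (2) How will success be measured?
    change: Callable[[], None]    # (3) What action will be taken?
    target: float

def run_pdsa(cycle: PDSACycle, max_cycles: int = 5) -> bool:
    """Iterate small Plan-Do-Study-Act cycles until the target is met."""
    for n in range(1, max_cycles + 1):
        cycle.change()            # Do: implement the change on a small scale
        result = cycle.measure()  # Study: collect and analyze the data
        print(f"Cycle {n}: {cycle.aim} -> measured {result:.2f}")
        if result >= cycle.target:
            return True           # Act: adopt the change and spread it
        # Act: otherwise refine the plan and start the next cycle
    return False

# Hypothetical scenario: raise hand-hygiene compliance to 90%
compliance = 0.72
def improve():                    # e.g., place dispensers at every doorway
    global compliance
    compliance = min(1.0, compliance + random.uniform(0.03, 0.08))

cycle = PDSACycle(aim="hand-hygiene compliance", measure=lambda: compliance,
                  change=improve, target=0.90)
print("target met:", run_pdsa(cycle))
```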

Six Sigma

Six Sigma, initially a business strategy, focuses on process improvement, design, and monitoring to minimize waste, enhance satisfaction, and improve financial stability.52 Process performance, or process capability, serves as the metric for improvement, comparing baseline capability (pre-improvement) to post-improvement capability after piloting potential solutions.53 Two primary Six Sigma methods exist. One method evaluates process outcomes by counting defects, calculating defects per million opportunities (DPMO), and using statistical tables to convert DPMO to a sigma (σ) metric. This applies to pre-analytical and post-analytical processes (pre-test and post-test phases). The second method predicts process performance by estimating process variation and calculating a σ metric based on defined tolerance limits and observed process variation. This is suited for analytical processes where precision and accuracy can be experimentally determined.
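The first method described above can be expressed directly in code. The sketch below computes DPMO and converts it to a sigma metric via the inverse normal CDF plus the conventional 1.5-sigma shift, which is what the published sigma tables tabulate; the specimen-labeling example is hypothetical.

```python
from statistics import NormalDist

def dpmo(defects: int, units: int, opportunities_per_unit: int) -> float:
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_metric(dpmo_value: float, shift: float = 1.5) -> float:
    """Convert DPMO to a short-term sigma level using the inverse normal CDF
    and the conventional 1.5-sigma shift."""
    return NormalDist().inv_cdf(1 - dpmo_value / 1_000_000) + shift

# Hypothetical example: 15 mislabeled specimens in 10,000 draws, 1 opportunity each
d = dpmo(defects=15, units=10_000, opportunities_per_unit=1)
print(f"DPMO = {d:.0f}, sigma = {sigma_metric(d):.2f}")  # DPMO = 1500, sigma ≈ 4.47
```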

A core component of Six Sigma is the structured, disciplined, and rigorous five-phase DMAIC (Define, Measure, Analyze, Improve, and Control) approach.53, 54 It starts with project identification, historical data review, and scope definition. Next, continuous total quality performance standards are selected, performance objectives are defined, and variability sources are identified. As the project progresses, data is collected to assess process improvements. Validated measures are developed to determine the capability of the new process, supporting data analysis.

Six Sigma and PDSA are interconnected. DMAIC builds upon Shewhart’s Plan, Do, Check, Act cycle.55 PDSA phases align with Six Sigma elements: PDSA’s “Plan” phase corresponds to Six Sigma’s “Define” (core processes, key customers, requirements); PDSA’s “Do” to Six Sigma’s “Measure” (performance); PDSA’s “Study” to Six Sigma’s “Analyze”; and PDSA’s “Act” to Six Sigma’s “Improve and Control”.56

Toyota Production System/Lean Production System

The Toyota Production System, initially applied in Toyota car manufacturing,57 evolved into the Lean Production System, or Lean methodology. While overlapping with Six Sigma, Lean focuses on customer needs and process improvement by eliminating non-value-added activities (waste). Lean methodology involves maximizing value-added activities in optimal sequences for continuous operations.58 Root cause analysis is essential for investigating errors, improving quality, and preventing recurrences in this system.

Physicians, nurses, technicians, and managers are increasingly applying Lean principles to enhance patient care effectiveness and reduce costs in pathology labs, pharmacies,59–61 and blood banks.61 Reviews of Toyota Production System projects in healthcare reveal improvements in patient safety and care quality through systematic problem definition, root cause analysis, goal setting, ambiguity and workaround elimination, and responsibility clarification. Project teams developed action plans to improve, simplify, and redesign work processes.59, 60 Spear highlights that the Toyota Production System method clarifies “which patient gets which procedure (output); who does which aspect of the job (responsibility); exactly which signals indicate work initiation (connection); and precisely how each step is performed”60 (p. 84).

Successful Lean implementation in healthcare involves eliminating unnecessary daily activities linked to “overcomplicated processes, workarounds, and rework”59 (p. 234), engaging frontline staff, and rigorously tracking problems throughout the problem-solving process.

Root Cause Analysis

Root cause analysis (RCA), extensively used in engineering62 and similar to the critical incident technique,63 is a structured investigation and problem-solving approach focused on identifying the fundamental causes of events and of potential events that were intercepted (near misses). The Joint Commission mandates RCA for all sentinel events, expecting organizations to develop and implement action plans based on RCA findings to mitigate future risks and monitor improvement effectiveness.64

RCA is a technique for identifying trends and assessing risk when human error is suspected,65 recognizing that system factors, not individuals, are typically the root cause of problems.2, 4 Critical incident technique follows a similar approach, collecting information on event causes and contributing actions post-event.63

RCA is a reactive assessment initiated after an event, retrospectively outlining the sequence of events, charting causal factors, and identifying root causes for thorough event examination.66 Being labor-intensive, RCA ideally involves a multidisciplinary team trained in RCA, triangulating findings and enhancing validity.67 Aggregate RCA, used by the Veterans Affairs (VA) Health System, efficiently utilizes staff time by focusing on trend assessment across multiple simultaneous RCAs, rather than in-depth case assessments.68

RCA, a qualitative process, aims to uncover error root causes by examining enabling factors (e.g., lack of training), latent conditions (e.g., failing to check patient ID bands), and situational factors (e.g., two patients with the same last name) contributing to adverse events (e.g., adverse drug events). Investigators ask questions like: what happened, why, what were proximate causes, why did those factors occur, and what underlying systems and processes exist? Answers help identify ineffective safety barriers and problem causes, enabling future prevention. Considering immediate pre-event factors is crucial, as remote factors may also contribute.68

The final RCA step involves developing system and process improvement recommendations based on investigation findings.68 This is critical, as literature reviews suggest RCA alone may not improve patient safety.69 The VA’s aggregate RCA process is a nontraditional strategy, examining multiple cases simultaneously for specific event categories.68, 70

Given the diverse nature of adverse events and error root causes, differentiating system from process factors without individual blame is essential. Errors rarely stem from irresponsibility, neglect, or intent,71 as supported by the IOM.4, 72 Categorizing individual errors, like using the Taxonomy of Error Root Cause Analysis of Practice Responsibility (TERCAP),73 might distract from investigating modifiable system and process factors. Even individual factors can often be addressed through education, training, and error-prevention mechanisms.

Failure Modes and Effects Analysis

Errors are inevitable and unpredictable. Failure modes and effects analysis (FMEA) is a proactive evaluation technique used to identify and eliminate potential failures, problems, and errors in systems, designs, processes, and services before they occur.74–76 Developed for the U.S. military and used by NASA, FMEA predicts and evaluates potential failures and hazards (e.g., probabilistic occurrences) and proactively identifies process steps to reduce or eliminate future failures.77 FMEA aims to prevent errors by identifying potential failure modes, estimating their probability and consequences, and implementing preventative actions. In healthcare, FMEA focuses on care systems and uses multidisciplinary teams to evaluate processes from a quality improvement perspective.
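A common way to operationalize FMEA's step of estimating probability and consequences is the traditional risk priority number (RPN): severity × occurrence × detection, each rated on a 1–10 scale. The sketch below is illustrative only; the failure modes and scores are hypothetical, and the RPN scheme shown is the classic industrial form rather than any particular healthcare team's scoring.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    step: str
    description: str
    severity: int    # 1 (minor) .. 10 (catastrophic)
    occurrence: int  # 1 (rare)  .. 10 (almost certain)
    detection: int   # 1 (always detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        """Risk priority number: severity x occurrence x detection."""
        return self.severity * self.occurrence * self.detection

# Hypothetical medication-process failure modes
modes = [
    FailureMode("order entry", "illegible handwritten dose", 8, 6, 4),
    FailureMode("dispensing", "look-alike vial selected", 9, 3, 5),
    FailureMode("administration", "pump programmed with wrong rate", 9, 4, 6),
]

# Address the highest-RPN failure modes first
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.rpn:4d}  {m.step}: {m.description}")
```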

FMEA can evaluate alternative processes or procedures and monitor change over time. Monitoring change requires well-defined measures providing objective process effectiveness data. In 2001, The Joint Commission mandated accredited healthcare providers to conduct proactive risk management, identifying and predicting system weaknesses and adopting changes to minimize patient harm on one or two high-priority topics annually.78

HFMEA

The health failure modes and effects analysis (HFMEA) tool, developed by the VA’s National Center for Patient Safety, is used for risk assessment. HFMEA involves five steps: (1) define the topic; (2) assemble the team; (3) develop a process map, numbering each step and substep; (4) conduct hazard analysis (identify failure modes, score using a hazard matrix, decision tree analysis);79 and (5) develop actions and desired outcomes. Hazard analysis requires listing all possible failure modes for each process step, determining action necessity, and listing causes for modes needing further action. Post-hazard analysis, consider required actions and outcome measures, including what to eliminate or control and action responsibilities.79
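In step 4, each failure mode is scored on a severity-by-probability hazard matrix (each rated 1–4, giving a hazard score of 1–16), and a decision tree determines which modes proceed to action planning. The sketch below is a simplified rendering of that logic, assuming the commonly cited action threshold of 8; consult DeRosier and colleagues79 for the authoritative scales and decision tree.

```python
SEVERITY = {"minor": 1, "moderate": 2, "major": 3, "catastrophic": 4}
PROBABILITY = {"remote": 1, "uncommon": 2, "occasional": 3, "frequent": 4}

def hazard_score(severity: str, probability: str) -> int:
    """HFMEA hazard matrix score: severity rating x probability rating (1-16)."""
    return SEVERITY[severity] * PROBABILITY[probability]

def needs_action(severity: str, probability: str,
                 single_point_weakness: bool,
                 existing_control: bool,
                 detectable: bool,
                 threshold: int = 8) -> bool:
    """Simplified decision-tree logic: a failure mode proceeds to action
    planning when its hazard score meets the threshold or it is a
    single-point weakness, unless an effective control measure already
    exists or the hazard is readily detectable."""
    if hazard_score(severity, probability) < threshold and not single_point_weakness:
        return False
    return not (existing_control or detectable)

print(hazard_score("catastrophic", "occasional"))  # 12
print(needs_action("catastrophic", "occasional",
                   single_point_weakness=True,
                   existing_control=False, detectable=False))  # True
```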

Research Evidence

This analysis included fifty studies and quality improvement projects, categorized by quality method: FMEA, RCA, Six Sigma, Lean, and PDSA. Common themes emerged regarding implementing quality improvement strategies, evaluating change intervention impacts, and the use of quality improvement tools in healthcare.

What Was Needed To Implement Quality Improvement Strategies?

Strong leadership support,80–83 involvement,81, 84 consistent commitment to continuous quality improvement,85, 86 and visible leadership, both written and physical,86 were vital for significant changes. Hospital board commitment was also essential.86, 88 Resource demands of process changes required senior leadership to: (1) ensure adequate financial resources87–89 by identifying funding for training, innovative technologies,90 and equipment;91 (2) facilitate key player involvement by providing necessary time and administrative support;85, 88–90 (3) support time-consuming projects with sufficient time allocation;86, 92 and (4) prioritize safety organizationally and reinforce expectations, especially during delays or unrealized results.87 Leaders also needed to understand the impact of high-level decisions on work processes and staff time,88 integrating quality improvement into system-wide leadership development.88 Leadership should make patient safety a key aspect of meetings and strategies,85, 86 formally identify annual patient safety goals, and hold themselves accountable for patient safety outcomes.85

Despite strong leadership, organizational hesitancy towards quality improvement may arise from past failed change attempts,93 lack of organization-wide commitment,94 poor relationships, and ineffective communication.89 However, these barriers lessen with organizational embrace of change,95 culture shifts enabling change,90 and active pursuit of a safety and quality improvement culture. Adopting a nonpunitive change culture takes time,61, 90 sometimes involving legal departments to focus on systems, not individuals.96 Staff comfort with process improvement increases with realized cost savings and no-layoff policies protecting job security amidst efficiency gains.84

Improvement processes require stakeholder engagement97 and an understanding that quality improvement investments can be recouped through efficiency gains and fewer adverse events.86 Stakeholders were used to: (1) prioritize safe practices through consensus development,86, 98 focusing on clinically significant hazards with substantial patient safety impact; (2) develop solutions addressing interdisciplinary communication and teamwork, crucial for a safety culture; and (3) build on other hospitals’ successes.86 Successful rapid-cycle collaboratives used stakeholders to choose subjects; define objectives, roles, and expectations; motivate teams; and utilize data analysis results.86 Considering diverse stakeholder perspectives is important.97 Expected variations in opinion99 and buy-in challenges necessitate early stakeholder involvement, feedback solicitation,100 and support for critical process changes.101

Communication and information sharing with stakeholders and staff are crucial for specifying quality initiative purposes and strategies;101 developing open communication across disciplines and leadership/staff levels, allowing for concerns and observations during change processes;88 ensuring patient and family inclusion; fostering a sense of team membership and patient safety responsibility; sharing root cause analysis lessons; and gaining attention and buy-in through patient safety stories and celebrating successes.85 However, some staff struggled to accept system changes based on data, despite efforts to keep everyone informed.89

Successful strategies depended on motivated80 and empowered teams. Multidisciplinary teams reviewing data and leading change offered numerous advantages.91 These teams needed the right staff,91, 92 peer inclusion,102 stakeholder engagement (managers to staff), and senior leadership support.85, 86 Specific stakeholders (e.g., nurses, physicians) had to be involved81 and supported as champions103 and problem-solvers within departments59 for interventions to succeed. Considering frontline staff attitudes and willingness for specific improvements59, 88, 104 was crucial, as quality initiatives require substantial changes in clinicians’ work.86

Other success factors included protocols adaptable to patient needs93 and to unit experience, training, and culture.88 Defining and testing different approaches is vital; diverse approaches can converge.81 Mechanisms facilitating staff buy-in included highlighting error types and causes, making errors visible,102 involving staff in work assessment and waste identification,59 providing insight into project feasibility and measurable impact,105 and presenting evidence-based changes.100 Physician leadership106 or active involvement86 was particularly needed, especially when physician behaviors caused inefficiencies.84 Some projects recruited physician champions to promote patient safety and integrate it into leadership and medical management strategies.85

Team leaders and team composition were also important. Team leaders emphasizing relationship building were essential for team success.83, 93 Dedicated team leaders with significant project time were needed.84 While leader types varied, one project had physician-administrator co-chairs.83 Visible champions influenced initiative visibility.100 Multidisciplinary teams needed to understand quality improvement steps and error opportunities to prioritize critical improvements within complex processes, reducing analysis subjectivity. Multidisciplinary team structures allowed members to identify steps from their perspectives, anticipate barriers, generate diverse ideas, and promote team building through discussions.100, 107 FMEA/HFMEA minimized group biases by leveraging team diversity and structured goal outlines.107, 108

Teams needed preparation and enablement through ongoing education, weekly debriefings, problem and principle reviews,84 and continuous monitoring and feedback.92, 95 Staff and leadership education80, 95, 101, 104 on the problem, quality tools, planned changes, and project updates was a key strategy.92 Training was ongoing,91 focused on skill deficits,82 and revised based on project data analysis.109 Assuming senior staff needed no training was incorrect.105 Consultants or facilitators with advanced quality improvement knowledge were valuable resources for inexperienced teams.106 Hospital-community interface models coupled with education programs were also considered.97

Teamwork processes improved interdepartmental relationships.89 Effective team building,110 rapid-cycle (PDSA) model implementation, frequent meetings, and monthly outcome data monitoring were crucial.86 Effective teamwork, communication, information transfer, interdepartmental coordination, and organizational culture changes were essential for team effectiveness.86 Competing workloads, like double shifts, dampened team member engagement.97 Improved role understanding was a valuable project outcome, providing a basis for further practice development.97 Team motivation stemmed from continuous progress sharing and the celebration of success and achievement.87

Teamwork offers advantages, but only a few were highlighted in the reviewed reports. Teams broadened knowledge, improved interdisciplinary communication, and facilitated learning about the problem.111 Teams were proactive,91 integrated tools that improved both technical processes and relationships,83 and worked to understand situations; define problems, pathways, tasks, and connections; and develop multidisciplinary action plans.59 However, teamwork was not always easy. Group work was seen as difficult and time-consuming,111 with consensus delayed by conflicting preferences.97 Team members needed to learn group dynamics, peer confrontation, conflict resolution, and how to address detrimental behaviors.111

What Was Learned From Evaluating the Impact of Change Interventions?

As Berwick suggested,112 quality improvement initiative leaders found that simplification,96, 104 standardization,104 stratification, improved auditory communication patterns, support for communication against authority gradients,96 proper use of defaults, cautious automation,96 affordance and natural mapping (making the right action the easy one), respect for the limits of vigilance and attention,96 and encouragement of near-hit/error/hazard reporting96 were crucial for success. Policy and procedure revision and standardization made new processes easier than old ones, reducing human error related to limited vigilance and attention.78, 80–82, 90–92, 94, 96, 102, 103, 113, 114

Simplification and standardization effectively acted as forcing functions, reducing reliance on individual decision-making. Several initiatives standardized medication ordering and administration protocols,78, 87, 101, 103, 106–108, 109, 114–116 improving patient outcomes and nurse efficiency/effectiveness.103, 106, 108, 109, 114–116 One initiative used standardized blood product ordering forms.94 Four improved pain assessment and management using standardized metrics and tools.80, 93, 100, 117 Simplification and standardization proved effective in these initiatives.

Information technology (IT) offers benefits for checks, defaults, and automation to improve quality and reduce errors, embedding forcing functions to eliminate error possibilities.96, 106 Human error effects can be mitigated through redundancy, like double-checking for certain errors, engaging two skilled practitioners’ knowledge61, 101 and effectively reducing dosing errors.78 IT successfully: (1) reduced human error through automation;61 (2) standardized medication concentrations78 and dosing via computer calculations,115, 116 standardized protocols,101 and order clarity;116 (3) assisted caregivers with alerts and reminders; (4) improved medication safety (barcoding, CPOE); and (5) tracked performance through database integration and indicator monitoring. Workflow and procedures often needed revision to align with technology.78 Technology implementation implied organizational commitment to investment for improvement,85 but data collection resource limitations hindered analysis and evaluation in two initiatives.93, 97

Data and information were needed to understand error and near-error root causes,99 adverse event magnitude,106 track performance,84, 118 and assess initiative impacts.61 Near-miss, error, and hazard reporting needs encouragement.96 Error reporting is generally low and influenced by organizational culture106 and bias, skewing results.102 Organizations not prioritizing reporting or safety culture may underreport patient harm or near misses (see Chapter 35. “Evidence Reporting and Disclosure”). Data analysis is critical, yet staff may benefit from education on effective analysis and display.106 Transparent feedback on findings82 effectively brought patient safety to the forefront.107 Data absence, whether due to non-reporting or non-collection, hindered statistical impact analysis115 and cost-benefit assessments.108 Multi-organizational collaboration should utilize a common database.98

Measures and benchmarks enhance data understanding. Repeated measurements tracked progress,118 but only with clear success metrics.83 Measures can engage more clinicians, especially physicians. Objective, broader, and better measures marked progress and provided a basis for action and celebration.106 When care process measures were used, demonstrating the link between process changes and outcomes was essential.61

Multiple measures and improved documentation facilitated initiative impact assessment on patient outcomes.93 Some investigators suggested hospital administrators should encourage initiative evaluations focusing on comprehensive models assessing patient outcomes, satisfaction, and cost-effectiveness.114 Outcome assessment can be enhanced by setting realistic goals, not unrealistic 100% change targets,119 and comparing results to state, regional, and national benchmarks.61, 88

Initiative cost was a crucial factor, even when adverse effects necessitated rapid change.106 Implementing readily feasible changes106 with minimal practice disruption is important.99 Replicating initiatives in other units or sites should be considered.99 Standardization enhances replication but may incur costs.106 Faster resolution of small problems accelerates system-wide replication.84, 106 Low-cost, effective recommendations were implemented quickly.93, 107 Some investigators claimed cost and length-of-stay reductions,103 but lacked data. Change costs may be recouped through ROI or reduced patient risk and liability costs.61

Staff education is vital. Pain management initiatives showed staff education on guidelines and chronic pain protocols improved understanding, assessment, documentation, patient/family satisfaction, and pain management.80, 93 Another initiative educating nurses on IV site care and central line assessment improved satisfaction, reduced complications and costs.109

Despite these benefits, implementation was not without challenges.

New processes are difficult to introduce,84, 100 and quality improvement initiatives are time-consuming, tedious, and difficult for action-oriented individuals; they require extensive investment of time, money, and energy94 and involve trial and error.91 Perseverance and focus are therefore crucial, as quality improvement rewards the effort,84 and celebrating victories along the way is important.84

Sustaining changes post-implementation is crucial.105 Quality improvement initiatives should be integral to ongoing organizational improvement. Success factors included easy-to-use bedside practice changes;82 simple communication strategies;88 maximized project visibility to sustain momentum;100 establishing a safety culture; and strengthening organizational and technological infrastructure.121 However, there were differing views on spreading specific change steps versus adapting best practices.106, 121 Generating enthusiasm for change through internal and external collaboration103 and healthy competition is also important. Collaboratives can encourage evidence-based practice, rapid-cycle improvement, and consensus on better practices.86, 98

What Is Known About Using Quality Improvement Tools in Health Care?

Quality tools effectively defined and assessed healthcare problems, prioritizing quality and safety issues99 and focusing on systems,98 not individuals. Tools addressed errors and rising costs88 and changed provider practices.117 Several initiatives used multiple tools, starting with RCA and then using Six Sigma, Lean, or PDSA to implement process changes. Almost every initiative was pretested or pilot tested.92, 99 Investigators and leaders reported the following advantages of specific tools:

Root Cause Analysis (RCA): Useful for assessing reported errors/incidents, differentiating active and latent errors, identifying policy/procedure changes, and suggesting system improvements, including risk communication.82, 96, 102, 105

Six Sigma/Toyota Production System: Successfully decreased defects/variations59, 61, 81 and operating costs81 and improved outcomes in various healthcare settings and processes.61, 88 Six Sigma clearly differentiated between causes of variation and measures of process outcomes.61 Workarounds and rework became difficult because the root causes of process problems were targeted before implementation.59, 88 Teams improved implementation effectiveness and results as their experience increased.84 Effective use requires substantial leadership time and resources but is associated with improved patient safety, lower costs, and increased job satisfaction.84 Six Sigma was also valued for problem-solving, continuous improvement, clear problem communication, implementation guidance, and objective results.59

Plan-Do-Study-Act (PDSA): Predominantly used for gradual initiative implementation and iterative improvement. Rapid-cycle PDSA started with piloting new processes, then examining results, problem-solving, adjusting, and initiating subsequent cycles. Small, rapid PDSA cycles were more successful in meeting intervention goals, allowing early process changes80 without distraction by details and unknowns.87, 119, 122 Team PDSA success improved with instruction, training, baseline measurement feedback,118 regular meetings,120 and collaboration (including patients and families)80 toward common goals.87 Conversely, some teams struggled with rapid-cycle change, data collection, and run charts,86 and applying simple rules in PDSA cycles may be more successful in complex systems.93

Failure Modes and Effects Analysis (FMEA): Used prospectively to identify potential failure areas94 and assess process characteristics at the desired speed of change,115 and retrospectively to characterize process safety by identifying failure areas and learning from staff perspectives.94 Process flowcharts focused teams and ensured shared understanding.94 FMEA data were used to prioritize improvement strategies, benchmark improvement efforts,116 educate staff and support the diffusion of practice changes,115 and enhance teams’ ability to facilitate change across hospital services and departments.124 FMEA facilitated systematic error management, which is crucial for complex processes and settings and depends on multidisciplinary approaches, integrated incident/error reporting, decision support, standardized terminology, and caregiver education.116

Health Failure Modes and Effects Analysis (HFMEA): Provided detailed analysis of both smaller and larger processes, resulting in specific recommendations. HFMEA was a valid proactive analysis tool, thoroughly analyzing vulnerabilities (failure modes) before adverse events occur.108 It identified the multifactorial nature of errors108 and potential error risks,111 but was time-consuming.107 HFMEA minimized group biases through multidisciplinary teams78, 108, 115 and facilitated teamwork with a step-by-step process,107 but required a paradigm shift for many.111

Evidence-Based Practice Implications

Several themes emerged from successful quality improvement strategies and projects that nurses can use to guide their efforts. The strength of these practice implications is linked to the methodological rigor and generalizability of the strategies and projects:

  1. Leadership Commitment and Support: Strong leadership commitment and support are paramount. Leaders must empower staff, be actively involved, and consistently drive quality improvement. Without senior leadership commitment, even well-intentioned projects risk failure. Champions within leadership positions and on teams are crucial.
  2. Culture of Safety and Improvement: Cultivating a culture that rewards improvement and prioritizes quality is vital. This culture should support a quality infrastructure with the resources and human capital needed for successful quality improvement.
  3. Stakeholder Involvement: Engaging the right stakeholders in quality improvement teams is essential.
  4. Multidisciplinary Teams and Strategies: Due to healthcare complexity, multidisciplinary teams and strategies are indispensable. Teams from different centers/units need close collaboration, utilizing communication strategies like meetings, calls, and dedicated email lists, and leveraging trained facilitators and experts when possible.
  5. Problem and Root Cause Understanding: Quality improvement teams and stakeholders must thoroughly understand the problem and its root causes. Consensus on problem definition and a universally agreed-upon metric are crucial for success, as is data validity.
  6. Proven, Methodologically Sound Approach: Employing a proven, methodologically sound approach is essential, without being overwhelmed by quality improvement jargon. Clear models, terms, and processes are critical, especially given the interrelation of quality tools; using a single tool is insufficient.
  7. Standardized Care Processes: Standardizing care processes and ensuring adherence should enhance efficiency, effectiveness, and improve organizational and patient outcomes.
  8. Evidence-Based Practice: Evidence-based practice facilitates ongoing quality improvement efforts.
  9. Flexible Implementation Plans: Implementation plans must be flexible to adapt to emerging needs and changes.
  10. Multiple Purposes of Change Efforts: Efforts to change practice and improve care quality can serve multiple purposes, including redesigning processes for efficiency and effectiveness, enhancing customer satisfaction, improving patient outcomes, and improving organizational climate.
  11. Appropriate Technology Use: Appropriate technology use can enhance team functioning, collaboration, reduce human error, and improve patient safety.
  12. Sufficient Resources: Initiatives require sufficient resources, including protected staff time.
  13. Continuous Data Collection and Analysis: Continuously collect and analyze data and communicate results on critical indicators across the organization. The ultimate goal of quality assessment and monitoring is to use findings to evaluate performance and identify areas needing improvement.
  14. Time for Change: Change takes time, so sustained focus and perseverance are essential.

Research Implications

Given healthcare complexity, quality improvement assessment is dynamic and challenging. The growing body of knowledge in this area is slow, partly due to the ongoing debate about whether quality improvement initiatives constitute research and meet methodological rigor for publication. Quality improvement methods have existed since Donabedian’s 1966 publication,27 but Six Sigma and published findings are recent in healthcare, often applied to isolated system components, hindering organizational learning and generalizability. Despite the long-standing importance of quality improvement, driven by external forces like CMS and The Joint Commission, numerous organizational efforts may be unpublished and not captured in reviews, potentially not warranting peer-reviewed publication. Researchers, leaders, and clinicians need to define generalizable and publishable quality improvement work to advance knowledge of methods and interventions.

While clinical, functional, patient, and staff satisfaction outcomes were mentioned in reviewed projects, cost and utilization outcomes measurement in quality improvement is important, especially with variation. Key unanswered questions include:

  • How can quality improvement efforts successfully address the needs of patients, insurers, regulators, and staff?
  • What is the best method to prioritize improvements and balance competing stakeholder needs?
  • What variation threshold is needed for consistently desired results?
  • How can bottom-up practice change succeed without senior leadership support or a change-supportive culture?

Researchers should use conceptual models to guide quality improvement initiatives or research, facilitated by quality tools. Generalizing findings requires larger sample sizes through collaboration. Understanding which tools work best, alone or in combination, is crucial. Mixed methods, including non-research methods, may better elucidate quality improvement science complexity. Tailoring implementation interventions’ contribution to process and patient outcomes is poorly understood, as are the most effective steps across intervention strategies. We lack knowledge of which strategies or combinations work for whom, in what context, why they work in some settings but not others, and the underlying mechanisms of strategy effectiveness.

Conclusions

Regardless of the method’s acronym (TQM, CQI) or tool (FMEA, Six Sigma), the essential aspect of quality improvement is a dynamic process often employing multiple tools. Quality improvement success hinges on five key elements: fostering a culture of change and safety, developing a clear problem understanding, involving key stakeholders, testing change strategies, and continuous performance monitoring and reporting to sustain change.

Search Strategy

To identify quality improvement efforts for this systematic review, PubMed and CINAHL were searched from 1997 to the present using the keywords and terms “Failure Modes and Effects Analysis/FMEA,” “Root Cause Analysis/RCA,” “Six Sigma,” “Toyota Production System/Lean,” and “Plan Do Study Act/PDSA.” In all, 438 articles were retrieved. Inclusion criteria were: nursing-related processes; projects/research using FMEA, RCA, Six Sigma, Lean, or PDSA; qualitative and quantitative analyses; and patient outcome reporting. Exclusion criteria were: no nursing involvement, insufficient process/outcome information, indirect nursing involvement in patient/study outcomes, or developing-country settings. Findings were grouped into common quality improvement themes.

References

1.National Healthcare Quality Report. Rockville, MD: Agency for Healthcare Research and Quality; 2006. [Accessed March 16, 2008]. Available at: http://www.ahrq.gov/qual/nhqr06/nhqr06.htm.

2.Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001. pp. 164–80. [PubMed: 25057539]

3.Lohr KN, Schroeder SA. A strategy for quality assurance in Medicare. N Engl J Med. 1990;322:1161–71. [PubMed: 2406600]

4.Institute of Medicine. To err is human: building a safer health system. Washington, DC: National Academy Press; 1999.

5.McNally MK, Page MA, Sunderland VB. Failure mode and effects analysis in improving a drug distribution system. Am J Health Syst Pharm. 1997;54:171–7. [PubMed: 9117805]

6.Varkey P, Peller K, Resar RK. Basics of quality improvement in health care. Mayo Clin Proc. 2007;82(6):735–9. [PubMed: 17550754]

7.Marshall M, Shekelle P, Davies H, et al. Public reporting on quality in the United States and the United Kingdom. Health Aff. 2003;22(3):134–48. [PubMed: 12757278]

8.Loeb J. The current state of performance measurement in healthcare. Int J Qual Health Care. 2004;16(Suppl 1):i5–9. [PubMed: 15059982]

9.National Healthcare Disparities Report. Rockville, MD: Agency for Healthcare Research and Quality; 2006. [Accessed March 16, 2008]. Available at: http://www.ahrq.gov/qual/nhdr06/nhdr06.htm.

10.Schoen C, Davis K, How SKH, et al. U.S. health system performance: a national scorecard. Health Aff. 2006:w457–75. [PubMed: 16987933]

11.Wakefield DS, Hendryx MS, Uden-Holman T, et al. Comparing providers’ performance: problems in making the ‘report card’ analogy fit. J Healthc Qual. 1996;18(6):4–10. [PubMed: 10162089]

12.Marshall M, Shekelle PG, Leatherman S, et al. The public release of performance data: what do we expect to gain, a review of the evidence. JAMA. 2000;283:1866–74. [PubMed: 10770149]

13.Schneider EC, Lieberman T. Publicly disclosed information about the quality of health care: response of the U.S. public. Qual Health Care. 2001;10:96–103. [PMC free article: PMC1757976] [PubMed: 11389318]

14.Hibbard JH, Harris-Kojetin L, Mullin P, et al. Increasing the impact of health plan report cards by addressing consumers’ concerns. Health Aff. 2000 Sept/Oct;19:138–43. [PubMed: 10992661]

15.Bentley JM, Nash DB. How Pennsylvania hospitals have responded to publicly released reports on coronary artery bypass graft surgery. Jt Comm J Qual Improv. 1998;24(1):40–9. [PubMed: 9494873]

16.Ferlie E, Fitzgerald L, Wood M, et al. The nonspread of innovations: the mediating role of professionals. Acad Manage J. 2005;48(1):117–34.

17.Glouberman S, Mintzberg H. Managing the care of health and the cure of disease—part I: differentiation. Health Care Manage Rev. 2001;26(1):56–9. [PubMed: 11233354]

18.Degeling P, Kennedy J, Hill M. Mediating the cultural boundaries between medicine, nursing and management—the central challenge in hospital reform. Health Serv Manage Res. 2001;14(1):36–48. [PubMed: 11246783]

19.Gaba DM. Structural and organizational issues in patient safety: a comparison of health care to other high-hazard industries. Calif Manage Rev. 2000;43(1):83–102.

20.Lee JL, Chang ML, Pearson ML, et al. Does what nurses do affect clinical outcomes for hospitalized patients? A review of the literature. Health Serv Res. 1999;29(11):39–45. [PMC free article: PMC1089070] [PubMed: 10591270]

21.Taylor C. Problem solving in clinical nursing practice. J Adv Nurs. 1997;26:329–36. [PubMed: 9292367]

22.Benner P. From novice to expert: excellence and power in clinical nursing practice. Menlo Park, CA: Addison-Wesley Publishing Company; 1984.

23.March JG, Sproull LS, Tamuz M. Learning from samples of one or fewer. Organization Science. 1991;2(1):1–13.

24.McGlynn EA, Asch SM. Developing a clinical performance measure. Am J Prev Med. 1998;14(3s):14–21. [PubMed: 9566932]

25.McGlynn EA. Choosing and evaluating clinical performance measures. Jt Comm J Qual Improv. 1998;24(9):470–9. [PubMed: 9770637]

26.Gift RG, Mosel D. Benchmarking in health care. Chicago, IL: American Hospital Publishing, Inc.; 1994. p. 5.

27.Donabedian A. Evaluating the quality of medical care. Milbank Q. 1966;44:166–206. [PubMed: 5338568]

28.Deming WE. Out of the crisis. Cambridge, MA: Massachusetts Institute of Technology Center for Advanced Engineering Study; 1986.

29.Berwick DM, Godfrey AB, Roessner J. Curing health care. San Francisco, CA: Jossey-Bass; 2002.

30.Wallin L, Bostrom AM, Wikblad K, et al. Sustainability in changing clinical practice promotes evidence-based nursing care. J Adv Nurs. 2003;41(5):509–18. [PubMed: 12603576]

31.Berwick DM. Developing and testing changes in delivery of care. Ann Intern Med. 1998;128:651–6. [PubMed: 9537939]

32.Chassin MR. Quality of care—part 3: improving the quality of care. N Engl J Med. 1996;335:1060–3. [PubMed: 8793935]

33.Horn SD, Hickey JV, Carrol TL, et al. Can evidence-based medicine and outcomes research contribute to error reduction? In: Rosenthal MM, Sutcliffe KN, editors. Medical error: what do we know? What do we do? San Francisco, CA: Jossey-Bass; 2002. pp. 157–73.

34.Joss R. What makes for successful TQM in the NHS? Int J Health Care Qual Assur. 1994;7(7):4–9. [PubMed: 10140850]

35.Nwabueze U, Kanji GK. The implementation of total quality management in the NHS: how to avoid failure. Total Quality Management. 1997;8(5):265–80.

36.Jackson S. Successfully implementing total quality management tools within healthcare: what are the key actions? Int J Health Care Qual Assur. 2001;14(4):157–63.

37.Rago WV. Struggles in transformation: a study in TQM, leadership and organizational culture in a government agency. Public Adm Rev. 1996;56(3)

38.Shojania KG, McDonald KM, Wachter RM, et al. Closing the quality gap: a critical analysis of quality improvement strategies. Volume 1—Series overview and methodology. Technical Review 9 (Contract No. 290-02-0017 to the Stanford University–UCSF Evidence-based Practice Center). Rockville, MD: Agency for Healthcare Research and Quality; Aug 2004. AHRQ Publication No. 04-0051-1. [PubMed: 20734525]

39.Furman C, Caplan R. Applying the Toyota production system: using a patient safety alert system to reduce error. Jt Comm J Qual Patient Saf. 2007;33(7):376–86. [PubMed: 17711139]

40.Womack JP, Jones DT. Lean thinking. New York: Simon and Schuster; 1996.

41.Lynn J, Baily MA, Bottrell M, et al. The ethics of using quality improvement methods in health care. Ann Intern Med. 2007;146:666–73. [PubMed: 17438310]

42.Reinhardt AC, Ray LN. Differentiating quality improvement from research. Appl Nurs Res. 2003;16(1):2–8. [PubMed: 12624857]

43.Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q. 1998;76(4):625–48. [PMC free article: PMC2751093] [PubMed: 9879305]

44.Shortell SM, Bennet CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593–624. [PMC free article: PMC2751103] [PubMed: 9879304]

45.Lynn J. When does quality improvement count as research? Human subject protection and theories of knowledge. Qual Saf Health Care. 2004;13:67–70. [PMC free article: PMC1758070] [PubMed: 14757803]

46.Bellin E, Dubler NN. The quality improvement-research divide and the need for external oversight. Am J Public Health. 2001;91:1512–7. [PMC free article: PMC1446813] [PubMed: 11527790]

47.Choo V. Thin line between research and audit. Lancet. 1998;352:1481–6. [PubMed: 9717915]

48.Harrington L. Quality improvement, research, and the institutional review board. J Healthc Qual. 2007;29(3):4–9. [PubMed: 17708327]

49.Berwick DM. Eleven worthy aims for clinical leadership of health care reform. JAMA. 1994;272(10):797–802. [PubMed: 8078145]

50.Berwick DM. Improvement, trust, and the healthcare workforce. Qual Saf Health Care. 2003;12:2–6. [PMC free article: PMC1758027] [PubMed: 14645761]

51.Langley JG, Nolan KM, Nolan TW, et al. The improvement guide: a practical approach to enhancing organizational performance. New York: Jossey-Bass; 1996.

52.Pande PS, Newman RP, Cavanagh RR. The Six Sigma way. New York: McGraw-Hill; 2000.

53.Barry R, Murcko AC, Brubaker CE. The Six Sigma book for healthcare: improving outcomes by reducing errors. Chicago, IL: Health Administration Press; 2003.

54.Lanham B, Maxson-Cooper P. Is Six Sigma the answer for nursing to reduce medical errors and enhance patient safety? Nurs Econ. 2003;21(1):39–41. [PubMed: 12632719]

55.Shewhart WA. Statistical method from the viewpoint of quality control. Washington, DC: U.S. Department of Agriculture; 1986. p. 45.

56.Pande PS, Newman RP, Cavanagh RR. The Six Sigma way: team fieldbook. New York: McGraw-Hill; 2002.

57.Sahney VK. Generating management research on improving quality. Health Care Manage Rev. 2003;28(4):335–47. [PubMed: 14682675]

58.Endsley S, Magill MK, Godfrey MM. Creating a lean practice. Fam Pract Manag. 2006;13:34–8. [PubMed: 16671348]

59.Printezis A, Gopalakrishnan M. Current pulse: can a production system reduce medical errors in health care? Q Manage Health Care. 2007;16(3):226–38. [PubMed: 17627218]

60.Spear SJ. Fixing health care from the inside, today. Harv Bus Rev. 2005;83(9):78–91. 158. [PubMed: 16171213]

61.Johnstone PA, Hendrickson JA, Dernbach AJ, et al. Ancillary services in the health care industry: is Six Sigma reasonable? Q Manage Health Care. 2003;12(1):53–63. [PubMed: 12593375]

62.Reason J. Human error. New York: Cambridge University Press; 1990.

63.Kemppainen JK. The critical incident technique and nursing care quality research. J Adv Nurs. 2000;32(5):1264–71. [PubMed: 11115012]

64.Joint Commission. 2003 hospital accreditation standards. Oakbrook Terrace, IL: Joint Commission Resources; 2003.

65.Bogner M. Human error in medicine. Hillsdale, NJ: Lawrence Erlbaum Associates; 1994.

66.Rooney JJ, Vanden Heuvel LN. Root cause analysis for beginners. Qual Progress. 2004 Jul. [Accessed January 5, 2008]. Available at: www.asq.org.

67.Giacomini MK, Cook DJ. Users’ guides to the medical literature: XXIII. Qualitative research in health care. Are the results of the study valid? Evidence-Based Medicine Working Group. JAMA. 2000;284:357–62. [PubMed: 10891968]

68.Joint Commission. Using aggregate root cause analysis to improve patient safety. Jt Comm J Qual Patient Saf. 2003;29(8):434–9. [PubMed: 12953608]

69.Wald H, Shojania K. Root cause analysis. In: Shojania K, Duncan B, McDonald KM, et al., editors. Making health care safer: a critical analysis of patient safety practices. Evidence Report/Technology Assessment No. 43. Rockville, MD: AHRQ; 2001. AHRQ Publication Number: 01–058. [PMC free article: PMC4781305] [PubMed: 11510252]

70.Bagian JP, Gosbee J, Lee CZ, et al. The Veterans Affairs root cause analysis system in action. Jt Comm J Qual Improv. 2002;28(10):531–45. [PubMed: 12369156]

71.Leape LL. Error in medicine. JAMA. 1994;272:1851–7. [PubMed: 7503827]

72.Institute of Medicine. Keeping Patients Safe: Transforming the Work Environment of Nurses. Washington, DC: National Academy Press; 2004.

73.Benner P, Sheets V, Uris P, et al. Individual, practice, and system causes of errors in nursing: a taxonomy. JONA. 2002;32(10):509–23. [PubMed: 12394596]

74.Spath PL, Hickey P. Home study program: using failure mode and effects analysis to improve patient safety. AORN J. 2003;78:16–21. [PubMed: 12885066]

75.Croteau RJ, Schyve PM. Proactively error-proofing health care processes. In: Spath PL, editor. Error reduction in health care: a systems approach to improving patient safety. Chicago, IL: AHA Press; 2000. pp. 179–98.

76.Williams E, Talley R. The use of failure mode effect and criticality analysis in a medication error subcommittee. Hosp Pharm. 1994;29:331–6, 339. [PubMed: 10133462]

77.Reiling GJ, Knutzen BL, Stoecklein M. FMEA–the cure for medical errors. Qual Progress. 2003;36(8):67–71.

78.Adachi W, Lodolce AE. Use of failure mode and effects analysis in improving safety of IV drug administration. Am J Health Syst Pharm. 2005;62:917–20. [PubMed: 15851497]

79.DeRosier J, Stalhandske E, Bagian JP, et al. Using health care failure mode and effect analysis: the VA National Center for Patient Safety’s Prospective Risk Analysis System. J Qual Improv. 2002;28(5):248–67. [PubMed: 12053459]

80.Buhr GT, White HK. Management in the nursing home: a pilot study. J Am Med Dir Assoc. 2006;7:246–53. [PubMed: 16698513]

81.Guinane CS, Davis NH. The science of Six Sigma in hospitals. Am Heart Hosp J. 2004 Winter:42–8. [PubMed: 15604839]

82.Mills PD, Neily J, Luan D, et al. Using aggregate root cause analysis to reduce falls and related injuries. Jt Comm J Qual Patient Saf. 2005;31(1):21–31. [PubMed: 15691207]

83.Pronovost PJ, Morlock L, Davis RO, et al. Using online and offline change models to improve ICU access and revenues. J Qual Improv. 2000;26(1):5–17. [PubMed: 10677818]

84.Thompson J, Wieck KL, Warner A. What perioperative and emerging workforce nurses want in a manager. AORN J. 2003;78(2):246–9, 258 passim. [PubMed: 12940425]

85.Willeumier D. Advocate health care: a systemwide approach to quality and safety. Jt Comm J Qual Patient Saf. 2004;30(10):559–66. [PubMed: 15518360]

86.Leape LL, Rogers G, Hanna D, et al. Developing and implementing new safe practices: voluntary adoption through statewide collaboratives. Qual Saf Health Care. 2006;15:289–95. [PMC free article: PMC2564013] [PubMed: 16885255]

87.Smith DS, Haig K. Reduction of adverse drug events and medication errors in a community hospital setting. Nurs Clin North Am. 2005;40(1):25–32. [PubMed: 15733944]

88.Jimmerson C, Weber D, Sobek DK. Reducing waste and errors: piloting lean principles at Intermountain Healthcare. Jt Comm J Qual Patient Saf. 2005;31(5):249–57. [PubMed: 15960015]

89.Docimo AB, Pronovost PJ, Davis RO, et al. Using the online and offline change model to improve efficiency for fast-track patients in an emergency department. J Qual Improv. 2000;26(9):503–14. [PubMed: 10983291]

90.Gowdy M, Godfrey S. Using tools to assess and prevent inpatient falls. Jt Comm J Qual Patient Saf. 2003;29(7):363–8. [PubMed: 12856558]

91.Germaine J. Six Sigma plan delivers stellar results. Mater Manag Health Care. 2007:20–6. [PubMed: 17506407]

92.Semple D, Dalessio L. Improving telemetry alarm response to noncritical alarms using a failure mode and effects analysis. J Healthc Qual. 2004;26(5):Web Exclusive: W5-13–W5-19.

93.Erdek MA, Pronovost PJ. Improving assessment and treatment of pain in the critically ill. Int J Qual Health Care. 2004;16(1):59–64. [PubMed: 15020561]

94.Burgmeier J. Failure mode and effect analysis: an application in reducing risk in blood transfusion. J Qual Improv. 2002;28(6):331–9. [PubMed: 12066625]

95.Mutter M. One hospital’s journey toward reducing medication errors. Jt Comm J Qual Patient Saf. 2003;29(6):279–88. [PubMed: 14564746]

96.Rex JH, Turnbull JE, Allen SJ, et al. Systematic root cause analysis of adverse drug events in a tertiary referral hospital. J Qual Improv. 2000;26(10):563–75. [PubMed: 11042820]

97.Bolch D, Johnston JB, Giles LC, et al. Hospital to home: an integrated approach to discharge planning in a rural South Australian town. Aust J Rural Health. 2005;13:91–6. [PubMed: 15804332]

98.Horbar JD, Plsek PE, Leahy K. NIC/Q 2000: establishing habits for improvement in neonatal intensive care units. Pediatrics. 2003;111:e397–410. [PubMed: 12671159]

99.Singh R, Singh A, Servoss JT, et al. Prioritizing threats to patient safety in rural primary care. Inform Prim Care. 2007;15(4):221–9.

100.Dunbar AE, Sharek PJ, Mickas NA, et al. Implementation and case-study results of potentially better practices to improve pain management of neonates. Pediatrics. 2006;118(Supplement 2):S87–94. [PubMed: 17079628]

101.Weir VL. Best-practice protocols: preventing adverse drug events. Nurs Manage. 2005;36(9):24–30. [PubMed: 16155492]

102.Plews-Ogan ML, Nadkarni MM, Forren S, et al. Patient safety in the ambulatory setting. A clinician-based approach. J Gen Intern Med. 2004;19(7):719–25. [PMC free article: PMC1492477] [PubMed: 15209584]

103.Baird RW. Quality improvement efforts in the intensive care unit: development of a new heparin protocol. BUMC Proceedings. 2001;14:294–6. [PMC free article: PMC1305833] [PubMed: 16369633]

104.Luther KM, Maguire L, Mazabob J, et al. Engaging nurses in patient safety. Crit Care Nurs Clin North Am. 2002;14(4):341–6. [PubMed: 12400624]

105.Middleton S, Chapman B, Griffiths R, et al. Reviewing recommendations of root cause analyses. Aust Health Rev. 2007;31(2):288–95. [PubMed: 17470051]

106.Farbstein K, Clough J. Improving medication safety across a multihospital system. J Qual Improv. 2001;27(3):123–37. [PubMed: 11242719]

107.Esmail R, Cummings C, Dersch D, et al. Using healthcare failure mode and effect analysis tool to review the process of ordering and administrating potassium chloride and potassium phosphate. Healthc Q. 2005;8:73–80. [PubMed: 16334076]

108.van Tilburg CM, Leistikow IP, Rademaker CMA, et al. Health care failure mode and effect analysis: a useful proactive risk analysis in a pediatric oncology ward. Qual Saf Health Care. 2006;15:58–64. [PMC free article: PMC2564000] [PubMed: 16456212]

109.Eisenberg P, Painer JD. Intravascular therapy process improvement in a multihospital system: don’t get stuck with substandard care. Clin Nurse Spec. 2002:182–6. [PubMed: 12172487]

110.Singh R, Servoss T, Kalsman M, et al. Estimating impacts on safety caused by the introduction of electronic medical records in primary care. Inform Prim Care. 2004;12:235–41. [PubMed: 15808025]

111.Papastrat K, Wallace S. Teaching baccalaureate nursing students to prevent medication errors using a problem-based learning approach. J Nurs Educ. 2003;42(10):459–64. [PubMed: 14577733]

112.Berwick DM. Continuous improvement as an ideal in health care. N Engl J Med. 1989;320(1):53–6. [PubMed: 2909878]

113.Pexton C, Young D. Reducing surgical site infections through Six Sigma and change management. Patient Safety Qual Healthc [e-Newsletter]. 2004. [Accessed November 14, 2007]. Available at: www.psqh.com/julsep04/pextonyoung.html.

114.Salvador A, Davies B, Fung KFK, et al. Program evaluation of hospital-based antenatal home care for high-risk women. Hosp Q. 2003;6(3):67–73. [PubMed: 12846147]

115.Apkon M, Leonard J, Probst L, et al. Design of a safer approach to intravenous drug infusions: failure mode and effects analysis. Qual Saf Health Care. 2004;13:265–71. [PMC free article: PMC1743853] [PubMed: 15289629]

116.Kim GR, Chen AR, Arceci RJ, et al. Computerized order entry and failure modes and effects analysis. Arch Pediatr Adolesc Med. 2006;160:495–8. [PubMed: 16651491]

117.Horner JK, Hanson LC, Wood D, et al. Using quality improvement to address pain management practices in nursing homes. J Pain Symptom Manage. 2005;30(3):271–7. [PubMed: 16183011]

118.van Tiel FH, Elenbaas TW, Voskuilen BM, et al. Plan-do-study-act cycles as an instrument for improvement of compliance with infection control measures in care of patients after cardiothoracic surgery. J Hosp Infect. 2006;62:64–70. [PubMed: 16309783]

119.Dodds S, Chamberlain C, Williamson GR, et al. Modernising chronic obstructive pulmonary disease admissions to improve patient care: local outcomes from implementing the Ideal Design of Emergency Access project. Accid Emerg Nurs. 2006 Jul;14(3):141–7. [PubMed: 16762552]

120.Warburton RN, Parke B, Church W, et al. Identification of seniors at risk: process evaluation of a screening and referral program for patients aged > 75 in a community hospital emergency department. Int J Health Care Qual Assur. 2004;17(6):339–48. [PubMed: 15552389]

121.Nowinski CV, Mullner RM. Patient safety: solutions in managed care organizations? Q Manage Health Care. 2006;15(3):130–6. [PubMed: 16849984]

122.Wojciechowski E, Cichowski K. A case review: designing a new patient education system. The Internet J Adv Nurs Practice. 2007;8(2).

123.Gering J, Schmitt B, Coe A, et al. Taking a patient safety approach to an integration of two hospitals. Jt Comm J Qual Patient Saf. 2005;31(5):258–66. [PubMed: 15960016]

124.Day S, Dalto J, Fox J, et al. Failure mode and effects analysis as a performance improvement tool in trauma. J Trauma Nurs. 2006;13(3):111–7. [PubMed: 17052091]

125.Johnson T, Currie G, Keill P, et al. New York-Presbyterian Hospital: translating innovation into practice. Jt Comm J Qual Patient Saf. 2005;31(10):554–60. [PubMed: 16294667]

126.Aldarrab A. Application of lean Six Sigma for patients presenting with ST-elevation myocardial infarction: the Hamilton Health Sciences experience. Healthc Q. 2006;9(1):56–60. [PubMed: 16548435]
