Consider the following scenario: Every day before work, you stop at a neighborhood coffee shop and order a cup of coffee with cream and no sugar. This scenario repeats itself for years without incident. Then one day, after you drive away with your coffee, you notice that the workers forgot the cream and put in sugar instead. The next day you mention the mistake, and the coffee shop manager apologizes, so you pick up the coffee as usual. Two weeks later the same mistake happens: sugar, no cream. You are mildly annoyed and express your dissatisfaction, so the manager apologizes again and refunds your money. All is well for a month, and then it happens again. If you are like most people, you would probably start looking for a new coffee outlet. The reliability of your former coffee shop is no longer high enough to justify patronizing it.
Reliability has been defined as “the capability of a process, procedure or health service to perform its intended function in the required time under existing conditions.”1 In other words, how often does something (like getting cream in your coffee) happen when it is supposed to happen? The science of reliability has developed out of industries such as nuclear power and aviation, which place great emphasis on reducing error rates to protect the public. It has been applied to health care only recently but is quickly being recognized as one of the underpinnings of quality improvement.
Reliability can be quantified as the number of actions that achieve the intended result divided by the total number of actions taken. Using our coffee example, say you have purchased 1,000 cups of coffee over the past four years. In three instances, you did not receive your expected outcome, so the reliability would be (1,000 − 3)/1,000 = 0.997, or a rate of 99.7 percent. The defect rate, or unreliability, equals 1 minus the reliability (1 − 0.997 = 0.003 or 0.3 percent).
The defect rate is often expressed as an index based on an order of magnitude. For example, 10^-1 means that approximately 1 time in 10, the action fails to achieve the intended result. In our coffee example, the defect rate would be in the range of 10^-3, which might be tolerable in this situation but tragic in high-stakes situations, such as commercial airline flight or heart surgery. The table shows a variety of defect rates and how to express them. This nomenclature is useful in health care as we try to quantify our error rate for common procedures or actions we carry out on a daily basis.
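The reliability arithmetic above can be sketched in a few lines of Python (a minimal illustration; the function names are ours, not part of any standard library):

```python
import math

def reliability(successes: int, total: int) -> float:
    """Fraction of actions that achieved the intended result."""
    return successes / total

def defect_rate(successes: int, total: int) -> float:
    """Unreliability: 1 minus the reliability."""
    return 1 - reliability(successes, total)

def order_of_magnitude(rate: float) -> int:
    """Express a defect rate as a power of 10, e.g. 0.003 -> -3."""
    return math.floor(math.log10(rate))

# The coffee example: 3 defects in 1,000 cups over four years.
cups, defects = 1000, 3
r = reliability(cups - defects, cups)   # 0.997
d = defect_rate(cups - defects, cups)   # about 0.003
print(f"reliability {r:.3f}, defect rate {d:.3f}, order 10^{order_of_magnitude(d)}")
```

Running this reproduces the coffee-shop figures: a reliability of 0.997, a defect rate of 0.003, and a defect rate in the 10^-3 range.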
While the health care industry achieves high rates of reliability in certain areas, research suggests that our overall reliability rate is not what it should be. In a recent study,2 researchers surveyed patients in 12 major metropolitan areas across the United States to see whether doctors were complying with well-accepted indicators of quality of care. They systematically reviewed charts and did follow-up studies aimed at measuring performance on 439 indicators of care for 30 major medical conditions, both acute and chronic, as well as preventive care measures.
Their sobering overall conclusion was that only about 55 percent of the time were patients receiving the care that evidence-based medicine leads us to believe is quality care. In other words, the likelihood that our patients are receiving care that meets the commonly accepted standard we espouse in our literature is roughly equal to the flip of a coin. We have an across-the-board defect rate no better than 10^-1. Not all of these defects are life threatening, but they leave much room for improvement.
THE RELIABILITY SPECTRUM
The following table shows a range of reliability rates, along with their corresponding defect rates. An activity with a 10 percent (10^-1) defect rate would be considered unreliable, while an activity with a 0.0001 percent (10^-6) defect rate would be considered extremely reliable.
| Reliability | Defect rate (percentage) | Defect rate (power of 10) | Examples |
| --- | --- | --- | --- |
| 0.9 | 10 percent | 10^-1 | Failure to prescribe beta blockers post-myocardial infarction |
| 0.99 | 1 percent | 10^-2 | Adverse events in hospitals |
| 0.999 | 0.1 percent | 10^-3 | General surgery deaths |
| 0.9999 | 0.01 percent | 10^-4 | Deaths in routine anesthesia |
| 0.99999 | 0.001 percent | 10^-5 | Giving the wrong blood to a patient during a transfusion |
How can reliability science help?
Currently we achieve our 10^-1 defect rate in health care through a combination of vigilance, good intent and hard work.3 This is slightly better than a system in chaos. To raise our level of performance, we need to design and implement processes that make it easy to do the right thing at the right time.
Experts in reliability science and human factors, such as French scientist René Amalberti, have found that “unconstrained” human performance, guided solely by personal discretion, results in defect rates worse than 10^-2 (more than one failure per 100 actions).1 To achieve better than this, professionals must add a measure of discipline, or “constrained” human performance.
Many tactics exist for achieving this,4 including the following:
1. Standardize your approach. Having a well-defined process for each activity in your practice (from rooming patients to conducting diabetes check-ups) helps ensure that the proper steps are taken each time, regardless of who is doing the work. Where possible, processes should be standardized around evidence-based quality indicators. This is especially important in chronic disease care. For example, a group of physicians could review widely accepted guidelines for the treatment of hypertension (such as those from the Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation and Treatment of High Blood Pressure; http://www.nhlbi.nih.gov/guidelines/hypertension/jnc7card.htm) and translate them into protocols or standing order sets.
Standardization may also involve arranging your exam rooms in the same way so that physicians and nurses can easily locate the forms and supplies they need at the moment they need them.
2. Build decision aids and reminders into your systems. Checklists, flowsheets and other tools can help prompt physicians and staff to follow the standardized processes that have been developed. These tools are particularly effective if they are built into a practice’s systems and become automatic. For example, imagine seeing a patient with diabetes in your office, and as you open his or her chart on the computer it automatically reminds you to draw certain overdue laboratory tests. Such technology is available. Simpler reminder systems may involve measures such as posting a sign in your exam room asking all patients with diabetes to remove their shoes, which in turn reminds you to conduct needed foot exams.
3. Take advantage of pre-existing habits and patterns. Not everything we do needs to be discarded, only those things that don’t consistently lead to the desired level of reliability. Where possible, use established behaviors within your group to improve reliability, as it will require less training and be more effective. For example, if your nursing staff is in the habit of attaching a diabetes encounter form to the chart of any patient being seen for a diabetes checkup, take advantage of that habit and extend it to other chronic diseases.
4. Make the desired action the default, rather than the exception. For example, in our clinic we routinely take blood pressure measurements on all adult patients who come in, no matter the reason for the visit. By applying this across the board and not leaving it to the discretion of the physician, we regularly achieve greater than 98 percent reliability.
5. Create redundancy. Redundancy may sound like something you would want to avoid, but when used strategically, it can act as a filter to decrease errors. For example, to avoid dosage errors, particularly in young patients, it’s important to have an accurate record of their weight. To avoid incorrect measurements, the nurse can state the patient’s weight as he or she records it and then ask the patient (or parent), “Is that what you expected?” By using the patient as a redundancy check, the nurse can simply and inexpensively prevent common errors.
6. Bundle related tasks. “Bundling” involves taking a series of tasks that are closely related in time and space and packaging them together into one overall job to complete. For instance, the Institute for Healthcare Improvement promotes a respiratory bundle as a protocol for all patients on a respirator. Items in this bundle include raising the head of the bed to 30 degrees, daily interruption of sedative infusions, deep vein thrombosis prophylaxis, stress ulcer prophylaxis, antimicrobial hand foam available at the bedside and respiratory therapy assessment for weaning readiness. Together, these grouped procedures have achieved superior outcomes, decreasing both morbidity from ventilator-associated pneumonia and total time on a respirator.5
In our practice, we have trained our nurses to “bundle” together immunizations for children at certain ages so that all the recommended shots are given by the age of two. Group visits for outpatient care of chronic diseases are another example of bundling like tasks to improve the quality of care.
7. Encourage teamwork, feedback and training. While systems are important in the pursuit of reliability, people and how they interact are highly important as well. A practice’s leadership must work to develop cooperative relationships among staff, offer feedback on performance and provide the necessary training for workers.
WHERE TO BEGIN
To increase the reliability of your practice, begin with just one or two key processes and apply the principles described in this article. Potential areas of improvement include the following:
Providing recommended care for specific chronic diseases (e.g., more than three glycosylated hemoglobin tests every two years for patients with diabetes),
Prescribing beta-blockers for patients following acute myocardial infarction,
Providing specific preventive services at the recommended times,
Avoiding harmful drug interactions,
Notifying patients of test results,
Seeing patients at their scheduled time,
Reducing no-show rates,
Reducing referral difficulties.
Prevent, identify, mitigate
A three-step design has been developed to help improve patient safety in our workplaces.4 It requires that we, first, prevent failure by redesigning our systems, following the principles described above. Currently, most of our systems are designed with provider convenience, cost or other factors in mind and with little attention paid to consciously preventing failure.
Next, we must identify errors as they happen so that we can address them efficiently and quickly. Common reasons for errors include operator fatigue, environmental conditions and distractions, poor task design, psychological conditions and competing demands. In many organizations, the tendency is to cover up errors out of fear of retribution, or to work around them. The better approach would be to design procedures that make failures visible as they happen so that they can be intercepted before causing harm.
Electronic systems can assist in this. For example, some systems will give the physician a warning of potential incompatibility when a new drug is prescribed. Identifying such errors as they happen allows physicians the chance to correct them before any harm is done.
Finally, we must understand that failures will occur and, therefore, we must attempt to mitigate harm in the event of a failure. For example, in a redesigned process at one hospital,6 if a radiologist doing a quality control check found that an emergency physician had misinterpreted a radiograph, the mitigation step was simply to call the patient and ask him or her to return to the emergency department for treatment. Similar mitigation steps can be developed for office-based processes.
Preserving personalized care
Physicians may be concerned that the brand of medicine outlined above would become “cookbook” and reduce the art of medicine. The craftsmanship and individuality that compel us to go the extra mile for our patients can, however, be reconciled with order sheets and engineered default options.
William Miller and colleagues7 have described the coexistence of standardization and variation as “practice jazz.” They decry “inflexible standardization” that is “poorly responsive to the needs of different practices’ diverse agents and to the almost constant situations of uncertainty, contextual uniqueness and surprise that occur in the practices.” They go on to say “it is critical to differentiate the variations that are sources of error from the variations due to the dynamics of relationships.”
With that understanding, we can use reliability science to form the infrastructure of our quality improvement efforts and to foster superior, evidence-based care that meets the varied psychosocial needs of our patients. And at the end of the day, we can get a cup of good coffee while we enjoy the jazz.