Clinical policies are receiving a lot of attention these days for their ability to help health care providers and organizations standardize care according to “best practices.” These sets of recommendations for the care of patients with specific conditions can give doctors an evidence-based approach to making common clinical decisions — or at least the best ones do. As group practices, physician networks, hospitals and health care plans increasingly work toward adopting clinical policies, the doctors who are involved in that process need to be able to separate the wheat from the chaff. And once you've determined that a particular policy ranks above others for patients with a particular condition, you need to be able to move the policy off the shelf and into patient encounters.
Finding good clinical policies is only a first step; to improve care, they must be evaluated and implemented skillfully.
The most critical part of evaluating a clinical policy is examining the evidence that supports it.
Also consider a policy's relevance, its specificity, its target population, its readiness for implementation and any inherent biases.
Before implementation, build consensus for the policy, adapt it to meet local needs, plan for its evaluation, pilot test it and revise it.
In the February issue of FPM, we offered suggestions about the best sources of clinical policies for family practice (“Where to Look for Good Clinical Policies”). Now we'll provide the next steps in the process of making clinical policies part of practice: evaluation and implementation. Following these frameworks can make the difference between a clinical policy that actually improves care and reduces its cost and one that simply joins the ranks of other good intentions.
Evaluating a clinical policy
Implementing a clinical policy has the potential to change physicians' practice patterns significantly, so critical evaluation of competing policies is a step you really can't afford to slight. That being said, you can narrow the competition by looking to high-quality sources, including many of those we discussed in last month's issue. The AAFP, for example, has used a rigorously evidence-based approach in developing its three policies. The 19 clinical policies developed by the government's Agency for Health Care Policy and Research (AHCPR) also are widely considered to be trustworthy, though in need of modification to meet local needs effectively. So too are the recommendations of the United States Preventive Services Task Force, which appear in the Guide to Clinical Preventive Services.
One helpful policy source also enables you to do at least a quick evaluation of the policies you're considering. The AHCPR's online National Guideline Clearinghouse lets users compare policies based on more than 20 criteria, including what clinical outcomes the developers considered, how the developers collected evidence in support of the policy and how they assessed the quality and strength of that evidence.
Still, a committee considering whether to adopt given clinical policies would be well-advised to evaluate them in a rigorous and standardized way, much in the same spirit that the best policies themselves reflect. We recommend a framework for evaluation that judges a clinical policy on six criteria: its relevance to family medicine, the specificity of the target condition and recommended interventions, the definition of the target population, the quality of the evidence on which the policy is based, the biases the policy reflects and its readiness for implementation.
1. Is it relevant to family practice?
The starting point for your evaluation of a clinical policy is its relevance for you and your colleagues. Your answers to five questions will help you in your judgment:
Does the policy address a problem of significance for family physicians and their patients?
Does the policy address the problem as family physicians see it?
Does the policy include interventions that family physicians normally perform or could perform?
Does the policy aim to improve clinical outcomes?
Does the policy support (not substitute for) a family physician's judgment?
2. Are the condition and interventions specific?
Clinical policies address particular decisions involved in caring for patients with certain problems or conditions. Is the target condition well-defined in the policy you're considering? Are the policy's recommended interventions stated clearly and specifically? Are alternative interventions also stated clearly? Does the policy address all the alternative interventions of interest to family physicians?
3. Is the target population well-defined?
Clinical policies also relate to specific populations, which might be defined by age, gender or other characteristics. In the policy you're evaluating, is the target population, including its limitations and exclusions, clear? Although you must know precisely for whom the policy was written to be able to understand it well, the best policies can also be applied, perhaps with modifications, to other populations. Will this be possible, or does the target population limit the policy's generalizability?
4. How good is the policy's evidence?
Perhaps the most important criteria on which to judge a clinical policy are the quality of the evidence supporting it and the rigor of the developers' review of that evidence. Most of the clinical policies published in recent years are not based on evidence but rather on the opinions of groups of “experts.” Expert opinion is subject to all sorts of biases, however, and clinical policies based on anything less than a good review of existing evidence should be approached with considerable caution.
The methods policy developers use to assemble, summarize and present their evidence are central to the quality of a proposed clinical policy. Appropriate questions about the developers' evidence include these:
Was the literature review restricted to randomized, controlled clinical trials, or did it include a broader selection of research designs? The family medicine or primary care literature on many topics includes a wide variety of designs, and restricting a literature review to controlled trials may exclude some evidence derived from primary care sources. On the other hand, the controlled trial is the gold standard against which other designs are judged, and an evidence review restricted to trials could be considered state of the art.
Did the literature review involve several different search systems, or did it depend on a single search system like MEDLINE? Did it involve searching the bibliographies of key articles? Did it use other sources of previously assembled studies, such as the Cochrane Collaboration's annotated collections of clinical trials? Did it cover a sufficiently wide time span?
What is the quality of the developers' evidence review? Does it include a detailed critique of certain key papers or the relevant literature in general? Does it comment on the methods generally used in research on the topic, such as the usual definitions of the condition, usual outcome measures or attempts to standardize research methods? How does the review summarize the results of the reviewed research? Does it include meta-analyses where applicable or indicate why meta-analyses aren't included?
Does the evidence review address the strength of the evidence? Two issues are embedded in the question of strength: effect size and strength of the literature. Effect size is the magnitude of the effect of the evaluated intervention on patients' outcomes. The strength of the literature may be evaluated by the number of papers from which the effect-size measure is drawn or the number of papers that agree or disagree with the effect-size measure.
Does the evidence review include a measure of patients' preferences? If so, how were those preferences assessed?
Does the review include a measure of the costs or cost-effectiveness of the interventions? If so, which costs are included? Are they accurately assessed? Are the technical issues of discounting (accounting for the future value of money after inflation) and indirect costs handled well?
Finally, if another group started out with the same question, do you think it would find the same evidence?
No review of available literature will be perfect, and you should trust your own reactions to the review. However, if the methods by which the evidence was assembled were adequate, then your answers to nearly all of these questions should give you confidence about the clinical policy.
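The effect-size question above can be made concrete with a small sketch. The event rates below are hypothetical, chosen only to show how three common effect-size measures relate to one another; real figures would come from the policy's evidence tables:

```python
# Hypothetical numbers for illustration only: suppose a trial reports that
# 10% of control patients and 6% of treated patients had a poor outcome.
control_event_rate = 0.10
treated_event_rate = 0.06

# Absolute risk reduction (ARR): the simplest measure of effect size.
arr = control_event_rate - treated_event_rate

# Relative risk reduction (RRR): the ARR as a fraction of the control rate.
rrr = arr / control_event_rate

# Number needed to treat (NNT): patients treated to prevent one poor outcome.
nnt = 1 / arr

print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
```

Note how the same trial can be summarized as a modest 4-percentage-point absolute reduction or a more impressive-sounding 40 percent relative reduction; a careful evidence review reports both.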
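Discounting, raised in the cost question above, can also be sketched briefly. The $1,000 annual cost and 3 percent discount rate here are assumptions for illustration, not figures from any policy:

```python
# Illustrative sketch: present value of a recurring treatment cost.
# Both the cost figure and the discount rate are hypothetical.
annual_cost = 1000.0   # cost per year of a sustained intervention
discount_rate = 0.03   # a rate often used in cost-effectiveness analyses
years = 5

# Discount each future year's cost back to today's dollars:
# a cost incurred t years from now is divided by (1 + rate)^t.
present_value = sum(
    annual_cost / (1 + discount_rate) ** t for t in range(1, years + 1)
)
print(f"Present value of {years} years of costs: ${present_value:,.2f}")
```

Five years of nominal $5,000 in costs is worth somewhat less in today's dollars; a review that ignores discounting will overstate the cost of interventions whose expenses fall far in the future.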
Clinical-policy developers often present their evidence in two related forms: evidence tables and balance sheets. Evidence tables summarize the evidence. An individual table might present the evidence for a particular intervention, and a set of tables might be needed to present all the evidence for all the interventions being considered. Balance sheets summarize the policy's evidence tables, synthesizing the key points of evidence in a final summary (see “Evidence tables and balance sheets”).
As a critic of a clinical policy, look at the completeness of the evidence tables. Do they provide the information you want to see in support of the policy? What questions are not answered by the evidence? Are questions unanswered because the evidence is simply insufficient, or did the policy's developers fail to seek out relevant evidence? Are the balance sheets clear, concise and understandable? Do they summarize the evidence fairly? Are all the important outcomes presented? Do they describe the magnitudes of the outcomes and the ranges of uncertainty in these measures? Are they useful to you as you consider adopting the clinical policy?
Evidence tables and balance sheets
An evidence table in a clinical policy typically includes a line summarizing the data reported in each significant paper supporting the policy. Important information might include the type of study, the size of the study population, the interventions compared, key factors affecting validity, the study's outcome and the effect size of the intervention.
A policy's balance sheet, which summarizes the evidence tables, lists the benefits and risks of each intervention (an example is partially reproduced here). It might quantify likely outcomes by citing the probability or the magnitude of benefits and risks. It might also include the cost of the interventions.
Direct treatment outcomes

1. Chance for improvement of symptoms (90% confidence interval)
2. Degree of symptom improvement (percent reduction in symptom score)
3. Morbidity/complications associated with surgical or medical treatment (90% confidence interval; about 20% of all complications assumed to be significant): 1–5% complications from BPH progression
4. Chance of dying within 30–90 days of treatment (90% confidence interval): 0.72–9.78% (high-risk/elderly patients); 0.8% chance of death ≤90 days for a 67-year-old man
5. Risk of total urinary incontinence (90% confidence interval): incontinence associated with aging
5. What biases does the policy reflect?
Clinical policies aren't brought down from Mount Sinai; they're written by panels or committees of fallible people who have some relevant expertise. So that you can understand the biases that underlie a policy before you adopt it, you should evaluate the makeup of the panel, how it reached its decisions (particularly when evidence was lacking) and who sponsored its work.
Panels typically include generalist physicians (general internists, pediatricians or family physicians) and specialists in relevant areas. But because panels must include a variety of specialists as well as people with expertise in literature searching and statistics, generalist physicians usually are in the minority. For any given policy, ask yourself whether the panel included people with the appropriate expertise and a reasonable balance of generalists and specialists. Many specialists see patients who differ considerably from the patients seen in primary care. For example, the depressed patients psychiatrists treat are much more likely to be severely ill, need hospitalization and meet the full criteria for major depressive disorder. So a clinical policy for depression written by a group of psychiatrists will focus on the types of patients they see and may have limited usefulness for family physicians.
One of the most difficult issues for any clinical policy panel is that, in some part of its work, evidence is usually lacking or conflicting. When you evaluate a policy, it's crucial to understand how the panel reached its decisions when opinion rather than data was the primary influence. Panels generally either reach consensus or let the majority rule on such questions. Some panels publish minority reports. Some resolve disagreements by considering patient preferences; in this situation, you should know how the patient-preference data were collected and analyzed. When panel members disagree, you may find it instructive to determine on which side of the issue the family doctors and other generalists stood.
The sponsoring agency is another potential source of bias. For example, specialty societies may sponsor clinical policies that benefit their members in subtle ways. Pharmaceutical companies may sponsor policies that help sell their products or that advocate drugs in general as solutions to problems that could be managed with other interventions. Sponsorship may be disguised or de-emphasized in written policies. As is the case with any medical literature, the sources of support for the policy may speak volumes about bias.
6. Is the policy ready to implement?
A final criterion for evaluation is how well the clinical policy's authors prepared it for implementation. Here are some questions to consider:
Has the policy been pilot tested? If so, in what settings and with what populations? How is the test setting like and unlike your setting?
Has the policy undergone a clinical trial or other formal evaluation? If so, what were the results?
Has the policy been developed with computerization in mind, and (if your practice uses computerized patient records) could your computer support staff program your system to include the policy? Would it add significantly to your system's ability to support care?
How big an administrative burden would implementing the policy impose on your practice or organization?
A framework for implementation
As difficult as it is to write a good policy, it's much harder to get the policy adopted widely. We suspect the process is analogous to other sorts of behavior change. One way to conceptualize behavior change is the readiness-for-change model, which includes stages of precontemplation, contemplation, preparation, action and maintenance. It makes sense that people are more likely to adopt change when they have thought about and prepared for it.
In the discussion that follows, we assume that your panel has decided to adopt a particular clinical policy for your practice, hospital or other organization. We also assume that your choice of the policy was based on a valid process of setting priorities for your organization, perhaps related to the prevalence of the problem locally. Finally, we assume that most of the people who will be affected by the policy agree with the decision to adopt it.
Here is our best estimation of the right implementation process to follow, as well as a caveat: The process is based on our experience, but it hasn't been subjected to rigorous testing.
1. Assemble a team and build consensus
It's important to involve all stakeholders (everyone with a special interest in the problem or its solution) from the outset. Doing so helps prevent later disagreements and gives the key individuals a chance to develop a sense of ownership of the policy as it's refined and implemented.
The best implementation teams are made up of doctors, administrators (in large organizations), staff members and patients. It's important to remember that the stakeholders who have interest in the problem include people who actually have the problem. You may want your team to include patients whose experiences cover the broad scope of the policy. For example, a team working on a mammography policy might include asymptomatic women as well as women ages 35 to 50 who have had abnormal mammograms. In many respects, patients are the most important stakeholders, and including them from the outset provides an automatic way to build their preferences into the policy's implementation.
Any member of the team (other than a patient) can serve as its leader. The right choice depends on the focus of the policy and the interpersonal environment within the practice or organization. To implement a given policy in a given setting, a doctor may be the best leader; in another situation, an administrator or a nurse might be a better choice.
2. Document your clinical processes
Before you start to change how you do something, it's best to understand in detail how you do it now. In particular, you need to note in writing all the facets of the current care process that would be changed with adoption of the new clinical policy. In doing so, you will find elements of that process you didn't fully understand and others that were completely hidden from view. Until you understand the original process fully, you will have difficulty changing it.
3. Modify the policy for local use
Because the clinical policy you've adopted will be generic, it will serve your practice or organization better if you adapt it to local conditions. In writing the local version of the policy, focus on complementing physician judgment and improving care. Avoid language that sounds punitive to any stakeholder.
Develop a standard format for all the forms and other materials that doctors and staff members will use as they put the policy into action. For example, creating flow sheets for the medical record may be very helpful.
4. Communicate changes in processes
All policy-related changes in the way you've been providing care must be made clear to those who will be implementing them, particularly to doctors.
As your team develops new processes and procedures, remember what works and what doesn't in changing physician behavior. Overall, a combination of interventions to reinforce the change will be more effective than a single tactic. The most effective interventions are reminder systems and similar modifications to office procedures, as well as academic detailing (academic detailing involves a peer educating physicians and other providers about new policies or procedures; it's modeled on the activities of commercial pharmaceutical “detail reps”). Financial incentives also work to effect change. Chart audits and feedback are somewhat effective at changing physician behavior, particularly if they are concurrent, targeted to specific providers and delivered by peers or opinion leaders. Traditional tactics such as didactic presentations, continuing medical education and mailings are the least effective.
5. Decide how to evaluate the policy
Just as you evaluated the clinical policy, you need to evaluate your implementation of it. Establish benchmarks and key measures of quality. Identify both intermediate and ultimate goals for your implementation effort. Plan an evaluation program that includes realistic deadlines; don't overestimate your system's ability to make a complex change rapidly.
6. Test the changes or interventions
A lesson learned from complex research projects is that every change benefits from pilot testing. Patients, staff and physicians will identify obstacles to implementing the clinical policy that the team couldn't have anticipated. Make indicated changes in the policy, and test it again — and again. With several rounds of pilot testing, you can identify and solve the vast majority of implementation problems.
7. Adopt the revised clinical policy
In one sense, adoption of the policy is a one-time announcement following pilot testing. But in a larger sense, it's a process that each physician and staff member will go through at a slightly different pace. Each individual first becomes aware of the policy, then develops intellectual agreement with it, then makes a personal decision to adopt it and ultimately adheres to it at the appropriate times. So don't expect formal adoption to mean actual adoption immediately.
8. Evaluate the new policy
Based on the plan your team developed earlier, evaluate the effects of the clinical policy on your practice or organization. Use the data from your evaluation to fine-tune the policy; resist the assumption that a policy, once adopted, is set in stone.
Don't shoot for the stars
It's important to have realistic expectations about making clinical policies part of your practice. No single policy is likely to produce dramatic improvements in your clinical or financial performance. By the same token, overloading your practice or organization with a number of new clinical policies at once is likely to stress both your systems and your people beyond what they can bear. Incremental changes may not be flashy, but they're the best hope we have of making solid improvements in care — and of reaping the benefits of those improvements for the long run.