Because of the nature of a doctor's work, self-evaluation can provide insights that performance evaluation generally doesn't offer.
Fam Pract Manag. 1998;5(5):22-34
Of a physician manager's many responsibilities, monitoring and changing physician behavior — in other words, evaluating doctors' performance — is one of the most important and most complex. How does one track and measure changes in physician behavior and the effects they have on the practice of medicine? The process doesn't lend itself easily to statistical analysis, and day-to-day observation of a doctor's practice isn't practical. Physician performance evaluation is often mentioned in lectures and articles dealing with managed care, physician compensation and the formation of physician organizations — yet it's rarely described in detail. In fact, very little published literature directly addresses the process, particularly in the journals physicians typically review.
Because of the scarcity of external resources, I developed a performance evaluation process for the seven primary care physicians and three nurse practitioners (NPs) in our group practice, which is owned by a nonprofit health system. Our need for an evaluation process was both great and immediate for reasons related to our past, present and future.
The practice has changed considerably in the last 10 years, from a walk-in clinic to a full-service primary care practice that participates extensively in managed care and provides inpatient care. In addition, the physicians and NPs now are salaried. Despite these changes, our practice had never done any systematic performance evaluation in its 20-year history. Only in the last year has there been an incentive component to physician compensation based on productivity and other performance criteria.
Our practice also faces operational issues. Morale has suffered in the past two years because of the health system's financial constraints, which have forced staff cutbacks and delayed needed operational improvements and equipment purchases. The possible acquisition of the health system and its affiliated practices (including ours) by a for-profit health care company has created uncertainty for our patients. Capitation and risk contracting have arrived in Massachusetts, but many unresolved issues remain about how salaried physicians should fit into the physician organizations formed in response to these new methods of financing health care.
My goals for developing a performance evaluation process — something every practice should have, even if it isn't facing challenges like ours — were threefold:
To identify personal goals by which to measure individual doctors' performance and practice goals that could be used for strategic planning,
To motivate the group to deal with changes that will come as a result of the external and internal issues we face,
To unify the group through a shared experience.
I also felt a personal need to do this project: to build my own skills as a physician manager. I spent 11 years in solo practice before joining this group four years ago. With this background, evaluating and managing the behavior of other doctors clearly was my weakest area.
Developing an evaluation process
I administered a work-style assessment instrument1 (based on the Myers-Briggs Type Indicator) to all our physicians and NPs, as well as two administrators who have daily responsibility for the practice. Doing so helped me understand different providers' attitudes toward work and why I might react to a certain individual in a certain way. I felt I needed this understanding so I could be as objective as possible in evaluating other providers, and later analysis of the evaluation process showed this understanding was important. (For example, before this project, I often found myself overly critical of two colleagues, and the assessment results indicated that our work types might explain many of our differences. Now I try harder to look at things from their perspective.) The assessment also revealed variety in work styles within the clinical teams and especially within our three physician-NP pairings.
During a staff meeting, we reviewed the assessment results and used nominal group process to identify and prioritize goals for the practice. (Nominal group process involves brainstorming for important issues related to a given topic, prioritizing those issues individually, compiling the group members' priorities and using those results to prioritize the issues as a group.) Reviewing the assessment results helped us understand why some staff members' goals were fairly general and others' were more concrete. This goal-setting activity didn't relate directly to the staff's self-evaluations; it was intended to give the staff a shared experience and to encourage them to think about the bigger picture of the practice's success as they prepared to evaluate themselves. Finding that our group ranked quality of care, community benefit and financial success as our top three priorities reassured me that we were a group that could work together for change.
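The compilation step of nominal group process can be sketched in code. This is a hypothetical illustration, not a tool we used: each participant ranks the brainstormed issues individually (1 = highest priority), the individual rankings are summed, and the lowest total becomes the group's top priority. The sample votes are invented, though they reproduce the ordering our group reached.

```python
def group_priorities(rankings):
    """rankings: list of dicts mapping issue -> individual rank (1 = top).

    Sums each issue's ranks across participants; a lower total rank
    means a higher group priority.
    """
    totals = {}
    for ranking in rankings:
        for issue, rank in ranking.items():
            totals[issue] = totals.get(issue, 0) + rank
    return sorted(totals, key=lambda issue: totals[issue])

# Illustrative individual rankings from three participants
votes = [
    {"quality of care": 1, "financial success": 2, "community benefit": 3},
    {"quality of care": 1, "community benefit": 2, "financial success": 3},
    {"community benefit": 1, "quality of care": 2, "financial success": 3},
]
print(group_priorities(votes))
# → ['quality of care', 'community benefit', 'financial success']
```

With larger groups, ties in total rank can be resolved by a second round of discussion and re-voting, which is how the method is usually run in practice.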
I reviewed the medical literature and was surprised at how little has been published about the design and implementation of physician performance evaluation systems. Most of the material in the past five years has appeared in American nursing journals. A few articles turned up in Canadian and British medical and nursing journals. One could almost conclude that performance evaluation for physicians must be a taboo topic, perhaps a legacy of the autonomy that doctors in this country have enjoyed in the past. Nevertheless, my research reinforced the need to develop a system, and the articles provided a starting point. In addition, I reviewed sample evaluation tools from the Academy's Fundamentals of Management program, our hospital's nursing department, my residency, a local business and a commercial software program.
Traditional performance evaluation entails an annual review by a supervisor, who uses an evaluation tool to rate individual performance in relation to a job description or other performance expectations. Organizational and personal goals form the basis of such a review. This technique has some inherent problems when the reviewer is less than objective.2 Applying this approach to the clinical practice of medicine, we find additional weaknesses. First-hand observations are impossible after residency because supervisors don't routinely observe physician-patient encounters. A supervisor would have to rely on second-hand information, which could include a disproportionate number of complaints by patients or staff. Complicating matters further, physicians' job descriptions are rarely specific enough to form the basis of measuring an individual's performance.
Newer approaches to evaluating physicians require an understanding of the principles of continuous quality improvement.2,3 When it follows these principles, performance evaluation becomes a collaborative effort among supervisors and employees to establish standards, define goals and solve problems that interfere with achieving those goals.
Although many approaches are possible, any evaluation should involve well-defined, written performance standards; an evaluation tool; and opportunity for review and feedback.4 The first of these elements is the most important. The performance standards should include a job description and defined expectations, such as targets for incentive-based compensation and established quality indicators or performance criteria. External sources of information, such as patient satisfaction surveys5,6 and utilization or outcomes data from managed care organizations, can be used to define performance standards — as long as the information is accurate. The evaluation tool may take a variety of formats depending on the performance criteria, but it must express results in an understandable way.
When this project began, our group had rudimentary productivity data, which was used in our incentive program, but this data was insufficient to form the basis of a performance standard. We hadn't yet begun to survey patient satisfaction. Our largest managed care plans provide profiling and utilization data for each provider, but it is based on claims and is too inaccurate and inconsistent to be useful.
Without established performance standards and with no model evaluation process to draw on, I decided to make self-evaluation the focus of our process. I felt this would let our providers establish baselines for themselves, and it would begin the process of establishing individual and group performance standards for the future. I also considered having office staff evaluate each provider but abandoned this as not being pertinent to my goals. Evaluation of each provider by all other providers was a possibility, but I deemed it too risky as an initial method because the providers wouldn't have had the benefit of the reading I had done. I did ask the members of our physician-NP teams to evaluate their partners. Because each team cares for a single panel of patients and works together closely, I felt their evaluations of each other would be useful.
I designed two evaluation tools. The first asked the doctors and NPs for open-ended responses to questions about several aspects of their work: professional development, relations with colleagues (those in the practice and those in other parts of the health system), efforts to achieve practice goals and operational improvements, other professional activities and barriers to satisfactory performance. (See “An open-ended self-evaluation.”) The form also asked, “Who are your customers?” to gauge our progress in focusing awareness on the importance of customer service in modern practice. In addition, the physicians and NPs were asked to list three goals for themselves and three goals for the practice. Finally, they were asked what they needed from the organization, and specifically from me as medical director, to help them succeed. The open-ended format was intended to encourage introspection and elicit detailed responses.
An open-ended self-evaluation
Here are the open-ended self-evaluation questions developed by Dr. Flood for his group practice in Foxboro, Mass.
Operational Improvement Topic for This Year's Evaluation
Who are your customers?
In the context of your role at the health center, what people would you define as your “customers”? How do you relate to them day to day? Do you relate to them differently over a longer period of time?
How did you address your customers' needs in the past year? How will that change in the coming year?
What activities have you undertaken for professional growth in the past year? Please list any organized seminars or self-study programs. Did you make other efforts to learn new skills or try new approaches to patient care? Were these activities in response to an assessment of what you needed, or were they just topics that interested you?
How do you get along with the staff at the health center? It may help to frame your response in terms of these staff groups: other doctors and nurse practitioners, nurses and medical assistants, clerical and support staff, and administrative staff. Is communication clear? Do people do what you expect? Do their expectations of you seem reasonable? If you can, please provide specific examples.
How do you get along with other colleagues in the health system? This could encompass many areas, including hospitals, the laboratory, other ancillary departments, other physician practices, etc. How much contact do you have with the various parts of the health system? Again, specific examples may be helpful to focus your reply.
Participation in practice goals and operational improvements
Over the past year, we have tried to address a number of operational and quality issues at the health center. What has your participation been in this process? Did you have input directly or through another? Were there people or resources that you thought would be helpful but couldn't access? Do you think there are other ways that you could participate in this process?
Other professional activities
What are your professional activities outside the health center? Have you gained skills or knowledge through outside activities that help you with your job here? How about hobbies or personal pursuits?
Are there barriers within the practice, or the health system as a whole, that complicate your work in any of the areas above? What would you be able to do if these barriers weren't present? Do they affect everyone in the same way or just apply to your situation?
Goals and Needs
Please think of at least three goals you would like to set for yourself for the next year. These should be relevant to your job performance or professional development. Ideally, they should be measurable and require some effort (“stretch”) on your part to achieve.
Please think of at least three goals for this practice or the health system for the coming year. Again, they should be relevant and measurable.
What do you need from this practice and from the health system? What could be done to help you better achieve the goals you mentioned above, as well as do your job better?
What can I do as medical director to help you perform your job and accomplish the goals you set?
The second tool was a checklist asking the providers to rate themselves on a five-point scale in each of eight areas — knowledge and skill in practice, dependability, patient relations, commitment to the organization, efficiency and organizational skills, overall quality, productivity and teamwork — and to identify a few personal strengths and weaknesses. (See “A self-evaluation checklist.”) For my own checklist as medical director, I added two more attributes: leadership and the ability to manage people.
A self-evaluation checklist
The practice's self-evaluation checklist asks providers to use a five-point scale to rate their performance in eight areas, and it asks two open-ended questions about individual strengths and weaknesses.
| | Unsatisfactory | Needs improvement | Meets job requirements and expectations | Exceeds job requirements and expectations | Outstanding |
|---|---|---|---|---|---|
| Rate your level of skill and knowledge as it relates to your position. Take into account efforts to keep abreast of new developments and your appropriate use of resources. | 1 | 2 | 3 | 4 | 5 |
| Rate your level of dependability. Consider such things as your availability, punctuality and commitment to colleagues and staff. | 1 | 2 | 3 | 4 | 5 |
| Rate your skills in patient relations. Take into account the effectiveness of your communications, your courtesy and how promptly you respond to patient needs. | 1 | 2 | 3 | 4 | 5 |
| Rate your commitment to the organization. Consider this to mean the practice, its goals and procedures (not the health system as a whole). | 1 | 2 | 3 | 4 | 5 |
| Rate your efficiency and ability to organize your work. Take into account managing time, meeting objectives, prioritizing and integrating change. | 1 | 2 | 3 | 4 | 5 |
| Rate the level of overall quality you deliver to the workplace. Consider such attributes as thoroughness and accuracy, as well as efforts to implement quality improvement. | 1 | 2 | 3 | 4 | 5 |
| Rate your productivity. | 1 | 2 | 3 | 4 | 5 |
| Rate your level of teamwork. Take into account your contributions to a positive team spirit, openness to others' views and commitment to team success (as opposed to individual success). | 1 | 2 | 3 | 4 | 5 |

Please mention a few specific positive attributes that you bring to your work.

Please mention one or two areas that might need improvement.
Both tools were given to the providers with a cover letter about my Fundamentals of Management project and my goals for it. I explained that this was merely a first attempt to develop self-evaluation tools. (Although the other staff members didn't have direct input into developing the tools, I don't think it affected their willingness to take part in the process.) The physician-NP teams also received checklist evaluations to complete about each other. The providers were asked to complete the assessments confidentially and objectively and return them in two weeks (actually, they came in over two months). Before seeing any of the self-evaluations, I completed checklist evaluations for all the providers, and I did so over one weekend to improve the consistency of my responses.
I reviewed each provider's open-ended responses and summarized them in preparation for one-on-one meetings. With my summary, I also listed the provider's personal goals, practice goals, perceived barriers and needs. I compared each provider's checklist responses and total score with mine and, for the physician-NP teams, with those of each provider's partner. I also examined how many attributes had the same rating between observers (concordance) and how many had a higher or lower rating between observers (variance).
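The concordance and variance tally described above is simple arithmetic, and a short sketch may make it concrete. This is an illustration under assumed data, not the actual analysis: given two raters' scores on the eight checklist attributes (each on the five-point scale), it counts how many attributes received the same rating and how many the second rater scored higher or lower.

```python
def compare_ratings(self_scores, other_scores):
    """Compare two raters' scores attribute by attribute.

    Returns counts of concordant ratings (identical scores) and of
    attributes the second rater scored higher or lower, plus the
    total score from each rater.
    """
    concordant = sum(1 for s, o in zip(self_scores, other_scores) if s == o)
    higher = sum(1 for s, o in zip(self_scores, other_scores) if o > s)
    lower = sum(1 for s, o in zip(self_scores, other_scores) if o < s)
    return {"concordant": concordant, "higher": higher, "lower": lower,
            "self_total": sum(self_scores), "other_total": sum(other_scores)}

# Illustrative scores for the eight attributes (knowledge, dependability,
# patient relations, commitment, efficiency, overall quality,
# productivity, teamwork) -- invented data, not a real provider's results
self_eval = [3, 4, 3, 4, 3, 3, 3, 4]
my_rating = [4, 4, 4, 4, 3, 4, 3, 4]
print(compare_ratings(self_eval, my_rating))
# → {'concordant': 5, 'higher': 3, 'lower': 0, 'self_total': 27, 'other_total': 30}
```

In this invented example the reviewer's total is higher than the self-evaluation, the same direction of variance the article reports for most of the providers.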
The comparisons were interesting. In seven out of nine cases, including all three NPs, the physicians' and NPs' self-evaluations were lower than my ratings of them. Likewise, in the three physician-NP pairings, all the providers rated their partners higher than themselves. This pattern implies a level of honesty suggesting that self-evaluation can produce valid information.
The degree of concordance was another matter. Concordance tended to be higher when the work-type assessment results were similar and lower when the work types were different. This held true for comparisons of my ratings with self-evaluations as well as for comparisons of self-evaluations and ratings by partners in physician-NP teams.
I then met for about 30 minutes with each provider to review his or her evaluations and productivity data. (The available productivity data was a summary of each physician's or NP's contribution to our quarterly total RVU values of billed services, comparing each individual with his or her peers in the practice and with national averages.) We reviewed the responses to both evaluation tools, but we focused on their answers to the open-ended questions. I noted each provider's perceived barriers and needs so that we could address them in the future. We discussed and reinforced each provider's personal goals, and I compiled a list of all the providers' practice goals for discussion at a future staff meeting. This phase of the evaluation process didn't produce results that are readily measurable or reportable, but it did begin communication about performance, particularly the “new” notion that customer service and patient satisfaction are as important as productivity and clinical competence when it comes to personal and practice goals.
Finally, I asked each provider for feedback about the process and suggestions for improvement. Many commented on the time needed to complete a written self-evaluation and the difficulty of the task (e.g., “I never did well on essay tests”). Several providers pointed out the importance of the process and the likelihood that it would increase the staff's professionalism. All the providers considered the checklist easier to fill out, and of course its data was more quantifiable. The providers considered the goal setting a good idea and regarded the overall process as thought-provoking.
After these individual reviews, the group met to review the practice goals identified in the open-ended self-evaluation. We recognized that they could be summarized in a few broad categories: improving access and productivity, increasing attention to patient satisfaction and improving office operations. As a result, we decided to open the practice to new patients and move forward with plans for a new information system for registration and billing. We also agreed to use specific targets for productivity (quarterly billed RVUs) and patient satisfaction scores in our incentive compensation formula.
Traditional performance evaluation doesn't work well in modern medicine. But an ongoing evaluation process based on continuous quality improvement can facilitate collaboration among providers, enhance communication, develop goals, identify problems (which then become opportunities) and improve overall performance.
When you begin a performance evaluation process, you must establish a baseline and then collaboratively define the individual performance standards. Self-evaluation can produce honest appraisals and contribute meaningful information for this initial phase. Self-evaluations should be balanced by measurable data about productivity and the effectiveness of the physician-patient encounter. Since encounters can't be observed directly, measurements of patient satisfaction, outcomes and quality indicators serve as useful proxies. These elements — self-evaluations as well as quantitative data on productivity, patient satisfaction, and patient outcomes — are the minimum elements that should be used to define performance standards.
Creating and carrying out a performance evaluation process is hard work. The tools I developed were a good first effort, but they took too long for the providers to complete. Self-evaluation tools should be administered and reviewed in a relatively short time to enhance the feedback and goal setting that results. In the future, I plan to incorporate features of both tools into a single checklist with expanded areas for making comments and listing goals and needs.
As a group, we still have to agree on the performance standards for the next review. I also hope to have better data on productivity and patient satisfaction to share with the group for that process. And we must analyze the results of all our measurements regularly to identify the improvements we make and the goals we meet. Through this process, our group will increase the value we offer our patients and our providers.