Am Fam Physician. 2011;83(10):1160-1162

Author disclosure: Nothing to disclose.

Purpose
In AFP Journal Club, three presenters review an interesting journal article in a conversational manner. These articles involve “hot topics” that affect family physicians or “bust” commonly held medical myths. The presenters give their opinions about the clinical value of the individual study discussed. The opinions reflect the views of the presenters, not those of AFP or the AAFP.
This Month's Article
Englund M, Guermazi A, Gale D, et al. Incidental meniscal findings on knee MRI in middle-aged and elderly persons. N Engl J Med. 2008;359(11):1108–1115.

Can you make the assumption that, in a patient with knee pain, meniscal damage found on MRI is responsible for his or her symptoms?

What does this article say?

Mark: This was a study of Framingham patients older than 50 years who were randomly selected to have magnetic resonance imaging (MRI) of their knees without regard to whether they had knee pain. Overall, 1,039 patients were screened, and of these, 991 had a readable MRI. All MRI films were read by one person with a background in orthopedics who looked exclusively at the right knee. If the reader wasn't certain about the results, a musculoskeletal radiologist overread the film. Neither reader had knowledge of the patient's clinical history (having or not having knee pain). This is important because if the readers know the patient's clinical history ahead of time, they will often change their reading (review bias). Interobserver agreement (kappa) was 0.72. Plain radiography was also performed in 963 patients, and these films were read by a single musculoskeletal radiologist (kappa = 0.83).

Kappa is a calculation of interobserver agreement beyond chance alone. For example, if two radiologists read the same film, what is the likelihood they will agree on the reading more than by chance alone? Kappa is usually scored as follows: 0 = no agreement; 0 to 0.2 = slight agreement; 0.2 to 0.4 = fair agreement; 0.4 to 0.6 = moderate agreement; 0.6 to 0.8 = substantial agreement; and 0.8 to 1 = almost perfect agreement.
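
For readers who want to see the arithmetic behind kappa, here is a minimal sketch in Python. The counts are hypothetical and are not taken from the study; the point is simply that kappa is observed agreement minus chance agreement, scaled by the maximum possible agreement beyond chance.

```python
# Illustrative only: Cohen's kappa for two readers classifying 100 films
# as "tear" or "no tear." The counts are hypothetical, not from the study.

both_tear = 40        # both readers call "tear"
both_no_tear = 40     # both readers call "no tear"
r1_tear_r2_no = 10    # reader 1 says "tear," reader 2 says "no tear"
r1_no_r2_tear = 10    # reader 1 says "no tear," reader 2 says "tear"
n = both_tear + both_no_tear + r1_tear_r2_no + r1_no_r2_tear

# Observed agreement: the proportion of films on which the readers agree
p_observed = (both_tear + both_no_tear) / n

# Chance agreement, based on each reader's overall rate of calling "tear"
r1_tear_rate = (both_tear + r1_tear_r2_no) / n
r2_tear_rate = (both_tear + r1_no_r2_tear) / n
p_chance = r1_tear_rate * r2_tear_rate + (1 - r1_tear_rate) * (1 - r2_tear_rate)

# Kappa: agreement beyond chance, relative to the most agreement possible beyond chance
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"observed = {p_observed:.2f}, chance = {p_chance:.2f}, kappa = {kappa:.2f}")
```

With these made-up counts, the readers agree on 80 percent of the films, but half of that agreement would be expected by chance alone, so kappa is 0.60 (substantial agreement).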

Bob: The use of kappa in this study is a bit strange. A second person read the MRI film only when the first reader wasn't sure of the reading. Because kappa measures agreement between two readers, it would be tough to calculate a meaningful kappa when one reader wasn't committing to a diagnosis. Additionally, the only films read by the musculoskeletal radiologist were those that the first reader was unsure about; to calculate a valid kappa, both readers should have read all of the films. A better design would have been to have two readers look at each film independently. If they disagreed, a third reader could adjudicate.

There is a second use of kappa in this study, with regard to the reading of the plain films (kappa = 0.83). Only one radiologist read the films, yet the study authors provide a kappa value. In this case, kappa is used to make the assumption that the reader had the same skills as readers in a previous study (the kappa of 0.83 must have come from prior studies, because only one person read the films in this study). This is an unorthodox use of kappa, and it lessens my confidence in the findings of this study.

Mark: Thirty-five percent of patients (95% confidence interval, 32 to 38 percent) had meniscal damage, and 31 percent had a meniscal tear. Overall, 82 percent of patients with tibiofemoral osteoarthritis had coexisting meniscal damage. This went up to 95 percent in those with severe osteoarthritis. As expected, damage became more common with older age (more than 50 percent in persons 70 to 90 years of age). Importantly, 45 percent of those with meniscal tears and 25 percent of those without meniscal tears had “knee pain, aching, or stiffness on most days.” This means that most meniscal tears (55 percent) are asymptomatic. So, the finding of meniscal damage doesn't automatically mean it is responsible for the patient's pain.
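
As an aside on where a confidence interval like this comes from: with 991 readable MRIs, a 35 percent prevalence carries fairly little sampling uncertainty. The sketch below (Python, using the normal [Wald] approximation; the authors' exact method is not reported) reproduces roughly the same interval.

```python
# Illustrative only: approximate 95% confidence interval for the reported
# 35 percent prevalence of meniscal damage among the 991 readable MRIs.
# Uses the normal (Wald) approximation; the authors' exact method is not stated.
import math

p = 0.35   # observed proportion with meniscal damage
n = 991    # patients with a readable MRI
se = math.sqrt(p * (1 - p) / n)
lower, upper = p - 1.96 * se, p + 1.96 * se
print(f"95% CI: {lower:.0%} to {upper:.0%}")  # roughly 32% to 38%, as reported
```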

Should we believe this study?

Mark: Notwithstanding the concerns about how kappa was used, there is an invaluable common-sense lesson in the data. First is the obvious: MRI is of little predictive value in patients presenting with chronic knee pain. Overall, 95 percent of patients with severe osteoarthritis will also have evidence of meniscal damage. This makes MRI almost useless in this group if you are trying to determine whether the patient's pain is from a meniscal tear or other meniscal damage. The reverse is also true: a negative MRI does not rule out knee pathology. In fact, MRI is not the greatest test for diagnosing knee disorders. One study showed that in patients with acute knee pain, MRI identified only 67 percent of the lesions found on arthroscopy.1

Bob: These exact numbers may not apply to patients who present to your office with knee pain. The population studied was not a clinic population, which introduces possible selection bias. However, the principle is sound.

Andrea: More generally, do not make the assumption that just because you find something on a test, it is the cause of the patient's symptoms. I am sure that everyone has seen a patient in whom it is not clear what is going on, so someone ordered every test in the book; when one of these tests is positive, the patient's symptoms are ascribed to that abnormality. Tests (in general) cannot be used in isolation to make a diagnosis.

Mark: This also raises the issue of incidental findings resulting from testing; for example, up to 24 percent of chest computed tomography scans and 10 percent of brain MRI scans find some incidental abnormality.2–4 We can now add knee MRI to this list. The problem is that these patients are put through never-ending rounds of tests and interventions, all of which have their own associated costs and morbidities, including the “we found something in your brain/lung/abdomen” conversation, which is uncomfortable for physicians as well.

What should the family physician do?

Andrea: Don't order any test unless you have a good idea of how the test will change the probability of disease. Will it be enough to make a diagnosis? If not, you don't need it. Equally, be circumspect when you are faced with a positive test result. If the clinical picture doesn't fit well, don't believe the test. In many cases, even if the clinical picture does fit, be wary of the result if the test has not been shown to have a high positive predictive value.
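
Andrea's point can be made concrete with a back-of-the-envelope Bayes calculation. The sketch below uses Python with hypothetical sensitivity, specificity, and pretest probabilities (none of these numbers come from the study); it shows how the same positive result means very different things depending on how likely the disease was before you ordered the test.

```python
# Illustrative only: how a positive test result shifts the probability of disease.
# Sensitivity, specificity, and pretest probabilities are hypothetical.

def post_test_probability(pretest: float, sensitivity: float, specificity: float) -> float:
    """Probability of disease after a positive test (Bayes' theorem)."""
    true_positives = pretest * sensitivity
    false_positives = (1 - pretest) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# A hypothetical test with 90% sensitivity and 80% specificity
for pretest in (0.05, 0.30, 0.70):
    ppv = post_test_probability(pretest, sensitivity=0.90, specificity=0.80)
    print(f"pretest {pretest:.0%} -> probability after a positive test {ppv:.0%}")
```

With a pretest probability of 5 percent, this positive result only gets you to about a 19 percent probability of disease; with a pretest probability of 70 percent, it gets you to about 91 percent. The test is the same; the clinical context does most of the work.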

Bob: Don't do “shotgun” testing. You are likely to discover an incidental finding that you are not sure what to do with, which will require follow-up testing.

Mark: Obtain knee MRI only if symptoms are consistent with a specific lesion. If you get an MRI for chronic, generalized knee pain, you will face the question of whether the findings are causing your patient's pain, because 55 percent of meniscal tears are asymptomatic and up to 95 percent of those with severe osteoarthritis will also have meniscal damage. There is no guarantee that your patient's pain is from the meniscus, even with positive MRI.

Main Points
• Meniscal findings on knee MRI do not correlate well with pain. Do not assume that the meniscus is the root of the patient's problem, even with positive MRI.
• Knee MRI is not particularly good at evaluating knee pain. If you have a patient who seems to have a meniscal injury (e.g., locking, positive McMurray test), try watchful waiting for the first six to eight weeks. If the pain or dysfunction persists, consider MRI, but be aware of its limitations. If MRI is negative, you may still need to consider referral for arthroscopy.
• Do not do “shotgun” testing. You may find something that you are not sure what to do with.
• A positive test does not necessarily indicate what is causing a patient's symptoms.
EBM Points
• Kappa is a measure of interobserver reliability (e.g., the probability that two radiologists reading the same film will get the same answer beyond chance alone). It is generally scored as: 0 = no agreement; 0 to 0.2 = slight agreement; 0.2 to 0.4 = fair agreement; 0.4 to 0.6 = moderate agreement; 0.6 to 0.8 = substantial agreement; and 0.8 to 1 = almost perfect agreement.
• Review bias occurs when the reader of a test (e.g., radiography, electrocardiography) knows the patient's history. The history changes the way a test is read.
• Although this study was more or less methodologically sound, a better strategy would have been to have two readers read each film, then have a third party adjudicate if the first two readers disagreed. This is generally accepted methodology.

Copyright © 2011 by the American Academy of Family Physicians.
