To the Editor:
Physicians need reliable methods for appraising studies for validity and relevance; they also want something quick. While Dr. Robert Flaherty’s article on the “PP-ICONS” approach makes some excellent points, the approach is insufficient for determining validity. PP-ICONS cannot guarantee that you have identified serious, and potentially fatal, threats to validity, such as improper randomization methods, bias and confounding. Relying on this approach alone could lead physicians to apply misleading results. Only by reviewing a study’s methods are you likely to detect bias in control and subject selection, along with a host of other critical considerations.
Knowing that a study is a randomized controlled trial (RCT) is insufficient. There are instances when an observational study is labeled an RCT, but a close review of the methods shows that it is not.1 Authors have shown that inadequate concealment and blinding can bias a study in favor of an intervention.2,3 Furthermore, authors often mislabel what they are doing, as Kruse et al show in a review of reported intention-to-treat analyses.4
Dr. Flaherty states that you can use the PP-ICONS method for articles on diagnosis, treatment and screening. Diagnostic testing is much more complex, requiring assessments of “gold standards,” blinded evaluators, subjects who have the condition along with those who do not, and evaluations of measures of test performance. Limner et al show problems with diagnostic test articles in a systematic review of 218 studies, of which only 15 satisfied the criteria for good studies.5 Screening adds further dimensions to assessing validity, including lead- and length-time bias, and early versus late diagnosis and treatment options.
Dr. Flaherty also states that when using PP-ICONS, the abstract alone is often sufficient. However, a study by Pitkin shows that 18 to 68 percent of abstracts in six of the most respected journals contain information not verifiable in the body of the text.6 Abstracts often help weed out problem studies, but they are not reliable to determine validity.
On a similar note, Brandi White’s article “Making Evidence-Based Medicine Doable in Everyday Practice” [February 2004, page 51] should warn readers to beware of poor or dated systematic reviews and clinical practice guidelines that could provide misleading, and possibly harmful, information. Unless you use trusted sources, such as Clinical Evidence (http://www.clinicalevidence.com), Cochrane (http://www.cochrane.org) or the Database of Abstracts of Reviews of Effects (DARE) (http://www.york.ac.uk/inst/crd/darehp.htm), you need to critically appraise the guidelines or reviews yourself and then assess the usefulness of any results found to be valid. And for guidelines and reviews from any source, you need to search for important new studies that have been appraised. Guidelines and systematic reviews are difficult to do well, and many are flawed. Many guidelines, even when bearing the label “evidence-based,” lack systematic development and critical appraisal of their included content. Numerous so-called systematic reviews fail to meet many, if not all, of the criteria for a systematic review and may be nothing more than narrative reviews or overviews, which can be highly prone to bias. In fact, large, well-done RCTs may be superior to systematic reviews in many instances.
Unfortunately, until we have more trusted sources (and ideally, even those should be critically appraised), shortcuts will continue to run the risk of leading physicians to apply misleading clinical information. From our experience in health care, and as university-affiliated educators and trainers in evidence-based principles through our consulting company, Delfini Group, we believe that all clinicians should have a basic understanding of the key concepts of critical appraisal so they are not “had” by misleading information. The good news is that these skills are not that difficult to acquire. They can save you time and, more important, result in better care and better use of resources.
Ms. Strite and Dr. Stuart illuminate an important problem with the current application of evidence-based medicine (EBM) to real-life practice. I think we all agree that the purpose of EBM is to identify the best existing evidence – research that is relevant, valid and applicable to clinical questions that will improve patient care.
Most clinical questions have not yet been rigorously reviewed by reputable organizations such as Cochrane. Thus, busy practicing physicians must develop an efficient way to identify journal articles that are likely to be relevant and valid. However, many do not have the time or inclination to meticulously analyze each article they read. Ms. Strite and Dr. Stuart recommend the rigorous analytic method described in a series in the Journal of the American Medical Association1 and by Miser.2 While highly effective, these techniques are simply too time-consuming and cumbersome to be applied in a busy practice.
Clinical medicine requires a trade-off between academic rigor and practicality. A rigorous analytic technique may yield a high likelihood that the research is valid and relevant, but if that technique cannot be easily applied, physicians in a busy practice will not use it, and it will not help them provide better patient care. A simpler technique that can be applied quickly and easily, such as PP-ICONS, may not produce an academically solid assessment of validity and relevance, but physicians will still have a much better idea of an article’s validity and relevance than they would by simply accepting its conclusions at face value.
I agree with Ms. Strite and Dr. Stuart that a simple assessment tool like PP-ICONS is not perfect. It is, however, a practical compromise that is much better than no assessment at all, and, by being quick and easy to apply, PP-ICONS can improve the literature review abilities of most busy physicians.