To the Editor:
“Practical Evidence-Based Internet Resources” [July/August 2003, page 49] by Dr. Brian Alper highlights an important topic: How do physicians get answers to their questions at the point of care – and not just any answers, but answers based on the best available evidence? Unfortunately, the article and comparison table neglect the following important features of a good point-of-care, evidence-based source:
Are formal criteria for relevance used to filter the information for primary care physicians, reducing the amount of “noise”?
Is each item labeled with a “level of evidence,” or are recommendations labeled with a “strength of recommendation”?
Are tools provided to help clinicians apply clinical decision rules and interpret diagnostic tests?
Is the information fully peer-reviewed?
Does the source provide a tool for “foraging” (keeping up-to-date on a daily basis with new information) as well as “hunting” (searching for the answers to clinical questions)?
The article’s table also incorrectly states that InfoRetriever does not cite the best available evidence where rigorous evidence is lacking. This is simply not true; much of the information in the POEMs, diagnostic test database, Cochrane Database, Griffith’s 5-Minute Clinical Consult and practice guideline summaries (all part of InfoRetriever) fits this description. We agree that evidence-based medicine means basing decisions on the best available evidence, not the best possible evidence.
While we agree that none of these tools is perfect, all provide valuable guidance to physicians at the point of care. As family physicians, we are not only in the health care business, we are in the knowledge business, and it is important that we become comfortable using advanced knowledge tools like those described in the article.
I appreciate the opportunity to address the concerns of the InfoPOEMs editors, who are pioneers in this area. I’d like to respond to several points:
DynaMed uses formal criteria for relevance, based on the InfoPOEMs criteria (which were designed for clinical alerting) and modified for the purpose of updating a clinical reference;
“Level of evidence” labels can be easily misinterpreted. The feature that matters to clinicians, stating the methods and quality of the evidence behind “facts” and recommendations, is provided to varying degrees by all evidence-based resources;
“Foraging” tools are provided by InfoPOEMs (weekdays), DynaMed (whenever new updates warrant mass alerting), and UpToDate (“what’s new” summary every four months). This function is separate from answering clinical questions, and it warrants its own article.
Other “neglected” features were in fact addressed in the text of the article.
I stand by my original interpretation of best available evidence, defined as a “high likelihood of finding research citations when the best available evidence consists of less rigorous studies.” Griffith’s 5-Minute Clinical Consult, the InfoRetriever component with the greatest breadth, does not typically provide research citations. The only evidence-based resources demonstrated to provide evidence for answering a substantial proportion of clinical questions are DynaMed (55 percent) and UpToDate (34 to 45 percent).1,2,3