Using Critical Appraisal to Find the Best Resources in the Midst of ‘Information Overload’
The rise of evidence-based practice (EBP) has been transformative in accomplishing value-based provider goals and improving patient outcomes. Over 12,000 new articles are entered into online libraries and databases every week, creating an ‘information overload’ of potentially useful and possibly harmful research.
How can clinicians and students conscientiously and efficiently sift through volumes of medical information? Is EBP an “every clinician for themselves” pursuit, or can health information professionals (HIPs) such as medical librarians act as the first line of defense against unreliable resources?
Critical appraisal (CA) is the “application of rules of evidence to a study to assess the validity of the data, completeness of reporting, methods and procedures, conclusions, compliance with ethical standards, etc. The rules of evidence vary with circumstances.”
Implement Evidence-based Practice through Critical Appraisal
Critical appraisal, an essential precursor to implementing EBP, is a taught skill that more HIPs are learning to disseminate among healthcare professionals. While there remains a need for set standards or a reliable instrument for evaluating literature, experts in the information profession agree that appraisal criteria cannot be static. Clinicians need to remain wary of intrinsic sources of bias within each study design, along with covert conflicts of interest.
A number of factors can affect the overall quality of a single research article. Clinicians looking to improve patient outcomes through EBP may benefit more from a simple 10-item CA checklist than from lengthy or repeated training sessions:
Ten fundamental CA questions when evaluating EBP literature:
1. Is the study question relevant?
To be of any value, a study needs to address an important topic.
2. Does the study add anything new?
Novel contributions to the evidence base are rare compared to papers that advance what is already known on a subject. The latter elevate confidence in the validity of the findings and allow sounder generalizations.
3. What type of research question is being asked?
Clinical research questions generally fall into two types:
- Effectiveness of Treatment – Is one treatment better than another when evaluating clinical benefit or harm, or cost-effectiveness?
- Frequency of Events – What is the prevalence or incidence rate of the disease and related phenomena (risk factors, diagnosis, prognosis, specific clinical outcomes, etc.)?
4. Was the study design appropriate for the research question?
Different study designs follow a hierarchy based on their likelihood of bias. Randomized Controlled Trials (RCTs) and their meta-analyses provide the strongest evidence. These are followed by:
- Cohort Studies
- Case-Control Studies
- Other observational study designs
5. Did the study methods address the most important potential sources of bias?
Two types of error can cause results to deviate from the truth:
- A chance or random error affects the precision of the study
- A design flaw or systematic bias causes a shift in direction (overestimation or underestimation)
Once the study design is identified, we recommend using one of the many design-specific CA tools from the Critical Appraisal Skills Programme (CASP).
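To make the distinction concrete, here is a small simulation sketch (the true effect size, noise level, and bias amount are all hypothetical numbers chosen for illustration) showing why enlarging a sample reduces random error but does nothing about a systematic bias:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

TRUE_EFFECT = 10.0  # hypothetical true treatment effect


def sample_mean(n, systematic_shift=0.0):
    """Estimate the effect from n noisy observations.

    Random error: each observation varies by chance (Gaussian noise).
    Systematic bias: every observation is shifted in one direction.
    """
    obs = [TRUE_EFFECT + systematic_shift + random.gauss(0, 5) for _ in range(n)]
    return sum(obs) / n


# Random error shrinks as the sample grows, so precision improves...
small_study = sample_mean(20)
large_study = sample_mean(2000)

# ...but a design flaw (here, a hypothetical +3 measurement bias) persists
# no matter how many participants are enrolled: accuracy does not improve.
biased_large_study = sample_mean(2000, systematic_shift=3.0)
```

The large unbiased study lands close to the true effect, while the equally large biased study stays shifted by roughly the size of the flaw, which is why question 5 asks about design, not just sample size.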
6. Was the study performed according to the original protocol?
Any deviation from a set plan can affect a study’s validity or relevance. Common examples include:
- Failing to recruit the planned number of participants
- Altering exclusion and inclusion criteria
- Changing treatments and interventions
- Changing techniques or technologies
- Modifying lengths of follow-up
7. Does the study test a stated hypothesis?
Identifying a clear statement of what the study is expecting to learn should be done a priori (before conducting the study). Attempts to study statistically significant associations that were not originally stated in the hypothesis have a high likelihood of producing false-positive findings.
8. Were the statistical analyses performed correctly?
While nonstatisticians often find this aspect of CA intimidating, quantitative studies should present the ‘Methods’ section in reader-appropriate language. Any missing data (e.g., participants lost to follow-up) must be declared in the ‘Results’ section.
9. Does the data justify the conclusions?
Bearing in mind that the most common deviation from a study’s protocol is failing to recruit the planned number of participants, CA helps identify any overemphasis placed on seemingly significant differences. Alternatively, it also screens for any data omission within a small sample size.
10. Are there any conflicts of interest?
These are defined as personal factors that have the potential to influence professional roles or responsibilities. Examples include:
- Participant recruitment criteria
- Which clinical phenomena to report as adverse effects
- Receiving monetary compensation from a company sponsoring the study
- Ownership of stocks in sponsoring company
- Ownership of other fiscal assets, such as patents and licensing
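For HIPs who want to hand clinicians a ready-made worksheet, the ten questions above can be packaged as a simple data structure. This is an illustrative Python sketch: the `appraisal_worksheet` helper and its formatting are hypothetical, not an MLA or CASP tool, though the question wording follows the checklist above.

```python
# The ten fundamental CA questions, in checklist order.
CHECKLIST = [
    "Is the study question relevant?",
    "Does the study add anything new?",
    "What type of research question is being asked?",
    "Was the study design appropriate for the research question?",
    "Did the study methods address the most important potential sources of bias?",
    "Was the study performed according to the original protocol?",
    "Does the study test a stated hypothesis?",
    "Were the statistical analyses performed correctly?",
    "Does the data justify the conclusions?",
    "Are there any conflicts of interest?",
]


def appraisal_worksheet(citation):
    """Return a blank, numbered appraisal worksheet for one article."""
    lines = [f"Critical appraisal: {citation}"]
    for i, question in enumerate(CHECKLIST, start=1):
        lines.append(f"{i:2d}. {question}")
        lines.append("    Notes:")
    return "\n".join(lines)
```

A librarian could print `appraisal_worksheet("Young & Solomon (2009)")` to generate a per-article form, keeping the appraisal criteria consistent across a journal club or training session.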
As champions and gatekeepers of information, medical librarians are uniquely positioned to further critical appraisal in the era of big data and EBP in healthcare. A 2010 study surveyed and interviewed over 75 medical librarians on their current involvement in critical appraisal and their attitudes towards its delivery and training. A large majority (73%) agreed on the need to be involved, but only 29% had actually participated in critical appraisal delivery and training.
The Medical Library Association (MLA) has a handful of CA webinars scheduled for the month of June:
Critical Appraisal for Librarians: Evaluating Randomized Controlled Trials
Tuesday, June 11, 1:00 p.m.–2:30 p.m., central time.
Critical Contributions: Developing Research Appraisal Skills at Your Institution
Wednesday, June 26, 1:00 p.m.–2:30 p.m., central time.
Critical Appraisal Webinar Series: Four Webinars for the Price of Two!
June 11 and June 26, 2019, 1:00 p.m.–2:30 p.m., central time.
References
Booth, A., & Brice, A. (2003). Clear-cut?: Facilitating health librarians to use information research in practice. Health Information & Libraries Journal, 20(S1), 45-52. doi:10.1046/j.1365-2532.20.s1.10.x
Maden-Jenkins, M. (2010). Healthcare librarians and the delivery of critical appraisal training: Attitudes, level of involvement and support. Health Information & Libraries Journal, 27(4), 304-315. doi:10.1111/j.1471-1842.2010.00899.x
Young, J. M., & Solomon, M. J. (2009). How to critically appraise an article. Nature Clinical Practice Gastroenterology & Hepatology, 6(2), 82-91. doi:10.1038/ncpgasthep1331