Introduction
Critical appraisal is a process used to help you identify the strengths and weaknesses of a research paper. Assessing how appropriate the study design is for the question being asked, how well the study was carried out, and how good the reporting is helps you judge whether the paper is likely to provide reliable evidence.
This page is designed to help you appraise the report of a case control study. Answering the questions will help you to reflect on how valid the results might be, how well reported they are and whether they are applicable to your local circumstances.
Download the checklist
Download a PDF copy of the case control checklist to complete.
Case control study checklist
For each question think about whether the answer is yes, no or not sure and what your reasoning is for that answer.
1. Did the study address a clearly focused question?
Are the patient/population and risk factors clearly stated? Is the study looking for a beneficial or harmful effect?
2. Was an appropriate method used to answer the question?
Case control methods are usually reserved for rare conditions or harmful outcomes. Is that approach appropriate for this question?
3. Were the cases recruited in an appropriate way?
Is there a clear definition of the cases? Did the cases represent a defined population? Was there a reliable system for selecting cases? Was the timescale relevant? Was there a sufficient number of cases? Was there a power calculation?
4. Were controls selected in an appropriate way?
Look for any bias in the selection which could compromise the results. Were the controls representative of the defined population? Were the controls matched or randomly selected? Were there a sufficient number of controls?
5. Was the exposure accurately measured to minimise bias?
Was the exposure clearly defined and accurately measured? Have the measures been validated? Were the measurements used the same for both the cases and controls?
6. What confounding factors have the authors accounted for?
List the ones you think are important. Can you think of any that have been missed? Confounding occurs when the link between exposure and outcome is distorted by another factor.
7. Have potential confounding factors been taken into account in the design and/or analysis?
These factors should be described in the methods section. Use your clinical judgement to look for factors that were not considered. A study that does not address confounding should be rejected.
8. What are the results of the study?
What outcomes were measured? How strong is the association between exposure and outcome? Is the analysis appropriate?
9. How precise was the estimate of risk?
Look for confidence intervals.
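As a reminder of how these figures arise: in a case control study the strength of association is usually reported as an odds ratio, and its precision as a 95% confidence interval around it. The sketch below (with made-up illustrative counts, not data from any real study) computes both, using Woolf's method for the confidence interval:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and approximate 95% CI from a 2x2 table.

    a = exposed cases,    b = unexposed cases
    c = exposed controls, d = unexposed controls
    """
    or_ = (a * d) / (b * c)
    # Woolf's method: standard error of log(OR)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts, for illustration only
or_, lo, hi = odds_ratio_ci(30, 70, 15, 85)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

A confidence interval whose lower bound stays above 1 (as in this illustration) suggests the association is unlikely to be due to chance alone; a wide interval signals an imprecise estimate, often from too few cases or controls.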
10. Do you believe the results?
A large effect has to be taken seriously. Could the result be due to chance? Have you spotted flaws that make the results unreliable?
11. Can the results be applied to your practice?
Are the subjects similar to your population? Does your setting differ significantly? Can you gauge benefit and harm for your local situation?
12. Do the results fit with other available evidence?
Consider evidence from other study designs for consistency.
Try it out yourself
You could use the following paper to try out the questions:
Hayes, H. et al. (1991) Case control study of canine malignant lymphoma: Positive association with dog owner's use of 2,4-dichlorophenoxyacetic acid herbicides. Journal of the National Cancer Institute, 83 (17), pp. 1226-1231. DOI: https://doi.org/10.1093/jnci/83.17.1226