Improving the Methodological Quality of Single-Case Experimental Design Meta-Analysis

  • Jamshidi L
Abstract

Single-case experimental design (SCED) studies are becoming more prevalent in a variety of fields and are increasingly included in meta-analyses (MAs) and systematic reviews (SRs). Because the conclusions of MAs/SRs serve as an evidence base for decisions in practice and policy, their methodological quality and reporting standards are of the utmost importance. One way to improve the reliability and validity of SCED MAs, and thereby give practitioners and clinicians more confidence in MA/SR findings when deciding on a particular intervention, is to apply high-quality standards when conducting and reporting MAs/SRs. The current study briefly reviews some existing tools for assessing the quality of SRs/MAs that may also be helpful for SCED MAs. These tools and guidelines can help meta-analysts, reviewers, and users organize and evaluate the quality and reliability of the findings.

To investigate a particular intervention effect, the classic research design is a group-comparison experimental design. In such designs, participants are randomly assigned to either an intervention or a control group, and the means of one or more dependent variables are compared to assess the effectiveness of the intervention. To obtain reliable effect size estimates and reach an acceptable level of statistical power, these designs require a large sample of participants. Single-case experimental designs (SCEDs) are alternative research designs that do not require many participants (or cases) and are therefore well suited to studying rare phenomena, e.g., specific diseases or disabilities 1-3. In such designs, outcomes of interest are measured repeatedly for one or more cases under at least two conditions (typically a control phase followed by an intervention phase).
Within each case, the measurements are compared across conditions or phases to investigate whether introducing the intervention has a causal effect on one or more outcomes 2,4-7. SCEDs are frequently used in a variety of fields, such as psychology and the educational sciences, to evaluate the effectiveness of interventions of interest 7-11. Due to the small number of participants, the main limitation of SCEDs is the limited generalizability of their findings. To overcome this limitation, SCEDs can be replicated across participants, and systematic review (SR) approaches can be applied to synthesize the results 4,12,13. An SR is a kind of literature review that identifies, evaluates, and aggregates all relevant studies on the same topic. In order to
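To make the phase-comparison idea concrete, the following is a minimal illustrative sketch (not taken from the article) of one common way to quantify a between-phase difference for a single case: a standardized mean difference between the control (A) phase and the intervention (B) phase. The data and the function name are hypothetical, and this Cohen's d-style index is only one of several effect size measures used in SCED meta-analysis.

```python
from statistics import mean, stdev

def phase_effect_size(baseline, intervention):
    """Standardized mean difference between two SCED phases.

    Hypothetical illustration: a Cohen's d-style index comparing the
    repeated measurements of one case across the A and B phases.
    """
    m_a, m_b = mean(baseline), mean(intervention)
    s_a, s_b = stdev(baseline), stdev(intervention)
    n_a, n_b = len(baseline), len(intervention)
    # Pooled standard deviation across the two phases
    pooled = (((n_a - 1) * s_a**2 + (n_b - 1) * s_b**2) / (n_a + n_b - 2)) ** 0.5
    return (m_b - m_a) / pooled

# Hypothetical repeated measurements for a single case:
baseline = [3, 4, 3, 5, 4]        # control (A) phase
intervention = [7, 8, 6, 9, 8]    # intervention (B) phase
print(round(phase_effect_size(baseline, intervention), 2))  # → 3.8
```

In a meta-analysis, such per-case estimates (replicated across participants and studies) would then be combined, which is exactly where the quality and reporting standards discussed above matter.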

Citation (APA)

Jamshidi, L. (2018). Improving the Methodological Quality of Single-Case Experimental Design Meta-Analysis. JOURNAL OF MENTAL HEALTH AND CLINICAL PSYCHOLOGY, 2(4), 1–9. https://doi.org/10.29245/2578-2959/2018/4.1140
