Impact of summer programmes on the outcomes of disadvantaged or ‘at risk’ young people: A systematic review

Abstract

Review Rationale and Context: Many intervention studies of summer programmes examine their impact on employment and education outcomes; however, there is growing interest in their effect on young people's offending outcomes. Evidence on summer employment programmes shows promise in this respect but has not yet been synthesised. This report fills this evidence gap through a systematic review and meta-analysis covering both summer education and summer employment programmes, as their contexts and mechanisms are often similar.

Research Objective: The objective is to provide evidence on the extent to which summer programmes impact the outcomes of disadvantaged or ‘at risk’ young people.

Methods: The review employs mixed methods: we synthesise quantitative information estimating the impact of summer programme allocation/participation across the outcome domains through meta-analysis using the random-effects model, and we synthesise qualitative information relating to contexts, features, mechanisms and implementation issues through thematic synthesis. Literature searches were largely conducted in January 2023. Databases searched include: Scopus; PsycINFO; ERIC; the YFF-EGM; EEF's and TASO's toolkits; RAND's summer programmes evidence review; key academic journals; and Google Scholar. The review employed PICOSS eligibility criteria: the population was disadvantaged or ‘at risk’ young people aged 10–25; interventions were either summer education or summer employment programmes; a valid comparison group that did not experience a summer programme was required; studies had to estimate the summer programme's impact on violence and offending, education, employment, socio-emotional and/or health outcomes; eligible study designs were experimental and quasi-experimental; and eligible settings were high-income countries. Other eligibility criteria included publication in English between 2012 and 2022.
Process/qualitative evaluations associated with eligible impact studies, or of UK-based interventions, were also included; the latter given the interests of the sponsors. We used the standard methodological procedures expected by The Campbell Collaboration. The search identified 68 eligible studies, 41 of which were eligible for meta-analysis. Forty-nine studies evaluated 36 summer education programmes, and 19 studies evaluated six summer employment programmes. The number of participants within these studies ranged from fewer than 100 to nearly 300,000. The PICOSS criteria affect the external applicability of the body of evidence: allowances made regarding study design to prioritise evidence on UK-based interventions limit our ability to assess impact for some interventions. The risk of bias assessment categorised approximately 75% of the impact evaluations as low quality, due to attrition, losses to follow-up, interventions having low take-up rates, or allocation processes that might introduce selection bias. As such, intention-to-treat analyses are prioritised. The quality assessment rated 93% of qualitative studies as low quality, often because they did not employ rigorous qualitative methodologies. These results highlight the need to improve the evidence base.

Results and Conclusions: Quantitative synthesis: The quantitative synthesis examined impact estimates across 34 outcomes, through meta-analysis (22 outcomes) or in narrative form (12 outcomes). We summarise below the findings where meta-analysis was possible, along with the researchers' judgement of the security of the findings (high, moderate or low). This judgement was based on the number and study-design quality of studies evaluating the outcome; the consistency of findings; the similarity in the specific outcome measures used; and any other issues which might affect our confidence in the summary findings.
Below we summarise the findings from the meta-analyses conducted to assess the impact of allocation to, or participation in, summer education and employment programmes (findings in relation to other outcomes are also discussed in the main body, but because few studies evaluated these, meta-analysis was not performed). We only report pooled results across the two programme types where there are no clear differences in findings between summer education and summer employment programmes, so as to avoid attributing an impact to both programme types when this is not warranted. For each outcome we list: the outcome measure; the average effect size type (i.e., whether a standardised mean difference (SMD) or log odds ratio); which programme type the finding relates to; the average effect size with its 95% confidence interval; and the interpretation of the finding, that is, whether there appears to be a significant impact and in which direction (positive or negative, clarifying instances where a negative impact is beneficial). In some instances there may be a discrepancy between the 95% confidence interval and whether we determine there to be a significant impact, due to the specifics of the process for constructing the effect sizes used in the meta-analysis. We then list the I² statistic and the p-value from the homogeneity test as indications of the presence of heterogeneity. As the sample sizes used in the analyses are often small, and the homogeneity test is known to be under-powered with small samples, it may not detect statistically significant heterogeneity when it is in fact present; a 90% confidence level threshold should therefore generally be used when interpreting this test for the meta-analyses below. The presence of effect size heterogeneity affects the extent to which the average effect size is applicable to all interventions of that summer programme type.
We also provide an assessment of the relative confidence we have in the generalisability of each overall finding (low, moderate or high): some overall findings are based on a small sample of studies; the studies evaluating the outcome may be of low quality; there may be wide variation in findings among those studies; or there may be specific aspects of the impact estimates included, or of the effect sizes constructed, that affect the generalisability of the headline finding. These issues are detailed in full in the main body of the review.
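The random-effects pooling, I² statistic and homogeneity (Q) test described above can be sketched in code. This is a generic DerSimonian–Laird implementation for illustration only, not the review's actual analysis code; the function name and the example effect sizes in the usage note are hypothetical.

```python
import math
from statistics import NormalDist


def random_effects_meta(effects, variances):
    """Pool study effect sizes with the DerSimonian-Laird random-effects model.

    Returns the pooled effect, its 95% confidence interval, the I^2
    statistic (%), Cochran's Q, and the degrees of freedom (k - 1).
    The p-value for the homogeneity test comes from comparing Q to a
    chi-squared distribution with k - 1 degrees of freedom.
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                       # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)

    # Cochran's Q: weighted squared deviations from the fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    df = k - 1

    # Between-study variance tau^2 (DerSimonian-Laird moment estimator)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)

    # I^2: share of total variability attributable to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Re-weight with tau^2 added to each study's variance, then pool
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    z = NormalDist().inv_cdf(0.975)                        # ~1.96 for a 95% CI
    return pooled, (pooled - z * se, pooled + z * se), i2, q, df


# Hypothetical example: three studies reporting SMDs with their variances
pooled, ci, i2, q, df = random_effects_meta([0.2, 0.3, 0.1], [0.04, 0.05, 0.06])
```

When Q does not exceed its degrees of freedom, tau² is truncated to zero and the random-effects estimate collapses to the fixed-effect one; this under-powering with few studies is exactly why the review applies a 90% confidence threshold when interpreting the homogeneity test.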

Citation (APA)

Muir, D., Orlando, C., & Newton, B. (2024, June 1). Impact of summer programmes on the outcomes of disadvantaged or ‘at risk’ young people: A systematic review. Campbell Systematic Reviews. John Wiley and Sons Inc. https://doi.org/10.1002/cl2.1406
