Background: Evaluators are cognisant of the need to determine the effects of an intervention within its context.

Objectives: In education evaluations, there was a gap in context-specific assessment tools for determining the status of school functionality, with the ultimate aim of examining whether there is a relationship between the school functionality context and teaching and learning outcomes. To meet evaluation standards, evaluators must ensure that evaluation tools and data accurately measure the indicators and variables. This article focuses on lessons learned from a tool validation process, shared to guide evaluators in similar settings.

Method: Khulisa Management Services (Khulisa) has conducted research and evaluations in South African schools since 1993. In 2011, Khulisa developed a school functionality tool based on local and international literature, engagement with key stakeholders, and a series of implementation phases across various evaluations. The tool identifies highly functional, functional, stagnant-but-functional and dysfunctional schools. The authors of this article undertook a reflection process to evaluate the evidence gathered to support the meaningfulness, usefulness and appropriateness of the tool's properties.

Results: Lessons from the validation process include the need to build in time and resources for validation from the beginning, the value of validating a tool over time and across evaluations, the critical importance of training data collectors, and the role of analysis in establishing the consistency and reliability of a tool.

Conclusion: While the reliability analysis and validation process are ongoing, preliminary results show that the tool has the potential to document context appropriately.
Citation:
Roper, M., Taimo, L., Bisgard, J. L., & Tjasink, K. (2020). Validating an evaluation school functionality tool. African Evaluation Journal, 8(1). https://doi.org/10.4102/AEJ.V8I1.423