The aim of the open educational resource (OER) movement is to provide free access to high-quality educational materials through repositories. However, access to a large number of educational materials provides no assurance of their quality, and the mechanisms used so far to recommend educational resources have proved inadequate for a variety of reasons. Most evaluation systems rely on costly manual inspection, which makes it impossible to evaluate all materials. Moreover, other useful pieces of information are often ignored, such as the use that users make of the materials, the evaluations that users give them and the metadata used to describe them. To improve this situation, this article examines the shortcomings of existing proposals and identifies the quality indicators that can provide the information needed to recommend materials to users. By studying a significant set of materials contained in the MERLOT repository, the relationships among the currently available quality indicators were analysed and numerous correlations among them were established. On the basis of that analysis, a measure of relevance is proposed that integrates all existing quality indicators. Thus, the explicit evaluations made by users or experts, the descriptive information obtained from metadata and the data derived from the use of the materials are combined to increase the reliability of recommendations by integrating various quality aspects. In addition, this measure is sustainable because it can be calculated automatically, without human intervention; this makes it possible to rate all educational materials held in repositories.