This research paper highlights and addresses the lack of a systematic review of the methods used to evaluate Learning Analytics (LA) and Learning Analytics Dashboards (LAD) of Adaptive Learning Platforms (ALPs) in the current literature. Addressing this gap, the authors built upon the work of Tretow-Fish and Khalid (2022) and analyzed 32 papers, which were grouped into six categories (C1-C6) based on their themes: C1) the evaluation of LA and LAD design and frameworks, C2) the evaluation of user performance with LA and LAD, C3) the evaluation of adaptivity, C4) the evaluation of ALPs through perceived value, C5) the evaluation of multimodal methods, and C6) the evaluation of the pedagogical implementation of ALPs' LA and LAD. The results include a tabular summary of the papers, covering their categories, evaluation unit(s), methods, variables, and purpose. While numerous studies in categories C1-C4 focus on the design, development, and impact assessment of ALPs' LA and LAD, only a few studies fall into categories C5 and C6. In C5, very few studies applied evaluation methods that assess the multimodal features of ALPs' LA and LAD. For C6, the evaluation of the pedagogical implementation of ALPs' LA and LAD, the three dimensions of signature pedagogy were used to assess the level of pedagogical evaluation. The findings show that no studies evaluate the deep or implicit structure of ALPs' LA. All studies examine the surface structure dimension of learning activities and of interactions between students, teachers, and ALPs' LA and LAD, as reflected in categories C2-C5. No studies were categorized exclusively under C6, indicating that all studies evaluate ALPs' LA and LAD only at the surface structure dimension of signature pedagogy. This review highlights the lack of pedagogical methodology and theory in ALPs' LA and LAD and recommends that these be emphasized in future research as well as in ALP development and implementation.