Detecting Dummy Learner-Submitted Annotations in an Online Case Learning Environment
Abstract: A key approach in designing adaptive learning systems is the use of algorithms that can discover interesting, interpretable, and meaningful knowledge from the data tracked and logged by learning systems. Text classification has been employed with considerable success in a wide variety of tasks, such as information extraction and summarization, text retrieval, and document classification. In this paper, we focus on discriminating between legitimate and dummy annotations in an online medical learning environment called MedU by applying a text-classification-based approach. Manually detecting dummy annotations in MedU is time-consuming, especially given the volume of logged data involved. An automatic text-classification approach can mitigate this issue. Moreover, a system capable of detecting learner-submitted dummy annotations could be adapted to provide appropriate feedback to the learner.
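The abstract does not specify which classifier is used, so the following is only an illustrative sketch of how a text-classification-based filter of this kind could work: a multinomial Naive Bayes model over bag-of-words features, trained on annotations labeled as legitimate or dummy. The class names ("legit", "dummy"), the tokenizer, and the training examples below are all assumptions for illustration, not details taken from the paper.

```python
import math
import re
from collections import Counter


def tokenize(text):
    """Lowercase an annotation and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())


class NaiveBayesAnnotationFilter:
    """Multinomial Naive Bayes over bag-of-words features, separating
    legitimate annotations from dummy (gibberish/placeholder) ones.
    Hypothetical sketch; labels 'legit'/'dummy' are illustrative."""

    def __init__(self):
        self.word_counts = {"legit": Counter(), "dummy": Counter()}
        self.doc_counts = {"legit": 0, "dummy": 0}
        self.vocab = set()

    def train(self, text, label):
        tokens = tokenize(text)
        self.word_counts[label].update(tokens)
        self.doc_counts[label] += 1
        self.vocab.update(tokens)

    def classify(self, text):
        tokens = tokenize(text)
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label in ("legit", "dummy"):
            # log prior plus Laplace-smoothed log likelihoods
            score = math.log(self.doc_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for tok in tokens:
                score += math.log(
                    (self.word_counts[label][tok] + 1)
                    / (total_words + len(self.vocab) + 1)
                )
            if score > best_score:
                best_label, best_score = label, score
        return best_label


if __name__ == "__main__":
    clf = NaiveBayesAnnotationFilter()
    # Toy training data: the real system would train on logged MedU annotations.
    clf.train("patient presents with acute chest pain", "legit")
    clf.train("review the differential diagnosis carefully", "legit")
    clf.train("order an ecg and cardiac enzymes", "legit")
    clf.train("asdf asdf asdf", "dummy")
    clf.train("test test", "dummy")
    clf.train("qwerty qwerty", "dummy")
    print(clf.classify("asdf test"))
    print(clf.classify("the patient chest pain"))
```

A production system would likely use richer features (character n-grams, annotation length, repetition statistics) and a held-out evaluation, but the same train/classify structure applies.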