Detecting Dummy Learner Submitted Annotations in an Online Case Learning Environment

ID: 48593 Type: Virtual Paper
  1. Tenzin Doleck, McGill University, Canada
  2. Eric Poitras, University of Utah, United States
  3. Laura Naismith, University Health Network, Toronto Western Hospital, Canada
  4. Susanne Lajoie, McGill University, Canada

A key approach in designing adaptive learning systems is the use of algorithms that can discover interesting, interpretable, and meaningful knowledge from the data tracked and logged by such systems. Text classification has been employed with much success in a wide variety of tasks, such as information extraction and summarization, text retrieval, and document classification. In this paper, we focus on discriminating between legitimate and dummy annotations in an online medical learning environment called MedU using a text classification approach. Manually detecting dummy annotations in MedU can be quite time-consuming, especially at scale. Employing an automatic text classification approach can mitigate this issue. Moreover, a system capable of detecting learner-submitted dummy annotations could be adapted to provide appropriate feedback to the learner.
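The abstract does not specify the classifier or features used; as a rough illustration of the general idea, the sketch below separates legitimate from dummy annotations with a minimal multinomial Naive Bayes classifier over bag-of-words features with Laplace smoothing. The training examples, labels, and tokenization are invented for illustration and are not drawn from MedU data.

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a minimal multinomial Naive Bayes text classifier.

    docs: list of (text, label) pairs.
    Returns (per-label word counts, per-label doc counts, vocabulary).
    """
    word_counts = {}          # label -> Counter of token frequencies
    label_counts = Counter()  # label -> number of training documents
    for text, label in docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(text, model):
    """Return the label with the highest log-posterior for the text."""
    word_counts, label_counts, vocab = model
    total_docs = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior plus log likelihoods with add-one (Laplace) smoothing
        score = math.log(label_counts[label] / total_docs)
        total_words = sum(word_counts[label].values())
        for w in text.lower().split():
            score += math.log((word_counts[label][w] + 1)
                              / (total_words + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Toy training annotations (hypothetical, for illustration only)
train = [
    ("patient presents with acute chest pain and dyspnea", "legitimate"),
    ("differential diagnosis includes myocardial infarction", "legitimate"),
    ("asdf asdf test test", "dummy"),
    ("xxx nothing here", "dummy"),
]
model = train_nb(train)
print(classify("chest pain differential diagnosis", model))  # → legitimate
```

In practice, such a classifier would be trained on annotations labeled by hand, and richer features (e.g., annotation length or character-level patterns) could be added to better capture what distinguishes a dummy entry.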
