Interpretable Dropout Prediction: Towards XAI-Based Personalized Intervention
Abstract: Student dropout is one of the most pressing issues in STEM higher education, incurring considerable social and economic costs. Using machine learning (ML) tools for the early identification of students at risk of dropping out has recently attracted considerable interest. However, there has been little discussion of dropout prediction using interpretable machine learning (IML) and explainable artificial intelligence (XAI) tools. In this work, using data from a large public university, we demonstrate how IML and XAI tools can support educational stakeholders in dropout prediction. We show that complex ML models can effectively identify at-risk students but lack interpretability. Applying IML tools such as partial dependence plots (PDPs) and SHAP values, we demonstrate how predictions can be explained both globally and locally. Explaining individual predictions opens great opportunities for personalized intervention, for example by offering the right remedial courses or tutoring sessions.