KT Models Based on Bayesian Networks
Bayesian Knowledge Tracing (BKT; Corbett & Anderson, 2005) is a commonly used approach for modeling student learning sequences; it employs a hidden Markov model (HMM) that treats a student’s knowledge state as a latent variable. However, the methodology is organized around KCs, where KC is an umbrella term covering knowledge points, concepts, skills, or items. Consequently, all students share a single set of parameters for a given KC, so students at an intermediate or higher proficiency level keep receiving a large volume of recommended exercises even after they have mastered that KC, forcing them to complete redundant practice questions. To address this issue, many scholars have extended the BKT model from different perspectives to improve its practicality and accuracy. Pardos and Heffernan (2010) propose a learning model in which different students have different prior probabilities of background knowledge, enabling more accurate estimates of when a student has mastered a KC.
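As a concrete illustration of the HMM view described above, the sketch below implements the standard BKT update for a single KC with the four canonical parameters (prior P(L0), learn P(T), guess P(G), slip P(S)). The parameter values, the function name bkt_update, and the answer sequence are illustrative assumptions, not values from any of the cited papers.

```python
def bkt_update(p_mastery, correct, p_guess=0.2, p_slip=0.1, p_learn=0.15):
    """One BKT step: Bayesian evidence update, then the learning transition."""
    if correct:
        # Posterior probability of mastery given a correct response
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        # Posterior probability of mastery given an incorrect response
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Constant learning transition: a non-mastered student learns with probability p_learn
    return posterior + (1 - posterior) * p_learn


# Trace a short answer sequence for one KC, starting from the prior P(L0)
p = 0.3  # P(L0): illustrative prior
for obs in [1, 0, 1, 1]:
    p = bkt_update(p, bool(obs))
    print(f"observation={obs}, P(mastery)={p:.3f}")
```

Because the parameters are shared per KC rather than per student, every student with the same answer sequence receives the same mastery estimate, which is exactly the limitation the extensions discussed here aim to relax.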
Because most extended BKT models are still built on an HMM, they assume a constant learning rate after each answered question, which makes it difficult to balance students’ recent and historical exercise data and does not match reality. Agarwal et al. (2020) propose the MS-BKT model, which replaces the fixed learning rate with a recency-based rate weight derived from a student’s overall performance. This method infers the student’s progress from data rather than assuming continuous learning. The model also extends the knowledge state from the typical two states to 21 states and updates its estimates gradually over time, which better captures the complexity of correct and incorrect answer sequences. Overall, Bayesian network-based methods have relatively simple model structures yet offer strong interpretability.
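The sketch below is a loose illustration of the two ideas just described, a knowledge state spread over many discrete mastery levels and a recency-weighted update in place of a fixed learning-rate transition. It is not the published MS-BKT specification: the likelihood model, the recency_weight parameter, and all names and values are assumptions made for illustration only.

```python
import numpy as np

N_STATES = 21  # discrete mastery levels, matching the 21-state extension mentioned above

def multistate_update(belief, correct, recency_weight=0.3):
    """Illustrative update of a belief vector over N_STATES mastery levels."""
    levels = np.linspace(0.0, 1.0, N_STATES)      # mastery level represented by each state
    p_correct = 0.1 + 0.8 * levels                # assumed P(correct | level)
    likelihood = p_correct if correct else 1.0 - p_correct
    posterior = belief * likelihood
    posterior /= posterior.sum()                  # Bayesian evidence update
    # Recency weighting: blend the previous belief with the new posterior instead of
    # applying a fixed learning-rate transition after every response.
    return (1 - recency_weight) * belief + recency_weight * posterior


belief = np.full(N_STATES, 1.0 / N_STATES)        # uniform prior over mastery levels
for obs in [1, 1, 0, 1]:
    belief = multistate_update(belief, bool(obs))
expected_mastery = float(np.dot(np.linspace(0.0, 1.0, N_STATES), belief))
print(f"expected mastery level: {expected_mastery:.3f}")
```

The blended update lets recent responses move the estimate gradually, so a single slip or lucky guess does not overturn the accumulated history, which reflects the motivation behind weighting recent over historical data.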