Artificial Intelligence is being employed by colleges to forecast student success. Frequently, these predictions miss the mark.
In July 2021, a groundbreaking study by researchers from the University of Texas at Austin shed light on a significant issue at the intersection of AI and higher education: racial bias in predictive models. The study, published in AERA Open, a peer-reviewed journal of the American Educational Research Association, raised concerns that AI algorithms can replicate and amplify racial biases present in historical data, leading to unfair outcomes for students of color.
The key findings of the study were as follows:
- AI algorithms, when used to predict student success, can replicate and even amplify racial biases. This is due to the underlying data reflecting systemic inequities, such as differences in access to resources, prior educational opportunities, and institutional practices that disproportionately affect marginalized groups.
- To combat these biases, the researchers recommended developing bias mitigation techniques. This includes employing fairness-aware machine learning approaches and critically examining the data sources and features used by these algorithms.
- Another proposed solution was to integrate human judgment and contextual understanding to complement AI predictions. This ensures the systems support equitable student success interventions rather than replace human decision-making.
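One common fairness-aware preprocessing idea the recommendations point toward is "reweighing" (in the spirit of Kamiran and Calders): assign each training example a weight so that group membership and the outcome label become statistically independent in the weighted data. The sketch below is illustrative only; the function name and data are invented for the example and are not from the study.

```python
# Minimal sketch of reweighing: weight each example by
# P(group) * P(label) / P(group, label), so that over-represented
# (group, label) combinations are down-weighted and rare ones up-weighted.
from collections import Counter

def reweigh(groups, labels):
    """Return one weight per training example: P(g) * P(y) / P(g, y)."""
    n = len(labels)
    p_group = Counter(groups)          # counts per group g
    p_label = Counter(labels)          # counts per label y
    p_joint = Counter(zip(groups, labels))  # counts per (g, y) pair
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]
```

The resulting weights can be passed to any learner that accepts per-sample weights (for example, a `sample_weight` argument), letting the model train on a distribution in which group and outcome are decoupled.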
The study tested four predictive machine-learning tools commonly used in higher education. The models were trained on 10 years of data from the U.S. Department of Education’s National Center for Education Statistics, covering 15,244 students. The models incorrectly predicted that a student would not graduate (a false negative) for 12% of white students, 6% of Asian students, 21% of Hispanic students, and 19% of Black students.
The models also incorrectly predicted success (a false positive) for 65% of white students and 73% of Asian students, compared with just 33% of Black students and 28% of Hispanic students. Together, these error rates show a clear disparity in how the algorithms treat different groups.
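The disparities above are differences in per-group false negative and false positive rates. A minimal audit of a trained model could compute those rates directly from its predictions, as sketched below; the function name and inputs are hypothetical, not the study's actual tooling.

```python
# Compute per-group false negative rate (FNR) and false positive rate (FPR).
# Convention: label 1 = graduated / predicted to graduate, 0 = did not.
from collections import defaultdict

def per_group_error_rates(y_true, y_pred, groups):
    """Return {group: (fnr, fpr)} for binary outcomes and predictions."""
    counts = defaultdict(lambda: {"fn": 0, "pos": 0, "fp": 0, "neg": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            counts[g]["pos"] += 1      # actually graduated
            if p == 0:
                counts[g]["fn"] += 1   # wrongly predicted not to graduate
        else:
            counts[g]["neg"] += 1      # actually did not graduate
            if p == 1:
                counts[g]["fp"] += 1   # wrongly predicted to graduate
    return {
        g: (c["fn"] / c["pos"] if c["pos"] else 0.0,
            c["fp"] / c["neg"] if c["neg"] else 0.0)
        for g, c in counts.items()
    }
```

Comparing these rates across groups, rather than looking only at overall accuracy, is what surfaces the kind of disparity the study reports.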
The study was motivated by the increasing use of AI and machine learning in college operations, particularly in predicting student success. Because such predictions inform crucial decisions like admissions and the allocation of student support services, the researchers aimed to make those decisions more equitable.
While schools and colleges typically use smaller administrative datasets compared to the one used in the study, the evidence of algorithmic bias highlights the importance of training end users on the potential for bias. Awareness of algorithmic bias, its direction, and the groups affected can help users contextualize predictions and make more informed decisions.
This 2021 research underscores the importance of transparency, inclusivity, and ongoing evaluation in deploying AI tools in higher education to avoid perpetuating racial inequities while supporting student success. As AI continues to play a larger role in education, it is crucial to address these concerns and work towards creating fair and equitable systems for all students.