Question: COMPAS, a system used in several judicial decision-making contexts, was trained to predict the likelihood of someone reoffending (i.e., committing a future crime) by producing a final risk score. The software provider did not reveal the ML algorithm(s) behind the system, the data used to train it, or the training process (all were considered trade secrets). The system ended up making serious errors, as reported in 2016.
Based on the course readings (and what you have learned in this course), which of the following issues apply in this case?
Answer choices:
More advanced deep learning models such as Transformers would have alleviated the problem
Possible discrepancy between the data used to train the system and the data provided as input upon deployment (see the sketch after this list)
Lack of interpretability in the AI used in COMPAS
Lack of algorithmic transparency
The models used in the system most likely overfitted the training data
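To make the train/deployment discrepancy option concrete, here is a minimal Python sketch of how a shift between training data and deployment inputs might be detected for a single feature. This is not COMPAS's actual pipeline (which is proprietary); the feature values and the helper name `check_feature_drift` are hypothetical illustrations, while the statistical test itself (SciPy's two-sample Kolmogorov-Smirnov test) is a standard, real tool.

```python
# Minimal sketch of detecting train/deployment distribution shift for one
# feature. The data here is synthetic; COMPAS's real features are secret.
import numpy as np
from scipy.stats import ks_2samp


def check_feature_drift(train_values, deployed_values, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test. A small p-value suggests the
    deployment inputs are drawn from a different distribution than the
    training data for this feature."""
    stat, p_value = ks_2samp(train_values, deployed_values)
    return stat, p_value, p_value < alpha


# Hypothetical example: defendants' ages at deployment skew younger than
# in the training set.
rng = np.random.default_rng(seed=0)
train_ages = rng.normal(loc=35, scale=10, size=5000)
deployed_ages = rng.normal(loc=27, scale=8, size=5000)

stat, p, drifted = check_feature_drift(train_ages, deployed_ages)
print(f"KS statistic={stat:.3f}, p-value={p:.3g}, drift detected: {drifted}")
```

If such a check flags drift, the associations the model learned during training may no longer hold for the population it actually scores, which is one way the kind of systematic errors reported in 2016 can arise.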
