Using Artificial Intelligence to Detect Discrimination in Research
Researchers at Penn State and Columbia University have developed a new artificial intelligence (AI) tool for detecting unfair discrimination, such as discrimination on the basis of race or gender.
Preventing the unfair treatment of individuals on the basis of race, gender, or ethnicity has been a long-standing concern of civilized societies. However, detecting such discrimination in decisions, whether made by human decision makers or by automated AI systems, can be extremely challenging.
"Artificial intelligence systems — such as those involved in selecting candidates for a job or for admission to a university — are trained on large amounts of data," said Vasant Honavar, professor and Edward Frymoyer Chair of Information Sciences and Technology at Penn State and co-lead of Penn State Clinical and Translational Science Institute's Informatics Core. "But if these data are biased, they can affect the recommendations of AI systems."