Why would a recommendation engine not suggest computer science classes to a female college student interested in that field of study?

According to Bert Huang, assistant professor of computer science in the College of Engineering and a faculty member at the Discovery Analytics Center, there are a few reasons. The engine may have been trained on data reflecting the existing gender imbalance in computer science, unfair patterns may have inadvertently emerged from the mathematical nature of its learning algorithm and model, or there may be a less visible, harder-to-detect process at work.

“Recommendation engines can be unfair in that they may recommend beneficial products to some users but not others, or they may make more useful recommendations to some users than others. And the factors that determine which users get better quality recommendations may be based on irrelevant, unethical, or illegal discrimination,” said Huang.

In his Machine Learning Laboratory, Huang and computer science Ph.D. student Sirui Yao are developing algorithms to measure and mitigate unfairness in recommendation engines. Recently, the lab received an Amazon Research Award of approximately $66,000 to support this research.

Huang said that most other research on measuring and mitigating unfairness in such learning systems focuses almost exclusively on situations where the group identities of the users are known.

“Our team addresses a different setting where we do not know, or we do not trust, information about who belongs to what groups,” he said. 

Often, said Huang, these recommendation engines have no information about group memberships, but even when they do, the groups are not well defined, and there are always people who defy definitions. Huang’s team works on algorithms that search over possible groups and try to make sure people are treated fairly across different group separations. 
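The article does not describe the team's algorithm in detail, but the general idea of auditing fairness across many candidate group separations can be illustrated with a short sketch. The example below is hypothetical, not Huang and Yao's actual method: it assumes each user has a recommendation quality score (for instance, a per-user hit rate) and a collection of candidate groupings to check, and it reports the worst-case gap in average quality found among them. All function and variable names are illustrative.

```python
import numpy as np

def max_disparity_over_groupings(quality, candidate_groupings):
    """Return the largest gap in mean recommendation quality found
    across a set of candidate group separations.

    quality: array of per-user recommendation quality scores
             (e.g., a hit rate or NDCG computed per user).
    candidate_groupings: iterable of integer label arrays, one per
             hypothesized way of splitting users into groups.
    """
    worst_gap, worst_grouping = 0.0, None
    for labels in candidate_groupings:
        # Mean quality within each group under this candidate split.
        group_means = [quality[labels == g].mean() for g in np.unique(labels)]
        gap = max(group_means) - min(group_means)
        if gap > worst_gap:
            worst_gap, worst_grouping = gap, labels
    return worst_gap, worst_grouping

# Hypothetical usage: 100 users and two candidate binary splits of the
# user base (e.g., produced by clustering rather than known identities).
rng = np.random.default_rng(0)
quality = rng.uniform(0.2, 0.9, size=100)
splits = [rng.integers(0, 2, size=100) for _ in range(2)]
gap, grouping = max_disparity_over_groupings(quality, splits)
print(f"Worst-case quality gap across candidate groupings: {gap:.3f}")
```

A mitigation step would then penalize or retrain the model against the grouping with the largest gap; how that search and penalty are actually carried out is the subject of the team's research.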

“We are inspired by work in intersectionality, which tries to understand how different aspects of social identity intersect and exhibit different patterns of discrimination. We aim to imbue algorithms with this same sense of multifaceted identity when considering algorithm fairness,” he said.

“Amazon’s support of this project demonstrates its continued leadership in the area of recommendation engines,” said Huang. “The company has long been known for having one of the most impressive recommendation platforms in technology, and now we hope to use the support they have granted us to build tools that help ensure these recommendation engines have a net positive impact on society.”

Written by Barbara L. Micale
