Exploring Potentially Discriminatory Biases In Book Recommendation
Recent incidents in artificial intelligence, in which systems produced outputs disproportionate with respect to protected attributes such as sex, race, and ethnicity, have raised concerns. The algorithms used in AI may amplify or propagate biases that exist in historical data and reflect them in their output. The computing community no longer treats this as an abstract concern, and researchers are proposing new frameworks that modify existing AI algorithms to reduce these biases to a reasonable level. Recommender system algorithms are well optimized for accuracy and efficiency, but because recommender systems are built on top of information retrieval, machine learning, and artificial intelligence, they carry a high risk of producing biased outcomes. Our current research focuses on building a methodology for exploring potentially discriminatory biases based on protected characteristics in recommender systems. Moreover, the notion of discrimination in this work is not tied to any single definition proposed in prior literature. For this work we take a book recommender as the basis for observing bias in both the input and the output of a recommender.
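One simple way to observe bias in both the input and the output of a recommender, as described above, is to compare the share of each protected group in the catalog against its share in the recommended lists. The following is a minimal sketch under assumed toy data; the `author_gender` mapping, the item ids, and the `group_share` helper are all hypothetical illustrations, not part of the methodology described in this work.

```python
from collections import Counter

def group_share(items, group_of):
    """Fraction of items belonging to each group (e.g. author gender)."""
    counts = Counter(group_of[i] for i in items)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

# Hypothetical toy data: book id -> author gender label.
author_gender = {1: "F", 2: "M", 3: "F", 4: "M", 5: "M", 6: "F"}

catalog = list(author_gender)        # input side: the full book catalog
recommended = [2, 4, 5, 2]           # output side: one user's top-k list

input_share = group_share(catalog, author_gender)
output_share = group_share(recommended, author_gender)

# Disparity: how far each group's recommended share drifts from its
# catalog share; a large gap suggests the recommender amplifies bias.
disparity = {g: output_share.get(g, 0.0) - input_share[g]
             for g in input_share}
print(disparity)
```

With this toy data, books by female authors make up half the catalog but none of the recommendations, so the disparity is large; a real analysis would aggregate such gaps over many users and lists.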