Discriminatory Biases in Data for Machine Learning and Human Rights

Abstract

The intersection of data generated by machine learning algorithms and human rights is not always obvious, yet it is often accepted as self-evident. Algorithms are created by people, and so they are not inherently sensitive to gender, social, racial, or moral issues. Typically, human characteristics such as gender, race, and socio-economic class are treated as determinants of our potential to achieve outcomes on certain performance tasks. This practice is problematic because it sets expectations directly from a protected attribute. How, then, do we ensure that machine learning datasets are not embedded with racist, sexist, or other biases that potentially violate human rights? The objective of this study is to explain how we can create realistic algorithms and accurate datasets while upholding human decency and avoiding disparate treatment and disparate impact. History and political systems may mend human rights disparities over time; machine learning cannot, because it inherits the biases embedded throughout that history. So where do we go from here? We can formalize a non-discrimination criterion that optimizes fairness: a system in which a protected human characteristic is not tied to an expectation for certain categories. This topic is of great interest and importance because continuing to wrongfully build risk assessment algorithms can and will create deeper discrimination gaps and violations of human rights.
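To make the idea of a non-discrimination criterion concrete, the minimal Python sketch below measures one common formalization of it, demographic parity: a model's positive predictions should be distributed equally across protected groups. The function name, data, and group labels are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, protected: np.ndarray) -> float:
    """Largest difference in positive-prediction rate across protected groups.

    A gap of 0.0 means predictions are independent of the protected attribute
    (demographic parity); larger gaps indicate potential disparate impact.
    """
    rates = [y_pred[protected == g].mean() for g in np.unique(protected)]
    return float(max(rates) - min(rates))

# Hypothetical risk-assessment predictions (1 = flagged as high risk)
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
# Hypothetical protected attribute with two demographic groups, A and B
protected = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(f"Demographic parity gap: {demographic_parity_gap(y_pred, protected):.2f}")
# Here group A is flagged at a 0.75 rate and group B at 0.25, giving a gap of 0.50.
```

Demographic parity is only one possible criterion; others, such as equalized odds, condition on the true outcome. Which criterion is appropriate depends on the task and the harms at stake.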



Author Information
Euxhenia Hodo, John Jay College of Criminal Justice, United States

Paper Information
Conference: IICE2022
Stream: Education

The full paper is not available for this title


Virtual Presentation

