Bias in Artificial Intelligence

What is bias in AI?

Training an artificial intelligence (AI) system involves five steps: data collection, data pre-processing, model selection, training, and evaluation. In many ways, training AI resembles the way the human brain learns. The brain is adept at making predictions: it spots patterns and draws on previous experience, which is why someone may feel nauseated at the sight of a food that once made them sick. An AI system likewise goes through a training process, learning through trial and error until it can predict outcomes. Microsoft Edge, for example, offers an AI-powered recommendation tab that suggests content based on the user’s previous interests.

Sometimes bias slips in during this training process. The problem occurs when the data fed to the AI system leads it to produce systematically biased results. There are two main reasons an AI system becomes biased: cognitive bias and incomplete data. Psychologists have defined and categorized more than 180 human biases; some datasets carry those biases, or the person training the system may introduce them without realizing it. The other reason is incomplete data, meaning a dataset that is not representative of the population it is meant to describe. For example, a survey sent only to a predominantly white college captures one specific group of people rather than the whole population of college students.
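To make those five steps concrete, here is a minimal sketch in Python using scikit-learn. The synthetic dataset, model choice, and split sizes are illustrative assumptions standing in for a real project, not a prescription.

    # A minimal sketch of the five training steps using scikit-learn.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    # 1. Data collection (a synthetic stand-in for gathering real records)
    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

    # 2. Data pre-processing: split the data and scale the features
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)
    scaler = StandardScaler().fit(X_train)
    X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

    # 3. Model selection: a simple logistic regression classifier
    model = LogisticRegression()

    # 4. Training: the model learns patterns from the training data
    model.fit(X_train, y_train)

    # 5. Evaluation: check predictions on data the model has not seen
    print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

Bias can enter at any of these steps, but it most often enters through the data collected in the first one.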


Legal Analysis

In August 2023, iTutorGroup, an English-language tutoring service for students in China, settled an employment discrimination lawsuit filed by the U.S. Equal Employment Opportunity Commission (EEOC). According to the EEOC’s lawsuit, iTutorGroup’s AI-powered hiring software automatically rejected female applicants aged 55 or older and male applicants aged 60 or older. The EEOC filed the lawsuit in May 2022, the first of its kind. Under the settlement, iTutorGroup agreed to pay $365,000, to be distributed among the more than 200 job candidates who were automatically screened out and deemed “too old.”

In 2018, Amazon discontinued a hiring algorithm it had used to screen candidates for engineering roles. The company dropped the tool after discovering that it downgraded female applicants even when gender was not listed on their resumes. As Clark Hill attorneys Myriah Jaworski and Vanessa Kelly point out, the algorithm was trained on roughly ten years of past resumes, a period in which the candidates were predominantly white men. As a result, the algorithm learned to favor white men and to penalize resumes containing the word “women.” Jaworski and Kelly wrote in a column for Law.com, “A well-intentioned race- or gender-neutral automated tool may not, in fact, be race- or gender-neutral.”
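To illustrate the mechanism at work, the toy Python sketch below (not Amazon’s actual system) trains a simple text classifier on invented, historically skewed hiring outcomes. The resumes and labels are fabricated for illustration; the point is only that the model ends up with a negative weight on a gendered token even though gender was never an explicit input field.

    # Toy illustration: a classifier trained on historically skewed hiring
    # outcomes learns a penalty for a gendered term.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented historical resumes and outcomes: resumes mentioning a women's
    # organization were mostly rejected, mirroring past human decisions.
    resumes = [
        "chess club captain software engineer python",
        "software engineer java backend systems",
        "women's chess club captain software engineer python",
        "women's coding society member software engineer java",
        "python developer distributed systems",
        "women's robotics team lead python developer",
    ]
    hired = [1, 1, 0, 0, 1, 0]  # biased historical labels, not true ability

    vec = CountVectorizer()
    X = vec.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # The learned weight for the token "women" comes out negative: the model
    # has encoded the historical bias without ever seeing a gender field.
    weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
    print("weight for 'women':", weights["women"])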

Another case, one that cites Amazon’s hiring algorithm as an example, is a class action lawsuit filed in February 2023 against Workday, an HR software giant. The plaintiff, Derek Mobley, says he has applied since 2017 to more than 100 positions that used Workday as their screening platform and has been rejected from every single one. The suit accuses Workday of allowing intentional employment discrimination based on race, age, and disability. Mobley v. Workday had not been resolved as of April 2024.


Ethical Analysis

The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool, created by Northpointe, was used to predict which criminal defendants in the United States had the highest chance of reoffending. In 2016, ProPublica, an independent, non-profit investigative journalism organization, analyzed the scores of more than 10,000 criminal defendants in Broward County, Florida, and reported that COMPAS was more likely to label black defendants as high risk than white defendants. COMPAS predicted reoffending with an accuracy of around 60% for both black and white defendants. However, when other variables (prior crimes, age, and gender) were controlled for, black defendants were 77% more likely than white defendants to be classified as higher risk. Fairness in AI is crucial in situations like this: a person should not be put on a high-risk list because an AI misjudges them based on their skin tone. And bias in AI is not confined to the criminal justice system.
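The pattern ProPublica described, similar overall accuracy but very different error rates across groups, can be shown with a short Python sketch. The counts below are invented for illustration and are not ProPublica’s data.

    # Illustrative only, with invented numbers: two groups can see the same
    # overall accuracy while one group receives far more false positives.
    import numpy as np

    def rates(y_true, y_pred):
        y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
        accuracy = (y_true == y_pred).mean()
        # False positive rate: scored high risk among those who did not reoffend
        fpr = ((y_pred == 1) & (y_true == 0)).sum() / (y_true == 0).sum()
        return accuracy, fpr

    # y_true: 1 = actually reoffended; y_pred: 1 = scored high risk
    group_a_true = [0] * 50 + [1] * 50
    group_a_pred = [1] * 20 + [0] * 30 + [1] * 30 + [0] * 20  # 20 of 50 non-reoffenders flagged
    group_b_true = [0] * 50 + [1] * 50
    group_b_pred = [1] * 10 + [0] * 40 + [1] * 20 + [0] * 30  # 10 of 50 non-reoffenders flagged

    for name, yt, yp in [("group A", group_a_true, group_a_pred),
                         ("group B", group_b_true, group_b_pred)]:
        acc, fpr = rates(yt, yp)
        print(f"{name}: accuracy={acc:.0%}, false positive rate={fpr:.0%}")

Both groups come out at 60% accuracy, yet group A’s false positive rate is twice group B’s, which is why judging a tool by accuracy alone can hide unfair treatment.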


Conclusion

Training an artificial intelligence system is more than simply feeding a model data. It involves a complex series of steps that resemble human learning. Like a human student, an AI system learns from the data it is given, then reflects on those patterns and makes predictions based on them. However, even with its advanced capabilities, AI is not immune to bias. Bias can creep in for a number of reasons, such as the cognitive biases of the people involved or the use of incomplete datasets during training. AI can be unbiased, but it is up to the people training the models to make it so. To minimize bias, it is crucial to use diverse and representative datasets during training. The individuals training AI systems should also be aware of their own biases and work to mitigate them during development. Finally, regular evaluation and adjustment help ensure that an AI system remains unbiased and accurate over time.
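As one example of what a routine check for representativeness might look like, the sketch below compares the share of each group in a hypothetical training sample against reference population shares and flags large gaps. The group names, numbers, and tolerance are assumptions made for illustration.

    # A simple, illustrative representativeness check: compare the share of
    # each group in a sample against a reference population and flag gaps.
    from collections import Counter

    def representation_gaps(sample_groups, population_shares, tolerance=0.05):
        counts = Counter(sample_groups)
        total = len(sample_groups)
        gaps = {}
        for group, expected in population_shares.items():
            observed = counts.get(group, 0) / total
            if abs(observed - expected) > tolerance:
                gaps[group] = (observed, expected)
        return gaps

    # Hypothetical survey respondents versus the wider student population
    sample = ["white"] * 85 + ["black"] * 5 + ["hispanic"] * 6 + ["asian"] * 4
    population = {"white": 0.55, "black": 0.15, "hispanic": 0.20, "asian": 0.10}

    for group, (observed, expected) in representation_gaps(sample, population).items():
        print(f"{group}: {observed:.0%} of sample vs {expected:.0%} of population")

Running a check like this before training, and again at each re-evaluation, is one concrete way to catch the kind of unrepresentative data described above before it turns into biased predictions.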