Algorithmic Biases

LMC 3304 - Andrew Gallasi, Jessica Ball, Evan Shellow, and Emelia Gapp

Whether purposefully or accidentally, engineers can embed racial biases in the products and services they design. For example, several parkway overpasses built on Long Island in the 1930s have been claimed to reflect a political motivation to keep poorer African Americans away from the beach: the bridges were too low for public transportation buses to pass under, so bus routes could not reach those areas. As a result, many African Americans were unable to enjoy Long Island's pristine beaches. Though racial bias in engineering may not be as blatantly public or physical today, software engineering has taken up the mantle, enforcing racist ideals through various artificial intelligence algorithms. 

Bridges designed by Robert Moses that were purposefully built too low for public transit buses to pass under excluded the poorer African American community from Long Island's beaches

One such example is Netflix's recommendation algorithm, which tailors marketing toward the African American community. Even though Netflix does not ask about race when users create accounts, the algorithm that chooses which movies and TV shows to display can make its own assumptions based on a user's search and watch histories. If the algorithm identifies a user as black, it will surface a multitude of black films and movies featuring black actors. This in turn gives the illusion that there are more films starring black actors and directors than actually exist, creating a false sense of equal representation in the movie industry. The algorithm further deceives users by displaying different movie posters to different presumed groups. For example, it may show a poster featuring black supporting actors for a movie with a predominantly white cast, presenting the illusion of a black feature film. Meanwhile, if the algorithm identifies a user as white, it will display a poster for the same film with the white leads on the front instead. The accompanying images show this deception clearly: the official poster for the film Like Father features its stars Kristen Bell and Kelsey Grammer, yet Netflix has been shown to display, on occasion, an alternate poster featuring the supporting black actors Leonard Ouzts and Blaire Brooks. 
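Netflix's actual system is proprietary, so the mechanism can only be sketched. The short illustration below, with entirely invented titles, tags, and weights, shows how a recommender could end up choosing the supporting-cast poster for a presumed black viewer without ever storing a race field: the proxy comes purely from tags inferred from watch history.

```python
# Purely illustrative sketch -- Netflix's real system is proprietary and far
# more complex. This shows how artwork personalization can encode a racial
# proxy without any explicit race field: the "affinity" features are inferred
# solely from watch history. All titles, tags, and scores are hypothetical.

from collections import Counter

# Hypothetical watch history: each title is tagged with content attributes.
WATCH_HISTORY_TAGS = {
    "Title A": ["black-led-cast", "comedy"],
    "Title B": ["black-led-cast", "drama"],
    "Title C": ["romcom"],
}

# Candidate poster variants for one film, tagged by who appears in each.
POSTER_VARIANTS = {
    "poster_main_cast": ["white-lead-actors"],
    "poster_supporting_cast": ["black-supporting-actors"],
}

def infer_affinities(history_tags):
    """Count tag frequencies in the watch history -- a crude stand-in for the
    latent 'taste' features a real recommender would learn."""
    counts = Counter(tag for tags in history_tags.values() for tag in tags)
    total = sum(counts.values()) or 1
    return {tag: n / total for tag, n in counts.items()}

def score_poster(poster_tags, affinities):
    """Score a poster by how well its tags match the inferred affinities.
    A 'black-led-cast' affinity is treated as matching 'black-supporting-actors',
    which is exactly the proxy behavior described above."""
    related = {"black-led-cast": "black-supporting-actors"}
    score = 0.0
    for tag, weight in affinities.items():
        if tag in poster_tags or related.get(tag) in poster_tags:
            score += weight
    return score

affinities = infer_affinities(WATCH_HISTORY_TAGS)
best = max(POSTER_VARIANTS, key=lambda p: score_poster(POSTER_VARIANTS[p], affinities))
print("Poster shown to this user:", best)  # -> poster_supporting_cast
```

Nothing in this sketch ever asks about race, yet the output differs by inferred group, which is the point: the bias lives in what the system optimizes for, not in an explicit demographic field.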

Further evidence of racial bias in algorithms can be drawn from the widespread deepfakes circulated during the recent election. Deepfakes are fabricated videos, pictures, and audio files intended to make a celebrity or political leader appear to say something they never actually said. As a result, deepfakes are used to spread misinformation, erode trust in political candidates, and intimidate voters. By targeting deepfakes at specific voting blocs, such as black voters, malicious programmers can sway entire elections if the deepfake is effective enough. This in turn can harm the affected communities and cascade into a much broader shift in the political environment.

Deepfakes of prominent politicians, such as President Obama, were used to spread misinformation during the 2020 election

To curb the impact of algorithmically generated content such as deepfakes, there must be better education directed toward the targeted communities and voting blocs. Misinformation can be countered with media literacy and education about how information spreads online. The social media platforms that inadvertently host deepfakes also need to employ better detection systems and stricter verification of posts. The more quickly a deepfake can be removed from a platform, the sooner the spread of misinformation stops. 
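Platform detection pipelines are far more sophisticated than anything shown here, relying on machine-learning classifiers and perceptual fingerprints, but a toy sketch conveys the basic idea of screening new uploads against a shared list of already-identified deepfakes. Everything below, including the hash list and file names, is hypothetical, and the exact-match hashing is a simplification that a single re-encode would defeat.

```python
# Toy sketch of one piece of a takedown pipeline: matching new uploads against
# a shared blocklist of already-identified deepfakes. Real platforms use
# perceptual hashes and ML classifiers rather than exact SHA-256 matches.
# All hashes and file names here are hypothetical.

import hashlib
from pathlib import Path

# Hypothetical blocklist of hashes for videos already confirmed as deepfakes;
# in practice such a list would be shared across platforms and updated often.
KNOWN_DEEPFAKE_HASHES = {
    "3f4a9c...",  # placeholder entry
}

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large videos never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def screen_upload(path: Path) -> bool:
    """Return True if the upload matches a known deepfake and should be held
    for review before it can spread."""
    return sha256_of(path) in KNOWN_DEEPFAKE_HASHES

# Example usage with a hypothetical file:
# if screen_upload(Path("incoming/campaign_clip.mp4")):
#     hold_for_moderator_review()
```

Even this crude version illustrates why speed matters: screening happens at upload time, before the content reaches the feeds of the communities being targeted.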

These algorithms have the power to influence people on a large scale. Companies like Google reach millions of people and are trusted by the general public, yet many of Google's features demonstrate the biases that exist in its algorithms. Though these biases surface through algorithms, there are people behind those algorithms who are responsible for these decisions, and many of them carry their racist or sexist biases into that work. 

Google autosuggestions, for example, often propose racist or sexist ideas, as seen in the picture below. There are vast differences between the suggestions Google makes for queries about minorities and women versus those about white men. This would be less of an issue if these algorithms were regulated to prevent it, especially at scales that can influence millions of people. Google has become part of daily life and is thus a source trusted by the public, making it crucial that its algorithms are checked for biases like these. The creators of these algorithms have the power to impact the masses, and there should be regulations to manage the choices they make.

Examples of biases seen in Google search autosuggestions
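Researchers and journalists have audited autosuggestion bias by issuing paired search prefixes and comparing what comes back. The sketch below is one illustrative way to do that; it assumes Google's unofficial suggestion endpoint (suggestqueries.google.com), which is undocumented, unsupported, and may change or throttle automated requests at any time. The example prefixes are ours, chosen only to mirror the kinds of comparisons shown in the image above.

```python
# Minimal audit sketch: fetch autocomplete suggestions for paired prefixes and
# print them side by side so differences in tone can be compared by hand.
# Assumes Google's unofficial suggestion endpoint, which is undocumented and
# may change, rate-limit, or block automated traffic at any time.

import json
import urllib.parse
import urllib.request

SUGGEST_URL = "https://suggestqueries.google.com/complete/search?client=firefox&q="

def fetch_suggestions(prefix: str) -> list[str]:
    """Return the suggestion strings for a search prefix (best effort)."""
    url = SUGGEST_URL + urllib.parse.quote(prefix)
    with urllib.request.urlopen(url, timeout=10) as resp:
        payload = json.loads(resp.read().decode("utf-8"))
    return payload[1]  # observed response shape: [query, [suggestion, ...], ...]

# Paired prefixes that differ only in the group named, for illustration.
PROMPT_PAIRS = [
    ("why are women so", "why are men so"),
    ("black girls are", "white girls are"),
]

for a, b in PROMPT_PAIRS:
    print(f"{a!r}: {fetch_suggestions(a)}")
    print(f"{b!r}: {fetch_suggestions(b)}")
    print()
```

A systematic audit would run many such pairs over time and categorize the results, but even a handful of paired queries makes the asymmetry visible.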

Although algorithmic bias has a clear impact on individuals and can affect society at large scales, federal regulation has proven slow and difficult. In the documentary “Coded Bias,” Cathy O’Neil, an expert on data science and inequality in algorithms, notes that “there should be an FDA for algorithms…[companies should show] that it’s going to be fair, that it’s not going to be racist, that it’s not going to be sexist...before [they] put it out.” No such regulatory body exists today.

Recent lawsuits have brought the topic of algorithmic bias before the courts on the grounds that it may violate the right to due process, but success in limiting the use of biased algorithms has been mixed. In the case of Houston teachers evaluated by a proprietary algorithm, a federal judge ruled that evaluation based on a secret algorithmic process denied teachers their right to challenge terminations: the secrecy of this potentially biased algorithm violated the constitutional right to due process (Hous. Fed'n of Teachers v. Hous. Indep. Sch. Dist.).

Daniel Santos, a Houston middle school teacher, was rated “ineffective” by a proprietary evaluation algorithm, despite excellent reviews from the administration. A lawsuit resulted in the use of this algorithm being declared unconstitutional.

However, in another case, the Wisconsin Supreme Court ruled that the use of the proprietary COMPAS algorithm, a system used to estimate the risk of recidivism for defendants and inform sentencing, did not violate the right to due process (State v. Loomis). Despite the secret workings of the algorithm and the racial bias of COMPAS demonstrated in research studies, the court held that a defendant's ability to see the algorithm's ranked results, with no understanding of how those results were produced, constituted sufficient knowledge of the reasoning behind the sentence. While these are just two contrasting cases in a rapidly growing list of legal disputes over biased algorithms, they demonstrate the complexity and uncertainty surrounding any future federal regulation of algorithms.

A comparison of risk of recidivism scores for black versus white defendants indicates clear racial bias in the COMPAS algorithm, but its use for sentencing was upheld in court
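The racial bias those studies describe is usually summarized as an error-rate gap: among people who did not go on to reoffend, black defendants were flagged as high risk far more often than white defendants. The sketch below computes that false positive rate by group from a handful of invented records; the numbers are made up for illustration and are not drawn from the actual COMPAS data.

```python
# Toy calculation of the disparity measure used in analyses of COMPAS-style
# risk scores: the false positive rate (flagged high risk but did not
# reoffend), broken out by group. These records are invented for
# illustration; they are NOT real COMPAS data.

from collections import defaultdict

# Each record: (group, flagged_high_risk, actually_reoffended)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("black", True, False), ("white", False, False), ("white", True, True),
    ("white", False, False), ("white", False, True),
]

def false_positive_rate_by_group(rows):
    """FPR = share flagged high risk among those who did NOT reoffend."""
    flagged = defaultdict(int)
    negatives = defaultdict(int)
    for group, high_risk, reoffended in rows:
        if not reoffended:
            negatives[group] += 1
            if high_risk:
                flagged[group] += 1
    return {g: flagged[g] / negatives[g] for g in negatives}

print(false_positive_rate_by_group(records))
# Prints roughly {'black': 0.67, 'white': 0.0} for this toy sample -- the kind
# of error-rate gap that research studies have reported for COMPAS.
```

The point of such a calculation is that a score can look reasonable in aggregate while distributing its mistakes very unevenly across groups, which is exactly the harm the figure above illustrates.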

Our purpose in including these varied examples of algorithmic bias is twofold: to convey the scope of the problem and to show how pervasive algorithmic bias is in our daily lives. Through these examples we see that algorithms shape our media consumption, how we receive and process information, and how we are judged in courts of law. In an ever-changing environment that grows more dependent on technology and its improvement, algorithmic bias ultimately affects our daily lives. 

“The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives…[these models] tended to punish the poor and the oppressed in our society, while making the rich richer” (O’Neil 3). 

We have seen through these examples, other presentations, and the readings for this class that algorithms encode the biases that engineers and developers carry with them into their jobs as builders, inventors, and problem-solvers. Given the widespread impact and injustice caused by algorithms, and the uncertain future of their federal regulation, the responsibility to prevent social harm from these varied algorithms falls on the engineers who design them. This presents a need for engineers not only to be more involved in the ethics of their fields but to be active in all realms that their work may affect. Systemic inequality does not happen in a vacuum, and it is the responsibility of engineers to actively confront their biases and how those biases may find their way into their work. 

This starts in educational spaces like colleges and universities, where we have access to varied coursework and to people who specialize in areas beyond STEM. There is an opportunity to blend fields to get a more holistic understanding of the world around us, which engineering directly impacts in almost every way. Ethics courses and humanities requirements are important for the reasons we have discussed: they allow engineers and people in STEM to reach an understanding that extends beyond math, coding, and physics. 

Engineers also have a responsibility to be active participants in their communities and in society at large. Issues of racism, gender equity, representation, and more do not stop when STEM begins; rather, the urgency of addressing these issues should be amplified in the spaces of our scientists and engineers. To right the wrongs that algorithmic bias causes, engineers should look to form a symbiotic relationship with the humanities and other fields which focus on issues of inequality to ensure a better future for the communities around them. 

“We need an FDA for algorithms that says, hey, show me evidence that it’s going to work, not just to make you money, but that it’s going to work for society. That it’s going to be fair, that it’s not going to be racist, that it’s not going to be sexist, that it’s not going to discriminate against people with disability status. Show me that it’s legal, before you put it out. That’s what we don’t have yet.” -Cathy O’Neil
