Algorithms of Prejudice: How Artificial Intelligence Aggravates Structural Inequities in the US

Artificial Intelligence (AI) is infiltrating every facet of politics, most visibly in the form of data-driven policy approaches. It is a powerful tool that can help policymakers paint a precise picture of the challenges facing government. Today, AI makes it easier to evaluate patterns in data collected by the government, facilitating informed policymaking, targeted advertising, and the education of uninformed voters. As AI becomes more prevalent in global politics and plays a growing role in legislative decision-making, it is starting to determine who gets access to employment opportunities, financial assistance, affordable housing, educational programs, and more. It is therefore crucial to address the ethical issues that arise when artificial intelligence is used in politics. Today, world leaders are confronting growing concerns about the accuracy of systems that can identify, profile, and track people.

The root of the problem is the biased data used to train AI systems. Marginalized people (such as racial minorities, undocumented immigrants, people with disabilities, and people living in poverty) do not have equal access to technology, leading to less engagement with data-generating activities. As a result, the predictions AI makes about minority-group outcomes in vital spheres like criminal justice and welfare tend to be negatively biased and inaccurate, and these groups experience unfair discrimination based on their race, age, gender, disability, or sexual orientation. One prominent example is Amazon's recruitment algorithm, which was trained on resumes submitted over a 10-year period. To establish an applicant's suitability, the algorithm analyzed word patterns in resumes rather than skill sets; because the majority of resumes received came from men, resumes containing words like "women's" were penalized. After detecting the gender bias, Amazon stopped using the recruiting algorithm – but not before 10 years' worth of resumes from women had been sent to the bottom of the pile.
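To make the mechanism concrete, here is a hypothetical toy sketch in Python. All names and data are invented, and this is not Amazon's actual model; it simply shows how a scorer that rewards word patterns seen in historically "hired" resumes, rather than skills, will penalize any word that happens to correlate with past rejections.

```python
from collections import Counter

# Invented training data: word patterns from past hiring outcomes.
# Because past hires skewed male, "women's" appears only among rejections.
hired_resumes = [
    "software engineer java leadership",
    "java developer chess club captain",
    "software engineer leadership java",
]
rejected_resumes = [
    "software engineer women's chess club captain",
    "java developer women's coding society",
]

hired_words = Counter(w for r in hired_resumes for w in r.split())
rejected_words = Counter(w for r in rejected_resumes for w in r.split())

def score(resume: str) -> int:
    # Each word gains a point per past "hired" occurrence and loses one
    # per past "rejected" occurrence -- skills are never examined.
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# Two candidates with identical skills; one resume mentions "women's".
a = score("software engineer java chess club captain")
b = score("software engineer java women's chess club captain")
# b scores lower than a solely because of the word "women's".
```

The point of the sketch is that no one coded "discriminate" anywhere: the penalty emerges automatically from skewed historical data, which is exactly why auditing training data matters more than auditing intentions.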

Governments have myriad ways of building our digital identities, drawing on arrest data, mass surveillance records, location identification, web tracking, passport applications, and sent and received emails. However, if the data carries inherent biases originating from past human choices, an algorithm will pick up on those biases and, in some cases, amplify them. For instance, the US government has put AI's capabilities to use in delivering public services such as crime prevention. Because these AI systems are fed criminal history records, the resulting algorithms can become a mere reflection of racial inequality in the criminal justice system. The Los Angeles Police Department, for example, uses predictive policing (PredPol) for crime detection and prevention. PredPol examines the types of crimes committed in certain areas, the times at which they occurred, and whether another crime is likely to take place there, thereby enabling property-crime prediction. PredPol generates maps containing 500-by-500-foot hotspots, which local police stations then use to plan their patrols. Residents of "hotspot" regions feel unsafe due to the continuous police presence and the excessive violence employed by officers, highlighting the enormous risks that biased data creates for minority groups. Unfortunately, there is already overwhelming evidence that the criminal justice system treats these residents unfairly.
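The feedback loop described above can be illustrated with a minimal, hypothetical sketch. This is not PredPol's actual model; the coordinates and incident log are invented, and the grid cells merely stand in for the 500-by-500-foot tiles mentioned above. The sketch shows why a predictor trained on *recorded* incidents re-flags the places that were already most patrolled.

```python
from collections import Counter

CELL_FT = 500  # side length of a map tile, in feet

def cell_of(x_ft: float, y_ft: float) -> tuple:
    """Map a coordinate (in feet) to its grid cell."""
    return (int(x_ft // CELL_FT), int(y_ft // CELL_FT))

def hotspots(incidents, top_n=2):
    """Flag the top_n grid cells with the most recorded incidents.
    If the incident log mostly reflects where police already patrol,
    the same cells get flagged again -- a self-reinforcing loop."""
    counts = Counter(cell_of(x, y) for x, y in incidents)
    return [cell for cell, _ in counts.most_common(top_n)]

# Invented log: heavily patrolled neighborhoods generate more
# *recorded* incidents, regardless of underlying crime rates.
log = [(120, 80), (340, 450), (90, 300),   # three records in cell (0, 0)
       (610, 90), (700, 200),              # two records in cell (1, 0)
       (1600, 1700)]                       # one record in cell (3, 3)

flagged = hotspots(log)  # patrols are sent back to the same cells
```

Because the model only ever sees recorded incidents, over-policed cells accumulate records faster, get flagged, and are then patrolled even more heavily, which is the dynamic the paragraph above warns about.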

Furthermore, AI techniques can help political figures target, manipulate, and exploit potential voters through personal and emotional appeals and the targeted dissemination of false information on social media. A famous example is the pro-Trump bots that used Twitter hashtags to spread automated content containing false information during the 2016 presidential election. Pro-Trump bots outnumbered pro-Hillary Clinton bots by a five-to-one ratio, potentially shifting the course of an election and becoming a critical example of the impact AI can have in destabilizing democracy. Social media platforms now have unprecedented access to information, yet face little accountability for how that information is used to build harmful artificial intelligence systems. Concerns about AI's potential misuse have prompted lawmakers to develop federal standards intended to serve as building blocks for reliable, robust AI systems. Although groups like the US National Institute of Standards and Technology (NIST) have made strides toward this goal, the US government still needs to address the present legal system's limitations in coping with algorithmic harms. Legal liability should be the remedy when human rights principles are at stake.

As AI’s role in political decision-making has grown, whistleblowers and public outrage have thrust algorithmic bias and misinformation into the spotlight. Despite the ACLU and politicians on both sides of the aisle urging the Biden Administration to reduce the dangers of AI, Biden’s AI policies place little emphasis on algorithmic discrimination. Unsurprisingly, the Trump Administration paid little attention to discrimination attributable to algorithmic decision-making either. In developing its AI policy, the US government should prioritize equitable social opportunity and data protection. The Biden Administration can use legislation to curb destructive technological practices and protect the population from the misuse of algorithmic systems. Estonia, Norway, Finland, and Sweden rank in the top five countries for responsible AI use. These countries follow the OECD’s AI principles – inclusivity, accountability, transparency, privacy, and the duty to develop technology policy in a way that benefits all citizens. To avoid discriminatory consequences for marginalized groups, the US government must incorporate these principles into its technological strategy. Otherwise, AI will exacerbate existing inequalities.

To properly implement AI oversight programs in the United States, the government must commit to major investments in AI research. The US government has access to large amounts of data regarding social security, healthcare, taxation, and so forth. Deep learning, computer vision, machine translation, speech recognition, and robotics are techniques with the potential to improve the efficiency of policymaking at every stage in these fields. When AI performs repetitive, time-consuming, and mundane tasks, it frees the labor force to concentrate on work that demands human input. There are many challenges along the way, but the opportunities are vast. Boston Consulting Group suggests that oversight programs would do best to follow the principles of "responsible AI". Only when these principles are followed can AI be used to boost productivity and overall efficiency without sacrificing equity. As Dunja Mijatović, Council of Europe Commissioner for Human Rights, once said, “There is a clear interconnection between artificial intelligence systems and the quality of our democracies. Good governance can make the best out of technology for humans and the living environment. Bad governance will exacerbate inequality and discrimination and challenge fundamental democratic values.”

AI has significantly reshaped US politics in recent years, showing that political leaders must be especially vigilant about the dangers described above. The US government should take concrete steps to eradicate AI-generated discrimination and to safeguard vulnerable communities facing systemic prejudice. American AI policy must prioritize ethical norms and legislative accountability for the consequences of using artificial intelligence, thereby allowing AI to serve its original purpose: the betterment of society as a whole.