Artificial Intelligence could impact Black voting during 2024 elections; Black leaders call for safeguards against it

20th November 2023

By Barrington M. Salmon
Contributing Writer

(TriceEdneyWire.com) — For much of the last century, segregationists and their anti-Black racist allies, intent on ensuring that African Americans could not exercise the right to vote, erected an assortment of barriers to that end.

Segregationists used the courts, local and state laws, literacy tests, poll taxes, fraud, brute force, violence and intimidation by the Ku Klux Klan to impede and prevent Black people from exercising their constitutional right.

In the 21st century, voter suppression has gone high-tech, with the same characters still plotting to control who votes, when and how. They are employing an assortment of methods, including Artificial Intelligence (AI). Concerns about the misuse of AI in the electoral ecosystem are what brought Melanie Campbell and Damon T. Hewitt to testify before the U.S. Congress.

Campbell, President & CEO of the National Coalition on Black Civic Participation (NCBCP) and Convener of the Black Women’s Roundtable (BWR), spoke of the urgency around creating safeguards and federal legislation to protect against the technology’s misuse as it relates to elections, democracy, and voter education, while fighting back against the increasing threats surrounding targeted misinformation and disinformation.

“AI has the potential to be a significant threat because of how rapidly it’s moving,” Campbell said. “There was Russian targeting of Black men with misinformation in 2020 to encourage them not to vote. It started in 2016.”

Both civil rights leaders warned that misinformation driven by artificial intelligence may worsen considerably for African-American voters leading up to the 2024 presidential election.

“What we have seen through our work demonstrates how racial justice, voting rights, and technology are inextricably linked,” said Hewitt, president and executive director of the Lawyers’ Committee for Civil Rights Under Law, during his testimony. “Voters of color already face disproportionate barriers to the ballot box that make it more difficult and more costly for them to vote, without factoring in the large and growing cost of targeted mis- and disinformation on our communities.”

Hewitt said AI technologies could be used to refine and test data to generate targeted lists of voters based on the patterns, interests, and behaviors of specific individuals.

“Forget using zip codes as a proxy for race; the targeted lists of tomorrow will weaponize sophisticated machine learning technologies, using individual identities or behaviors to target Black voters with surgical precision, all in order to mislead and harm them,” he warned.

Campbell and Hewitt said that during recent election cycles, African Americans have been specifically targeted by disinformation campaigns.

“AI technology threatens to turn already fragile conditions for our democracy into a perfect storm,” Hewitt said. “The spread of misinformation and disinformation online to influence elections and disenfranchise voters, often specifically Black voters, is already commonplace. Communities of color who already sacrifice so much to cast a ballot and make our democracy work are increasingly subjected to new downsides of technological innovation without reaping the rewards.”

The pair referred to a lawsuit, NCBCP v. Wohl, filed by the Lawyers’ Committee with the NCBCP as a plaintiff, against two men who targeted Black voters in New York, Pennsylvania and Ohio with disinformation via robocalls in an effort to sway the outcome of the 2020 elections.

“In the weeks before the 2020 Election, the Election Protection hotline received complaints from voters about robocalls using deceptive information to discourage people from voting. After investigating, we found that two individuals, Jack Burkman and Jacob Wohl, had sent 85,000 robocalls largely to Black Americans,” Hewitt said.

The goal was to discourage African Americans from voting by mail by falsely claiming that their personal information would be added to a public database used by law enforcement to execute warrants, to collect credit card debts, and by public health entities to force people to take mandatory vaccinations.

“These threats played upon systemic inequities likely to resonate with and intimidate Black Americans,” Hewitt said. “We filed a lawsuit, National Coalition on Black Civic Participation v. Wohl, in which a federal court issued a restraining order to stop the robocalls and later ruled that this conspiracy to silence Black voters was intimidating, threatening, and coercive in violation of the Voting Rights Act and Ku Klux Klan Act. The methods used for those deceptive robocalls in 2020 look primitive by 2023 standards. But they hold three important lessons for democracy when surveying the AI technology of today and tomorrow.”

Campbell concurred, saying AI would make this type of weaponization far more potent through texts, video and audio.

“AI increases the ability to do that in larger formats. We are trying to address this. Elections and democracy is really, really important,” she said. “So many places that can go. So much you can do online now. You have open source where just about anyone who wants to can use AI for nefarious means. There is a lot of angst with those doing voting rights and elections work.

“You don’t know how bad it can be until you know how bad it’s been.”

Campbell and Hewitt agreed that the exploding capabilities of AI technology could drastically multiply the harm to American democracy. Campbell added that Google, Microsoft and Meta are the front-line companies activists hope will step up and put guardrails in place before the 2024 elections are overwhelmed by AI-driven misinformation and disinformation.

“In malicious hands and absent strong regulation, AI can clone voices so that calls sound like trusted public figures, election officials, or even possibly friends and relatives,” said Hewitt. “The technology could reach targeted individuals across platforms, following up the AI call with targeted online advertisements, fake bot accounts seeking to follow them on social media, customized emails or WhatsApp messages, and carefully tailored memes.”

During his testimony, Hewitt detailed five principles that should guide AI regulation and legislation to protect U.S. democracy. AI should be regulated to protect Americans’ civil rights, including through an anti-discrimination provision directed at online contexts and algorithms; AI should be evaluated and assessed, both before and after deployment, for discrimination and bias; and developers and those deploying AI should have a “duty of care” to ensure their products are safe and effective, and be held liable if they are not.

AI regulation should also include transparency and “explainability” requirements so people know when, how, and why AI is being used; data protection requirements to ensure that AI is not used to harvest data from people who have not given their consent; and safeguards so that voter information is not tied to private information in order to target voters.

The effort being led by the Lawyers’ Committee and the NCBCP comes against the backdrop of similar alarm from the Biden administration, some lawmakers and AI experts who fear that AI will be weaponized to spread disinformation to heighten the distrust that significant numbers of Americans have towards the government and politicians.

President Joe Biden recently signed what has been described as “a sweeping executive order.” The order focuses on algorithmic bias, preserving privacy and regulating the safety of frontier AI models. The executive order also encourages open development of AI technologies, innovations in AI security and the building of tools to improve security, according to the Snyk Blog.

Vice President Kamala Harris echoed others concerned about the issue who fear that malevolent actors misusing AI could upend democratic institutions and cause Americans’ confidence in democracy to plunge. In her remarks, Harris cited the need for a more expansive definition of AI safety to encompass the “full spectrum” of threats, including the spread of disinformation, discrimination and bias.

“When people around the world cannot discern fact from fiction because of a flood of AI-enabled disinformation and misinformation, I ask, is that not existential?” Harris said in a speech at a Nov. 1 press conference at the 2023 AI Safety Summit in London, England. “For democracies, AI has to be in service of the public interest. We see the ways AI poses a threat to Americans every day, certainly in politics, and we are laying the foundation for an international framework to regulate AI.”

Harris concluded, “We’re going to do everything we can. This is one of the biggest concerns most people have.”

This article originally published in the November 20, 2023 print edition of The Louisiana Weekly newspaper.
