
Guidelines issued to end racial bias in artificial intelligence

3rd May 2021

By Ryan Whirty
Contributing Writer

Advocates for fairness in applied technology have lauded the Federal Trade Commission’s recent efforts to ensure that companies’ use of artificial intelligence to make automated decisions doesn’t discriminate on the basis of race, ethnicity or gender.

David Brody, senior counsel and senior fellow for Privacy and Technology at the Lawyers’ Committee for Civil Rights Under Law, said many companies employ AI scanning and identification technologies built on algorithms that are racially or otherwise biased, critical flaws that stem from the inherent biases of the human programmers who create those algorithms.

The problem has been exacerbated by the COVID-19 pandemic, as medical facilities, insurance companies and other healthcare entities use inherently biased models to make medical decisions that often degrade the care received by people of color, a demographic already shown to suffer disproportionately from the coronavirus.

“These are systems built by people,” Brody told The Louisiana Weekly. “But people are fallible. They make mistakes.”

“The system is only as good as we make it,” Brody added. “If the system isn’t diverse and inclusive, it could lead to blind spots later. The systems need diverse building teams for different perspectives” when programming algorithms.

As a result, Brody welcomed guidelines for equitable use of artificial intelligence programs by businesses that were published by the FTC on April 19. The FTC noted that a company might be violating federal law if it makes automated decisions by employing biased or prejudicial AI algorithms.

The federal guidelines, written by FTC attorney Elisa Jillson, are titled “Aiming for truth, fairness, and equity in your company’s use of AI.” The policy statement asserted that AI programming carries great potential for positive advancement, but it can also have detrimental impacts, especially during the COVID-19 era.

“Advances in artificial intelligence (AI) technology promise to revolutionize our approach to medicine, finance, business operations, media and more,” Jillson wrote. “But research has highlighted how apparently ‘neutral’ technology can produce troubling outcomes – including discrimination by race or other legally protected classes. For example, COVID-19 prediction models can help health systems combat the virus through efficient allocation of ICU beds, ventilators, and other resources. But as a recent study in the Journal of the American Medical Informatics Association suggests, if those models use data that reflect existing racial bias in healthcare delivery, AI that was meant to benefit all patients may worsen healthcare disparities for people of color.”

The FTC guidelines cite a study published in the January 2021 issue of the Journal of the American Medical Informatics Association, entitled “Bias at warp speed: how AI may contribute to the disparities gap in the time of COVID-19.” The study specifically examines how racially biased AI programming has harmed people of color during the coronavirus pandemic.

“… [T]he global research community, placing high hopes in artificial intelligence (AI), is rushing to push out new findings as quickly as possible, creating a veritable research deluge,” the study reported. “In this frenzy, the risk of producing biased prediction models due to unrepresentative datasets and other limitations during model development is higher than ever. If not properly addressed, propagating these biases under the mantle of AI has the potential to exaggerate the health disparities faced by minority populations already bearing the highest disease burden. We believe there is an urgent need to enforce the systematic use of reporting standards and broad sharing of code and data to address the challenges of bias in AI during the COVID-19 pandemic.”

The FTC statement notes that the Commission has used and can continue to use three federal laws specifically – Section 5 of the FTC Act, the Fair Credit Reporting Act and the Equal Credit Opportunity Act – to safeguard against biased AI algorithms.

“Among other things, the FTC has used its expertise with these laws to report on big data analytics and machine learning; to conduct a hearing on algorithms, AI and predictive analytics; and to issue business guidance on AI and algorithms,” Jillson wrote. “This work – coupled with FTC enforcement actions – offers important lessons on using AI truthfully, fairly and equitably.”

The FTC specified several actions businesses can take to reduce or eliminate biased algorithms and disproportionately negative automated decisions, including starting with a solid technological foundation, consciously monitoring for discriminatory outcomes, embracing procedural transparency, not exaggerating the objectivity of their algorithms, being honest when evaluating data used in programming, maintaining strict accountability, and following the principle of doing more good than harm.
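One of those steps, monitoring for discriminatory outcomes, can be made concrete with a simple disparate-impact check. The short Python sketch below is illustrative only; the group names, approval counts and the “four-fifths” review threshold are assumptions used for demonstration, not language from the FTC guidance.

# Illustrative sketch: comparing approval rates of an automated
# decision system across demographic groups. All numbers are
# hypothetical; the 0.8 cutoff is a common rule-of-thumb threshold.

def selection_rate(approved, total):
    """Fraction of a group that received a favorable decision."""
    return approved / total

outcomes = {
    "group_a": {"approved": 480, "total": 1000},
    "group_b": {"approved": 300, "total": 1000},
}

rates = {g: selection_rate(o["approved"], o["total"]) for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    # Flag groups whose selection rate falls well below the highest.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rate:.2f}, ratio to highest {ratio:.2f} [{flag}]")

A low ratio does not by itself prove unlawful discrimination, but it flags a pattern that warrants the kind of audit the FTC describes.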

Brody told The Louisiana Weekly that in addition to guarding against the biases of AI programmers, companies also need to use diverse, ethnically representative data when building and deploying automated algorithms.

“They should look at a large amount of data and look for correlations and trends in the data,” he said. “The challenge is if the data businesses are using to train the algorithms has biases embedded in it, then these biases will make their way into those algorithms.”

He added that “the algorithm is going to learn to discriminate on race or sex or anything else in the data. You have to ask, ‘Is this data pure and authentic, or tainted by bias?’”
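Brody’s point can be seen in miniature with a toy model. In the Python sketch below, the “model” simply learns the majority historical decision for each feature value; the records are synthetic and assumed purely for illustration. Because the invented history disadvantages one group, the trained model reproduces that disadvantage exactly.

# Minimal sketch of how bias embedded in training data is learned.
# All records are synthetic; the "model" just predicts the majority
# historical outcome seen for each feature value.

from collections import Counter, defaultdict

# Hypothetical (feature, past_decision) records in which past human
# decisions disadvantaged applicants with feature "x".
training_data = (
    [("x", "deny")] * 70 + [("x", "approve")] * 30 +
    [("y", "deny")] * 30 + [("y", "approve")] * 70
)

# "Training": count the outcomes observed for each feature value.
counts = defaultdict(Counter)
for feature, decision in training_data:
    counts[feature][decision] += 1

def predict(feature):
    """Return the most common historical decision for this feature."""
    return counts[feature].most_common(1)[0][0]

print(predict("x"))  # "deny" - the biased history is faithfully learned
print(predict("y"))  # "approve"

Nothing in the code targets a group; the skew comes entirely from the data the model was handed.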

In addition to flawed decision-making in health care, Brody pointed to housing trends and racially prejudiced practices such as redlining, which segregate neighborhoods and communities by drastically restricting where the poor and people of color can live.

“[Redlining] causes certain societal consequences [such as in] home ownership and passing on intergenerational wealth,” he said. “If you look at the data on where people live but don’t look at why that’s the case [in the data], the algorithm is going to reinforce redlining and segregation.”
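The mechanism Brody describes is often called a proxy variable: even when race is never given to a model, a correlated feature such as location can stand in for it. The sketch below is a hypothetical illustration; the neighborhood labels and approval histories are invented, not real lending data.

# Illustrative sketch of a proxy variable. Race never appears below,
# but because neighborhoods were segregated, scoring applicants by
# their neighborhood's approval history re-encodes the redlined
# pattern. All records are synthetic.

records = (
    [("zip_a", False)] * 80 + [("zip_a", True)] * 20 +
    [("zip_b", False)] * 20 + [("zip_b", True)] * 80
)

def historical_approval_rate(zip_code):
    """Share of past applications from this area that were approved."""
    results = [approved for z, approved in records if z == zip_code]
    return sum(results) / len(results)

# A model keyed on neighborhood history simply inherits the segregation.
for z in ("zip_a", "zip_b"):
    print(z, f"approval rate: {historical_approval_rate(z):.0%}")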

He said using artificial intelligence to make automated decisions is one example of how advanced modern technology can become extremely consequential without society even realizing it. That, he added, is where the problems start.

“Technology is just like any other tool,” he said. “It can be neutral, but it can also be used for good or bad. Even if a technology is superficially neutral but it’s used to establish a system that is discriminatory and used to sustain that discrimination, then it can be used for the worst.”

This article originally published in the May 3, 2021 print edition of The Louisiana Weekly newspaper.
