Editorial
Without DEI, Can AI Still Be Fair?
One of the ongoing issues with artificial intelligence has been detecting and mitigating biases introduced through the data used to train it.
Examples include recidivism calculators that were biased against African Americans, credit card applications that gave women lower credit limits than comparable men and even image generators that, in an attempt to be more diverse, created images of Black Nazis.
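Audits for this kind of skew often start with a simple group-level comparison of outcomes. The sketch below is a minimal illustration in Python, using entirely made-up loan decisions: it computes per-group approval rates and the disparate impact ratio, a common screening heuristic under which a ratio below 0.80 (the “four-fifths rule”) is treated as a flag for further review. The data and group labels are hypothetical, not any real system’s output.

```python
# Minimal bias-audit sketch (hypothetical data): compare approval rates
# across demographic groups and flag a disparate impact ratio below the
# common "four-fifths" screening threshold.
from collections import defaultdict

# Made-up (model_decision, group) pairs; 1 = approved, 0 = denied.
decisions = [
    (1, "group_a"), (1, "group_a"), (0, "group_a"), (1, "group_a"),
    (1, "group_b"), (0, "group_b"), (0, "group_b"), (0, "group_b"),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for outcome, group in decisions:
    totals[group] += 1
    approvals[group] += outcome

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

for group, rate in sorted(rates.items()):
    print(f"{group}: approval rate {rate:.2f}")
print(f"disparate impact ratio: {ratio:.2f}"
      + ("  <- below 0.80, worth investigating" if ratio < 0.8 else ""))
```

A check this crude only surfaces a disparity; it says nothing about why the gap exists or whether it is justified, which is where the harder mitigation work begins.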
At the same time, in the United States, a series of judicial and political actions has led to corporations and government agencies ending programs associated with diversity, equity and inclusion (DEI). In addition, there’s been a push by some to reduce regulatory limitations on AI, such as President Donald Trump repealing on January 28 an October 2023 executive order from former President Joe Biden.
Titled the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, Biden’s executive order called for the reduction of bias in AI systems. It was one of 78 executive orders Trump said he repealed because of their role in DEI.
In addition, Vice President J.D. Vance spoke at the Paris AI Summit on February 11, saying that Trump’s administration planned to eliminate “excessive regulation” of the AI industry, and that in particular “American AI must be free from ideological bias.” Trump’s AI Action Plan, which is intended to replace Biden’s executive order, has not yet been released, but has received more than 10,000 public comments, according to the White House. It isn’t clear what degree of bias mitigation will be included, or even allowed, by the plan.
The Death of DEI
Even before Trump’s inauguration, the Supreme Court had ruled in June 2023 that race could not be used by universities to diversify their student bodies. That ruling was followed two weeks later by a letter from 13 state Attorneys General. “We write to remind you of your obligations as an employer under federal and state law to refrain from discriminating on the basis of race, whether under the label of ‘diversity, equity, and inclusion’ or otherwise,” the letter began.
The result was that a number of major companies, including Amazon, Google and Meta, ended or pulled back on DEI programs. Moreover, companies are concerned about doing anything that could be associated with DEI, which Christina Blacken, founder and CEO of The New Quo, said “is now a slur.” “People feel afraid to do that work even though it’s really important,” she said.
That said, some companies are coming back to supporting DEI, though they may not call it that, said June Christian, former inclusion & diversity manager at Starbucks, who has written on the relationship between DEI and AI. “‘We don’t care what you call it, DEI or belonging, we need it,’” she reports companies telling her.
Could AI Become More Biased?
But some fear the overall trend could mean the loss of government research funding and corporate initiatives that would have helped mitigate AI bias, perpetuating social inequity and potentially leading to inaccurate results. If developers are not being intentional, bias “is going to get wrapped into technology,” Christian warned.
People have “automation bias,” Blacken agreed. “We believe something automated is more accurate even when it’s not. But if they run with it, the information is wrong and full of stereotypes.”
According to reporting by The New York Times, the National Institutes of Health alone has ended or delayed awarding nearly 2,500 grants. “The agency scoured grants for key words and phrases like ‘transgender,’ ‘misinformation,’ ‘vaccine hesitancy’ and ‘equity,’ ending those focused on certain topics or populations, according to a current N.I.H. program officer, who asked not to be identified for fear of retribution,” the Times wrote.
“There have been a lot of grants that have been cut short because of the words ‘bias’ and ‘diversity’ in their title,” said Leo Anthony Celi, a senior research scientist at the Massachusetts Institute of Technology. “Proposals were automatically rejected when those trigger words appeared. I don’t think we need to look hard for those examples. We ourselves have had grants frozen, still in limbo, because of the crackdown on proposals that might be triggering these ideologies.”
For example, Celi, who is also a medical doctor, said he has had one grant frozen that funded a program teaching health AI to inner-city students. “We haven’t received any funding since November,” he said. “They haven’t told us it’s going to be cut short, but at the same time, they’re not releasing it, either.” Organizations taking part in the program are chipping in to continue it on a shoestring budget so that it doesn’t lose the momentum it has gained over the past three years, he said.
In February, MIT, along with a number of other educational institutions, filed suit in federal court to stop some grant cuts from taking effect.
In the meantime, Celi is attempting to look on the bright side. “The silver lining of this is pushing us to be more creative and imaginative,” he said. For example, he’s working on one project in Brazil that will help AI communicate better in Portuguese rather than requiring people to use English, and which is using local funding. “It’s important so they see it as their project, not an MIT project,” he said. “If they’re the ones responsible for securing the funds and designing the program, it becomes very personal if that fails.”
In the Absence of Regulation, Will Business Step Up?
Meanwhile, researchers and experts are hoping that grant-making organizations and corporations will be persuaded by pragmatism and market forces to continue efforts to reduce bias in AI.
“One of my former students works for Anthropic, and he tells me that ‘what gets measured is what gets improved,’” said David Banks, a statistics professor at Duke University. “If Claude lags GPT-4 in writing Python code, the Anthropic people torque it up for Python. If DALL-E generates images of Black Nazis, then programmers get busy implementing guardrails. Currently, and I think for the foreseeable future, the ‘regulation’ that generative AI companies receive will come from each other and from the marketplace. I don’t think legislation or executive orders are nimble enough to be relevant.”
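Banks’s point that “what gets measured is what gets improved” rests on benchmark harnesses that score model output against fixed tests. As a purely hypothetical illustration, and not Anthropic’s or any vendor’s actual harness, a toy pass-rate benchmark for generated Python functions might look like this:

```python
# Toy pass-rate benchmark (hypothetical): score candidate code snippets
# by the fraction of unit-test cases they pass.

tasks = [
    # (candidate source defining `solve`, list of (input, expected) cases)
    ("def solve(x):\n    return x * 2", [(1, 2), (3, 6)]),
    ("def solve(s):\n    return s[::-1]", [("ab", "ba"), ("xyz", "zyx")]),
]

def pass_rate(source, cases):
    namespace = {}
    exec(source, namespace)          # load the candidate's `solve`
    solve = namespace["solve"]
    passed = sum(1 for arg, want in cases if solve(arg) == want)
    return passed / len(cases)

for i, (source, cases) in enumerate(tasks):
    print(f"task {i}: pass rate {pass_rate(source, cases):.2f}")
```

The market pressure Banks describes operates on exactly these kinds of scores, which is why bias only gets fixed if someone builds a benchmark that measures it.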
“If the goal is to use AI to make decision-making more accurate, in theory eliminating DEI programs should not matter,” said Christopher Slobogin, director of the criminal justice program at Vanderbilt Law School. “If an AI program is wrong about the base rates for credit risks, violence or lack of job qualifications for people of color more often than for white people, AI programmers should work to correct that problem — not out of concern about DEI, but because the program is technologically flawed. However, in practice, ending DEI programs may create an atmosphere in which developers do not make fixing this type of inaccuracy a priority. Even today, it is not always a priority.”
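One measurable version of the inaccuracy Slobogin describes is an error-rate gap: a model that falsely flags members of one group as high risk more often than another. A minimal sketch of that check, assuming labeled outcomes are available and using invented data throughout, might look like this:

```python
# Hypothetical check for an error-rate gap: does the model's false
# positive rate (flagged as high risk, but no bad outcome) differ by group?

# (prediction, actual, group): 1 = high risk / bad outcome, 0 = otherwise.
records = [
    (1, 0, "group_a"), (0, 0, "group_a"), (1, 1, "group_a"), (0, 0, "group_a"),
    (1, 0, "group_b"), (1, 0, "group_b"), (1, 1, "group_b"), (0, 0, "group_b"),
]

def false_positive_rate(rows):
    # Among people with no bad outcome, how often did the model flag them?
    negatives = [pred for pred, actual, _ in rows if actual == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

groups = {g for _, _, g in records}
for group in sorted(groups):
    rows = [r for r in records if r[2] == group]
    print(f"{group}: false positive rate {false_positive_rate(rows):.2f}")
```

Framed this way, the gap is a defect in the product’s accuracy for part of its user base, which is Slobogin’s argument for why fixing it should not depend on any DEI mandate.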
In the end, said Celi, what we want is a good algorithm.
“A good algorithm has to be accurate for the people it’s written for. Rather than framing this as a way of wealth redistribution, which is what the pushback is about, we say ‘it’s about a good product, good science,’ and try to decouple that from this notion of ‘this is looking for bias for the sake of looking for bias.’ We’re looking for the best product we can have leveraging AI technology. We don’t think people will say ‘We don’t want that.’”