The United Nations is moving to confront the growing problem of racial bias in artificial intelligence (AI), warning that flawed AI systems are deepening discrimination against people of African descent worldwide.
Speaking in a video message at the opening of a UN session on AI and human rights, UN High Commissioner for Human Rights Volker Türk said that “solutions to our greatest challenges lie in more unity and greater respect for human rights, not less.”
The session comes amid growing concern from researchers and advocates that AI systems are replicating and amplifying long-standing biases against African-descended populations.
One of the leading voices on the subject is data scientist Russell Lancaster. His recent paper, Artificial Intelligence—From Bias to Opportunities: The Rise of AI and Its Impact on the African Diaspora, warns that while AI has vast potential for economic empowerment, social mobility, and cultural preservation, it also poses significant risks.
“AI systems can be biased and hence reduce these economic opportunities,” Lancaster writes. Biased credit scoring algorithms, for example, “may systematically reject more loans to underprivileged communities.”
Lancaster highlights how the lending platform Upstart used the IBM AI Fairness 360 Toolkit to mitigate such bias, leading to “27% more approvals and 16% lower average interest rates for underserved populations.”
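The article does not detail Upstart's implementation, but AI Fairness 360 is an open-source Python library, and a minimal sketch of the kind of check it enables might look like the following. The loan data, group labels, and thresholds here are hypothetical, for illustration only:

```python
# A minimal sketch, not Upstart's actual pipeline: hypothetical loan data,
# with "race" as the protected attribute (1 = privileged group) and
# "approved" as the favorable outcome.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "income":   [55, 60, 30, 28, 75, 32, 58, 25],
    "race":     [1, 1, 0, 0, 1, 0, 1, 0],
    "approved": [1, 1, 0, 0, 1, 0, 1, 1],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["race"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged, unprivileged = [{"race": 1}], [{"race": 0}]

# Disparate impact: the ratio of favorable-outcome rates between groups.
# A value of 1.0 means parity; well below 1.0 signals bias against the
# unprivileged group.
metric = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before:", metric.disparate_impact())

# Reweighing adjusts instance weights so outcomes become statistically
# independent of the protected attribute before a scoring model is trained.
transformed = Reweighing(
    unprivileged_groups=unprivileged, privileged_groups=privileged
).fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(
    transformed, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after:", metric_after.disparate_impact())
```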
In the healthcare sector, tools such as Microsoft’s InterpretML have been used to improve transparency and fairness in AI decision-making.
When the healthcare company Novartis applied InterpretML, it “made predictions of patients’ risk more accurate and reduced disparities.”
Facial recognition technologies are another area of deep concern. A study by the MIT Media Lab found error rates in commercial facial recognition software as high as “34.7% on darker-skinned women,” compared with “0.8% for lighter-skinned males.”
Such disparities are alarming in law enforcement, where AI is increasingly used in predictive policing and face recognition, technologies that, according to Lancaster, “are directly discriminating against minority communities.”
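The MIT study’s methodology rests on disaggregated evaluation: computing a system’s error rate separately for each demographic subgroup rather than only in aggregate. A minimal illustration with made-up predictions and group labels:

```python
# A minimal illustration with hypothetical data: an error rate that looks
# tolerable in aggregate can hide a large gap between subgroups.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # hypothetical ground truth
y_pred = np.array([0, 1, 0, 1, 0, 1, 0, 0])   # hypothetical model output
group = np.array(["darker_female"] * 4 + ["lighter_male"] * 4)

print(f"Overall error rate: {np.mean(y_true != y_pred):.1%}")
for g in np.unique(group):
    mask = group == g
    print(f"{g}: error rate {np.mean(y_true[mask] != y_pred[mask]):.1%}")
```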
The controversy surrounding prominent AI ethics researcher Timnit Gebru helped bring these issues to global attention. Gebru co-authored the Gender Shades study, which demonstrated “racial and gender biases in facial recognition technologies.”
In 2020, Google fired Gebru after she raised concerns about the risks of large AI language models, particularly regarding “environmental costs, biases, and potential harm to underprivileged communities.”
The move sparked protests from thousands of Google employees and academics calling for greater transparency and ethical responsibility in AI development.
After leaving Google, Gebru co-founded the Distributed AI Research Institute (DAIR), an independent organization conducting AI research that centers on marginalized communities.
DAIR’s mission is to “provide an independent, community-centered space for AI research that is not biased by Big Tech.”
Lancaster’s paper also points to promising technical advances. Microsoft improved facial recognition performance for women, lowering error rates “from 20.8% to 0.8%, and in one test, to 0.3%.”
IBM’s Diversity in Faces dataset has also helped reduce racial bias in facial recognition systems.
Policy efforts are also gaining momentum. The proposed U.S. Algorithmic Accountability Act would require companies to conduct impact assessments to identify and address biases in their AI systems.
The hiring platform HireVue underwent an external audit of its AI-driven assessments, “which helped increase transparency and trust.”
AI tools such as Google’s What-If Tool and Microsoft’s InterpretML are helping developers visualize and correct bias in AI models.
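As an illustration of what such tooling does, here is a minimal sketch using InterpretML’s open-source Python API. The data and feature names are hypothetical, not drawn from any deployment described above:

```python
# A minimal sketch (synthetic data, hypothetical feature names): fitting a
# glass-box model with Microsoft's InterpretML. Its per-feature explanations
# let an auditor see whether a model leans on a sensitive attribute or proxy.
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))  # stand-ins for income, debt ratio, a zip-code proxy
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

ebm = ExplainableBoostingClassifier(
    feature_names=["income", "debt_ratio", "zip_code_proxy"]
)
ebm.fit(X, y)

# Global explanation: each feature's learned contribution to predictions.
# A large score on a proxy variable is a red flag for indirect racial bias.
overall = ebm.explain_global().data()
for name, score in zip(overall["names"], overall["scores"]):
    print(f"{name}: importance {score:.3f}")

# For the interactive "visualize" step the tools above are known for:
# from interpret import show; show(ebm.explain_global())
```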
But technical fixes alone will not solve the problem. “It takes diverse teams, inclusive datasets, and transparent AI systems to create the building blocks of a more just future,” Lancaster writes.
Lancaster also notes the need for greater representation of African-descended communities in AI development.
Gebru is also a co-founder of Black in AI, a project focused on increasing diversity in the field.
Diverse AI teams, Lancaster reports, lead to “higher innovation and lower biases in AI systems.” A McKinsey report found that “companies with more diverse executive teams are 33% more likely to outperform on profitability compared to peers.”
Dr. Rediet Abebe, quoted in Lancaster’s paper, stressed the broader value of diversity in AI: “Diversity in AI is not just about fairness; it’s about bringing in different perspectives that can lead to innovative solutions benefiting everyone.”
Lancaster’s paper stresses that “AI bias has far-reaching and deep implications on marginalized societies,” particularly when used in high-stakes areas such as hiring, healthcare, credit, and policing.
The UN’s focus on this issue reflects growing global recognition that unchecked AI could exacerbate racial inequities.
Findings from the session will inform future UN recommendations on AI and racial justice.
Lancaster concludes that appropriate responses must be deployed to mitigate the biases and inequalities that AI can reinforce.
As AI systems become more embedded in daily life, advocates say the need for accountability, transparency, and inclusive development has never been more urgent.
In Türk’s words: “Solutions to our greatest challenges lie in more unity and greater respect for human rights, not less.”
By: Joshua Narh