
Putting the Needs of Vulnerable Populations First: Collaborating to Address AI Bias

Evan Tachovsky — Former Director & Lead Data Scientist, Innovation, The Rockefeller Foundation
Terrell Seabrooks — Former Program Associate, Innovation, The Rockefeller Foundation

Artificial intelligence (AI) and machine learning are used in myriad ways across the public and private sectors. These tools can help solve a wide range of societal problems, such as preventing homelessness, improving agricultural capacity, or combating pathogens. However, a critical challenge facing these tools is AI bias, an issue that can lead to discriminatory outcomes and deepen disparities for poor and vulnerable communities. To combat the problem, The Rockefeller Foundation has established collaborations with Black in AI, Lacuna Fund, and the Distributed AI Research Institute (DAIR), a new interdisciplinary AI research institute that seeks to proactively mitigate the harms that can arise in the production and deployment of AI tools and practices.

In collaboration with Black in AI, DAIR, and Lacuna Fund, The Rockefeller Foundation seeks to:

  • Expand research opportunities to allow diverse groups to re-evaluate data for bias;
  • Increase visibility and entrepreneurship of underrepresented groups in data science fields;
  • Support data accessibility and tools that benefit populations most impacted by AI bias;
  • Provide resources to data scientists in underserved communities globally to create, expand, and maintain more equitable datasets for machine learning;
  • Build communities and provide resources to Black scholars and engineers who face systemic exclusion from AI institutions.

What is AI Bias?

Two of the most important forms of bias in AI are data bias and societal bias. Data bias arises when an algorithm is trained on data that is itself biased. Societal bias arises when societal norms create blind spots in our thinking. Data scientists review the variables and assumptions in every dataset as part of their work, but societal biases are assumptions that can be overlooked because of social norms, gender norms, or culture. These erroneous assumptions can lead to discriminatory outcomes and exacerbate disparities.

Current advances in the quantitative methods that underlie AI technology generally translate into more positive impacts for wealthy communities and disproportionately negative impacts on poor and vulnerable communities. Wealthier communities benefit because they are better represented in the data: they are more likely to appear in common data repositories, such as credit card lists and health insurance records. AI tools are designed to offer solutions for the populations within the data, which leaves poor and vulnerable communities excluded from the benefits of AI tools.
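To make the mechanism concrete, here is a minimal sketch, not drawn from any of these organizations' work, in which the groups, feature values, and sample sizes are entirely synthetic assumptions. It shows how a model trained on data that underrepresents one group tends to be less accurate for that group:

```python
# Synthetic illustration of data bias: a classifier trained mostly on one
# group performs worse on the group the training data barely covers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Generate synthetic features and outcomes for one group;
    `shift` moves that group's feature distribution."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# Group A dominates the training data; group B is underrepresented.
X_a, y_a = make_group(5000, shift=0.0)
X_b, y_b = make_group(100, shift=1.5)

model = LogisticRegression()
model.fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh samples from each group: accuracy is typically lower
# for the group that was barely present during training.
X_a_test, y_a_test = make_group(1000, shift=0.0)
X_b_test, y_b_test = make_group(1000, shift=1.5)
print("well-represented group accuracy:", model.score(X_a_test, y_a_test))
print("underrepresented group accuracy:", model.score(X_b_test, y_b_test))
```

Nothing about the algorithm itself is malicious; the accuracy gap comes entirely from who is, and is not, well represented in the training data.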

Who Benefits from AI, and How Can We Remediate Its Impact on the Underserved?

This issue cuts across a wide range of problems, from preventing homelessness to improving agricultural yields. At the individual level, the impact ranges from lowered credit ratings to denied residential leases to any decision that uses algorithms to determine access and eligibility. The result is that poor communities may have less access to programs and services of potential benefit.

Establishing a New Research Institute that Puts Vulnerable Populations First

With support from The Rockefeller Foundation, the Distributed AI Research Institute (DAIR) is leading the charge against algorithmic bias. Founded as an independent, community-rooted research space, DAIR is committed to producing interdisciplinary research on AI. Its research is rooted in the belief that AI is not inevitable, that its harms are preventable, and that, when its production and deployment include diverse perspectives and deliberate processes, AI can be beneficial.

Biased data in AI tools and machine learning is an issue many organizations are confronting. To address the problem, research teams set out to reevaluate data and assess it for the influence of bias. But a common concern is the diversity gap on many AI research teams. Another pitfall arises when an outside influence, such as a large technology company, pushes team members toward groupthink or a top-down mentality.

DAIR seeks to mitigate this diversity gap through its core objectives:

  • foster research that analyzes its end goal and potential risks and harms from the start;
  • ensure its research pool includes members from many different backgrounds who can participate while embedded in their communities;
  • communicate the impact of members’ work to impacted communities in straightforward and practical terms, rather than exclusively through academic research papers.

The DAIR Institute is led by Dr. Timnit Gebru, who has a track record of building impactful institutions serving Black people around the world, such as Black in AI and AddisCoder.

The next generation of AI leaders needs to be more diverse. While mitigating bias and reducing its harms is critical, it is equally important to strengthen the capacity of the people building these tools by increasing the diversity of practitioners in the AI field. The Rockefeller Foundation is supporting the creation of a new network of Black scholars and engineers who face systemic exclusion from existing AI institutions. Supporting the growth of underrepresented populations in AI is not only an important step for AI equity, but also for all the impacted fields of study: food security, poverty, education, climate change, and more.

Removing Barriers & Supporting AI Innovators

With support from The Rockefeller Foundation, Black in AI is supporting AI innovators by focusing on removing barriers for underrepresented groups. Its primary workstreams include:

  • providing aid in professionalization and preparation for the workforce;
  • supporting programmatic work to expand opportunity and access to the study of artificial intelligence for underrepresented groups, such as women and people of color;
  • catalyzing an Entrepreneurship Program to help launch diverse, innovative entrepreneurs in the fields of Artificial Intelligence and Machine Learning.

Black in AI is a rapidly expanding start-up organization. What started a few years ago as an email discussion among a few individuals has grown into a global movement of up to 5,000 members in more than 50 countries. This rapid growth has allowed the organization to have an immediate and tangible impact on young professionals around the world. In 2021, Black in AI mentored 95 students during the graduate admissions cycle and provided support to approximately 250 students.

Closing Data Gaps and Expanding Accessibility

Companies, governments, universities, and civil society organizations acquire private and public datasets for researchers to use to train and build AI tools. However, in many cases, these datasets are missing key information or are not representative of affected populations, leading to bias and decreased accuracy. And globally, data scientists in countries with less infrastructure face greater barriers to accessing complete and representative training datasets.
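As a loose illustration of what "not representative" can mean in practice, the hypothetical sketch below compares subgroup shares in a dataset against reference population shares to flag coverage gaps before a model is trained. It is a made-up audit, not Lacuna Fund's methodology, and the group names and percentages are placeholders.

```python
# Hypothetical representativeness audit: flag subgroups whose share of a
# training dataset falls well short of their share of the population.
from typing import Dict

def coverage_gaps(dataset_shares: Dict[str, float],
                  population_shares: Dict[str, float],
                  tolerance: float = 0.05) -> Dict[str, float]:
    """Return subgroups whose dataset share falls short of their population
    share by more than `tolerance` (expressed as a fraction of the whole)."""
    gaps = {}
    for group, pop_share in population_shares.items():
        shortfall = pop_share - dataset_shares.get(group, 0.0)
        if shortfall > tolerance:
            gaps[group] = shortfall
    return gaps

# Placeholder numbers purely for illustration.
dataset = {"group_a": 0.72, "group_b": 0.20, "group_c": 0.08}
population = {"group_a": 0.55, "group_b": 0.25, "group_c": 0.20}

print(coverage_gaps(dataset, population))
# Here group_c's dataset share falls roughly 0.12 short of its population
# share, signaling the kind of gap that targeted data collection can close.
```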

The Rockefeller Foundation is supporting Lacuna Fund as it seeks to fill these gaps. A collaborative effort to fund data for social impact around the world, Lacuna Fund provides resources to data scientists, researchers, and social entrepreneurs in low- and middle-income contexts to create new datasets or expand existing datasets in the domains of agriculture, language, health, and climate. This support allows data scientists to address urgent problems in their communities.

Filling these data gaps has shown great potential to drive improvements across the Fund's focus domains of agriculture, language, health, and climate.

The success of the Fund’s initiatives not only highlights the power of inclusive data, but also illustrates how data scientists are beginning to reduce AI bias and make the advances offered by AI accessible to all.

Black in AI, DAIR, and Lacuna Fund are just three of the organizations doing the important work of integrating ethical and equitable practices into society's quest to adopt new and innovative technologies. They boldly confront inequities in the status quo while helping to usher in a new generation of young professionals who have access to quality tools and datasets that are more representative of society as a whole. Their work is critical to ensuring that in years to come, the benefits of AI are reaped by all rather than just some.