
Taking a Human Rights Approach to AI Governance

Evan Tachovsky — Former Director & Lead Data Scientist, Innovation, The Rockefeller Foundation
Hunter Goldman — Former Director, Innovation, The Rockefeller Foundation
Photo by Christina Morillo from Pexels.

In response to concerns about the impact of AI on society, companies and civil society organizations have raced to write ethical frameworks to guide the development and use of these powerful new tools. Researchers at ETH Zurich’s Health Ethics & Policy Lab found that over 80 distinct AI ethics frameworks have been proposed in just the last five years. While well-intentioned, these frameworks don’t scale, are often duplicative, and fail to provide a legal foundation for effective regulation.

In October 2019, as part of our AI Month at Bellagio, we partnered with machine learning startup Element AI and the Mozilla Foundation to convene a workshop to evaluate a different approach: using human rights frameworks to govern AI.

AI governance grounded in human rights has a number of advantages. First, human rights—as codified in the Universal Declaration of Human Rights—provide an established, global construct that is widely acknowledged by governments, businesses, and civil society. Second, this framework offers a legal basis for more specific regulation in a way that ethical frameworks do not. And third, there are established procedures and norms for assessing the human rights impact of business operations and remediating harm.

Despite these theoretical advantages, there is little clarity about what a human rights approach to governing AI would look like in practice. At the workshop, a group of legal, technology, and policy experts from around the world wrestled with questions we need to answer in order to operationalize a human rights approach to governing AI. These included:

  • How do human rights frameworks intersect with existing national regulation or company-backed ethical frameworks? Where are these approaches complementary, and where are they contradictory?
  • What would human rights impact assessments look like for AI products?
  • Do regulatory bodies understand both AI and human rights frameworks sufficiently to develop smart regulation? If not, are there opportunities for education?

Based on the discussion, Element AI developed a set of practical recommendations for governments and companies interested in using a human rights approach to govern AI. Their landmark report, released last week at the 2019 Internet Governance Forum, makes recommendations for the public and private sectors, including:

  • Governments should (1) make human rights impact assessments a requirement for public sector procurement of AI tools and (2) develop new Centers of Expertise to help regulators understand how AI works.
  • Companies should (1) integrate human rights impact assessments throughout the AI development lifecycle and (2) prioritize the development of explainable AI technology that helps us identify and end harm.
  • Investors should (1) seek out companies that build rights-respecting AI and (2) fund upstream research on explainability and human-centered design for AI.

For more details, see the full report here.

At The Rockefeller Foundation, we seek to maximize the benefits of AI for society while mitigating the harm. Shifting from a patchwork of ethical AI frameworks toward a coherent, human rights-based approach is an important step in this process. We are proud to work with Element AI and the Mozilla Foundation to advance this work.
