From the Archives

Designing a Future for AI in Society: The AI+1 Report

In 2019, The Rockefeller Foundation convened a group of inspiring leaders to discuss the ethics of Artificial Intelligence (AI). The Foundation was ahead of the curve in mapping the implications of AI’s rapid development for society. Many participants left the gathering with new perspectives, connections, and ideas, and they returned to their respective sectors – data science, policy, law, education, and health – energized. The convening also produced a report, AI+1, that highlighted the importance of prioritizing the challenges of AI governance and the AI/human relationship.

Covid-19 hit only a few months after the convening, and priorities around the world quickly changed. Data scientists pivoted towards facilitating solutions to problems such as access to benefits and emergency services. As Perry Hewitt, Chief Marketing and Product Officer at data.org, tells The Rockefeller Foundation: “I think the pandemic showed us the power of data and AI through an optimistic lens. For example, we used AI in the design of vaccines – specifically to understand and accelerate the analysis of the shapes of the molecules that are the medicines themselves.” This means that the next time there is a pandemic, researchers will have a head start on developing further vaccines. “We have the infrastructure there,” Hewitt continues. “A ready, free, open-source process that is supported so that every country in the world has access to it.”

In this season’s bulletin, we’ve explored the ethics of AI with participants of the 2019 convening, including Ronaldo Lemos, Mary Gray, Tim O’Reilly, Danil Mikhailov, and Marietje Schaake, giving us an opportunity to revisit the topic and explore new developments. As ever, we discover what themes, ideas, concerns, and hopes have occupied attendees’ work in AI and beyond, both during and since their time at Bellagio. In particular, we discuss the emergence of large language models and the opportunities for a fairer future that this new technology presents. With AI investments dominating the tech world during the first half of 2023, many leaders tell us there has never been greater urgency around this subject. 

“Before releasing AI models to the public, they need to be tested and considered through the lens of how they impact people’s lives, practices, and communities,” explains entrepreneur and human rights advocate Vilas Dhar, President of the Patrick J. McGovern Foundation. “If AI is going to be the dominant technological paradigm of the future, we need to move at full speed now to think about how we regulate these tools.”

Prof. Gillian Hadfield of the University of Toronto compares AI to other everyday tools, such as cars, that are indispensable to our lives but carry recognized dangers, and are therefore regulated appropriately. “We need to put the infrastructure in place that gives us a lever that can be pulled if we discover bad behavior, and want to conduct a safety investigation,” she says. Hadfield proposes creating national registry offices to undertake this task.

Globally, reactions to AI’s rapid development range from euphoric embrace of the freedom it will, purportedly, give us, to skepticism and even fear of what might ensue without proper regulatory frameworks (some countries are already drafting AI regulations, but none have yet been adopted). One thing is clear: more dialogue is needed, and with it a set of carefully considered actions. Such actions would enable this rapidly developing technology to be deployed in ways that drive our collective goal of making opportunity universal and sustainable.

Read the AI+1 report here.