Bellagio Conversations in AI

Ronaldo Lemos on the Value of Divergent Approaches to Governing AI

Governments and global bodies around the world are exploring how to regulate and encourage the development of AI. From his position as director of the Institute for Technology & Society of Rio de Janeiro, Ronaldo Lemos has seen competing legal visions begin to form. He believes that a multipolar path forward will be the most productive for all.

Ronaldo’s Bellagio Reflections: “I loved being at Bellagio because it’s the ultimate anti-distraction place in the world. We’re in an era where disconnection is no longer an option – we’re usually so completely overwhelmed with messages and people and machines trying to capture our attention – but it’s a place in which you can regain control over your time.

“It makes me think of the Bill Gates documentary, Inside Bill’s Brain: Decoding Bill Gates. He barely touches a cell phone; he’s never at a computer. He goes around with a basket full of paper books. To be disconnected these days, you need to be a billionaire. We’re not Bill Gates, but Bellagio, in a sense, permits us to become Bill Gates for just a little while, and we can use that opportunity to think deeper and to interact with other human beings.”

Here, Ronaldo discusses his belief that AI must reflect the diversity of human intelligence, culture, and society, both in its development and its regulation.

My perspective on AI and large language models (LLMs) comes from the developing world, and my big concern is what the impact of these technologies will be on the global majority. I fear that when we think about jobs being lost, people having to reskill, and a decrease in the value of labor, those impacts will hit the developing world first, rather than more developed countries. It won’t be workers in New York or Chicago or San Francisco who will be displaced first; it’ll be workers in São Paulo, Nairobi, Maputo, and Kerala.

My second big concern is that there’s this idea of “artificial intelligence,” but what is intelligence? Are we settling on one specific type of human intelligence? What about the diversity that we see in terms of how human beings approach the world and choose forms of living? I immediately think about the original indigenous nations of Brazil, where I come from. There are so many of them, each with different ideas about what intelligence is. Those perspectives are not derived from human beings alone, but from nature itself, and they include concepts such as the protection of the environment. I fear that a lot of people think about this idea of a “singularity” in terms of artificial intelligence becoming self-conscious, and a kind of general intelligence emerging. While I think this is an interesting debate, I’m worried about another type of singularity: one form of intelligence that becomes the standard for all of us.

In my view, this is problematic. What I would like to see is not a singularity, but a multiplicity of ideas on intelligence.

“If we settle for one specific form of intelligence, we lose so much. We will be neglecting so many other possibilities, and so many other ways of being in the world, that will all be completely ignored by this one-size-fits-all artificial intelligence.”
– Ronaldo Lemos, Lawyer, Director of the Institute for Technology & Society of Rio de Janeiro, Professor at Rio de Janeiro State University's Law School

My perspective on this issue is global. I live and work in Brazil, but I also teach at Columbia University in New York and Schwarzman College in Beijing. From the perspective of the U.S., there have been some interesting debates about this idea of multiplicity, with some scholars proposing forms of AI very different from the kind we mean when we talk about ChatGPT, including some that take into consideration, for instance, ideas of nature, or art, or religious experience.

Meanwhile, China recently pioneered a set of rules for governing LLMs and generative AI models, and its foundations for how AI models should be built are different from those in the West. That, to me, was intriguing because AI is not only a technological problem but also an institutional one, and how it develops has as much to do with technology as with how human institutions will regulate, embrace, or curb it. It’s illuminating to see how different places are applying these different perspectives to AI, and I’d like to see more of that variety because I think we’re in a moment in which experimentation is crucial. We have to see what works and what doesn’t, and we have to keep an eye on each and every place that’s proposing any actions regarding AI.

My deepest hope is that these new AI models will expand opportunities for participation in economic and social development. AI can democratize skills and professions that are not currently accessible to many people. However, it might also reduce the number of jobs and lower average salaries, increasing inequality and concentrating capital – perhaps to unprecedented levels. It’s also likely that soon most of the content we consume from other people will be produced by AI. Maybe in a year or two, 90% of the emails you receive will be drafted by AI, and maybe 95% of the Instagram posts we see will be made by AI. I’ve already seen subcultures online where people get together and watch AI video content that they can interact with, just as intently as they watched TV a few years ago, or watch TikTok now. It’s an interesting metaphor for the future because I think we might be about to be hypnotized by AI. We will be inundated by AI-produced content, and we will be the conduits through which this content flows.

In all this, I think the public sector has two roles. The first is to come up with wise regulations. The second is to provoke a broader response to the issue of AI. There’s very little that nation-states can do alone, as we’ve seen with other global issues such as taxation and the problem of disinformation.

AI is still very new, and we haven’t yet seen how different kinds of institutions will embrace this change. What we have so far is only anecdotal, or isolated experiments. We can discuss reliability and accuracy, but I worry we’ll still fall short of what we need in the ethical debate. After all, an ethical AI could still mean you lose your job. What will we do if, even after ensuring AI is extremely ethical in its actions, its wider social impact is to decrease the value of human work? Should we start thinking about universal basic income, reskilling, and investing in the education of entirely new professions? These are all issues for which we don’t have a global standard.

What I would like to see, and start building, is a model for how we transform institutions so that they’re prepared to use these tools to promote equality. The institutional response will be what will matter in the end, as it will determine who benefits from these new technologies.


Ronaldo Lemos is a lawyer specializing in technology, intellectual property, media and public policy, with 20 years of experience in the private and public sectors. He is the director of the Institute for Technology & Society of Rio de Janeiro and a professor at the Rio de Janeiro State University's Law School. He has sat on Meta’s Oversight Board since May 2020. He writes and hosts the documentary series Expresso Futuro, which covers technology issues in the developing world, including Africa, Brazil, and China. He attended Bellagio for a convening in 2017 titled “Located Internet of Things,” and returned for another convening in 2019 titled “Designing a Future for AI in Society.”

You can follow him on Twitter and watch Expresso Futuro on YouTube.