Bellagio Conversations in AI

Vilas Dhar on Transforming Power through AI

In his position as President of the Patrick J. McGovern Foundation, Vilas Dhar works with policy leaders, civil society organizations, and technologists to create new regulatory and community-level infrastructure for a human-centered AI future.

Vilas’ Bellagio Reflections: “My work became so much richer and more fruitful thanks to the community at Bellagio. What had been my individual pursuit became one part of a global movement for a more just, AI-enabled future that involves people across the planet.

“I realized, as we talked, that we need to consider the full breadth of human experiences and perspectives – of corporate AI researchers, policymakers, rights-based practitioners, and even whole communities. Those conversations helped me see that this moment requires broad representation and leadership from diverse global viewpoints, and that realization became the core of my work post-Bellagio.”

Here, Vilas asks how we can prevent AI from cementing current global inequities, and explores how AI will transform power in a society whose civic institutions were built for a pre-AI world.

I’ve spent most of my life in a place of tension between two foundational ideas. The first is that technology can make the world a better place; it can inspire economic opportunity, build new centers of power, let people harness their creativity and find connection, and spread compassion and joy. The second is that, despite this potential, we still live in a world where almost 3 billion people are offline, and many lack access to the benefits of connected technologies.

By 2019, when I came to Bellagio, I’d seen the beginnings of a major transformation of systemic values through AI and data science. Since then, there has been a monumental – and ongoing – transformation of power too. Historically, power belonged to those who held capital – controlling capital meant controlling technology – but now we can democratize not just access to, but also ownership of, these new technologies. And by “ownership,” I mean allowing people everywhere – from my grandfather’s village in India to the rural Illinois town where I grew up – to not only develop and use these technologies, but also influence how they’re governed.

I asked myself: What can a civil society institution or nonprofit do to leverage this moment in which technology is transforming both power and vulnerability? What can we do to build a more equitable, community-led future? While we work towards that goal at the foundation, our broader work is the pursuit of human dignity and community empowerment through technological and societal transformation.

There were two things that I sought from my Bellagio experience. I wanted to tackle the question of whether tech-driven societal transformation was happening and, if so, how we could shape it in line with our shared human values. I believed it was a critical moment for humanity with regard to AI, and my cohort helped me develop confidence in my hypothesis, which I was able to validate and extend. I was then able to define specific, desired goals. The first is that we can’t only involve tech-focused foundations; we have to include the civil society organizations working on the front lines that need to understand how AI will transform their fields. This has guided how my foundation allocates capital and builds partnerships. The second is that money alone can’t solve the problem; we need to build more capacity to support the use of data and AI in the civil sector. In part, as AI literacy becomes a defining skill for the future, that means challenging a status quo that accepts the digital divide as inevitable. We’ve built a team of data scientists and AI professionals who work directly with nonprofits and the communities they support.

  • AI will augment human interests in ways that we can’t even envision yet, nor have the language to talk about.
    Vilas Dhar
    President and Trustee of the Patrick J. McGovern Foundation

However, I fear that it will also cement long-standing inequalities. Many of us will find its development and infrastructure too complex and disconnected from our daily lives, and will withdraw from public discourse and decision-making altogether. Meanwhile, those who already have power over AI – and our wider society – will monopolize it. When that happens, we’ll have a real problem as a society – and as a planet.

How do we prevent this from happening? We can start by making sure that people have agency to shape how these technologies are created and deployed. This means building representative workforces, with members from underrepresented and marginalized communities, who can create these technologies, not just use them. But not everyone has to be an AI expert. Rather, we need a broad spread of digital literacy so that every person on the planet understands how AI affects our lives, what kinds of opportunities are available, and how to access support. This work includes creating a shared language to talk about AI and digital transformation, and investing in building the resources and technologies that serve those public needs.

Our leaders must also develop awareness of these key issues as they draft the ethical frameworks of AI governance. I’m a realist, in the sense that I believe our existing, default behaviors aren’t going to get us where we need to be. Instead, we need to create entirely new intentions and mechanisms for creating and regulating these technologies. Those in power need to be properly equipped to make the right short-term policies; at the foundation, we invest in educating lawmakers, congresspeople, and international multi-stakeholder groups like UNESCO and the OECD. We also work very closely with several companies to promulgate new principles on responsible AI, so that they’re already starting from a place of ethical design.

In short, every nation, sector, industry, and individual has a role to play in shaping our shared AI future. Many of our institutions, practices, and government models today were born in a world without AI. Instead of trying to predict every way that AI might generate new existential risks, we need to re-envision governance in the 21st century, to tackle the challenges while still harnessing the opportunities. We need to make courageous, intentional decisions around what kind of world we want to live in today, and in the future.

Our current era is one of significant transformation – of power, of governance, of community, and of our day-to-day lives. To alter its trajectory towards human dignity and equity, we need aligned, collective action. Yet the biggest story of all is how urgently this needs to happen. Five years ago we weren’t talking about AI; today, it’s all we talk about. In another five years, AI might be the dominant technological paradigm. We need to develop a robust plan to hold these tools, and their creators, to account – quickly.


Explore more

Vilas Dhar is President and Trustee of the Patrick J. McGovern Foundation, which bridges AI, data science, and social impact. He serves as the U.S. Government’s Nominated Expert to the Global Partnership on AI; on the Global Futures Council on AI at the World Economic Forum; as a member of the advisory council at the Stanford Institute for Human-Centered Artificial Intelligence (HAI); on the boards of directors of AccessLex and the Network of Engaged International Donors; and as a trustee of the Christensen Fund. Vilas participated in the Bellagio convening “AI + Philanthropy: Better Giving in the Data Age” in 2019 and attended the convening “Towards a Theory of AI Practice” in 2022.

To find out more about Vilas’ work, you can explore the Patrick J. McGovern Foundation or follow him on Twitter.
