Bellagio Conversations in AI

Gillian Hadfield on the Democratic Deficit in AI

The pace of AI development is rapid and disorienting – for both the public and those responsible for regulating it. Gillian Hadfield is the director of the Schwartz Reisman Institute for Technology and Society, and her perspective, grounded in years of exploring democratic technology governance in a globalized world, is that the legal response to AI must move with matching urgency and speed.

Gillian’s Bellagio Reflections: “So much has been happening so fast in this domain, and the amazing part of Bellagio was the capacity to think, read, and collect the things I had been writing off the side of my desk for a few years. I was revising a paper proposing a new approach to governing AI, and with the opportunity to do that kind of thinking, it became a different paper.

“Everybody there was deeply connected to, and constantly thinking about, what they wanted to contribute to the betterment of mankind. That question kept coming up: ‘Is this where my highest and best value to the world is?’ It was quite wonderful to be among a group of people thinking like that.”

Here, Gillian discusses the democratic deficit in the process of deciding how AI gets built, who gets to build it, and how government institutions should respond.

We’ve invented a technology that is likely to remake the way we do everything – the way we interact, the way our economies run, and the way our politics and our regulatory systems operate. Those of us in my field need to think about how we manage the governance challenge of introducing safe, responsible AI into society in a way that captures all of its possible benefits, while not, in the process, breaking other key social capabilities like regulating the economy, achieving equality, or holding democratic elections. We must recognize just how transformative this moment of rapidly developing technology is, but also how demanding it is of our leaders, who have to be creative, purposeful, intentional – and fast – in acting now to design the regulatory infrastructure we need.

In a piece I wrote for Microsoft Unlocked, I used the analogy of Striding Edge, a steep, narrow hiking path in England’s Lake District that my dad liked to hike. We’re headed to a peak with AI, but it’s a narrow ridge, and in conditions of high winds or low visibility we could fall off the side. AI could become a tool used only by the powerful to oppress those without power. It could break our political and economic systems. I think we must recognize that we are in an extraordinary moment in history, having invented another form of intelligence that can act autonomously. The future is, of course, hard to predict, but I believe that what we must do now is try to predict human behavior. We have to ask ourselves: “How will we humans respond? How will our politics respond? How will our regulatory systems respond?”

Right now, there’s exciting work happening in what’s known as “multi-agent reinforcement learning” – essentially, simulating AI “agents” in human-like environments. Imagine a social dilemma that humans struggle with, like making sure we don’t overfish rivers, or cleaning up pollution – activities that affect public welfare, but where each individual would rather spend their time doing something else. Over the last five years, teams have used these models to study those kinds of problems and to suggest how we might think about new solutions. As humans, we have certain processes – meetings, religions, legislatures, and courts – that we collectively use to define and agree on the rules for our societies, and my lab has been researching how to simulate those specifically. Extending that framework to AI, within the wider world of AI governance, is one of the things I find most exciting about this line of research.
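To make that setup concrete, below is a minimal sketch of such a social dilemma: a handful of independent Q-learning agents harvesting from a shared, regrowing fish stock. Everything in it – the agent count, the reward shape, the parameters – is invented for illustration and does not reflect any particular lab’s models.

```python
import random

# Illustrative common-pool resource dilemma with independent Q-learning
# agents. All names, parameters, and dynamics here are hypothetical.

N_AGENTS = 4
ACTIONS = [0, 1]                 # 0 = restrain (harvest 1 unit), 1 = overfish (harvest 3)
HARVEST = {0: 1.0, 1: 3.0}
CAPACITY = 100.0                 # carrying capacity of the shared stock
GROWTH = 0.25                    # logistic regrowth rate
EPISODES, STEPS = 2000, 50
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def bucket(stock):
    """Discretize the continuous stock level into 5 coarse states."""
    return min(int(stock // 20), 4)

# One Q-table per agent: Q[agent][state][action]
Q = [[[0.0, 0.0] for _ in range(5)] for _ in range(N_AGENTS)]

for _ in range(EPISODES):
    stock = CAPACITY
    for _ in range(STEPS):
        state = bucket(stock)
        # Each agent picks an action epsilon-greedily from its own Q-table.
        acts = [random.choice(ACTIONS) if random.random() < EPS
                else max(ACTIONS, key=lambda a: Q[i][state][a])
                for i in range(N_AGENTS)]
        demand = sum(HARVEST[a] for a in acts)
        caught = min(demand, stock)                        # can't take more than exists
        stock -= caught
        stock += GROWTH * stock * (1 - stock / CAPACITY)   # logistic regrowth
        next_state = bucket(stock)
        for i, a in enumerate(acts):
            # Reward: the agent's share of the actual catch; collapse hurts everyone.
            reward = HARVEST[a] * (caught / demand) if demand else 0.0
            best_next = max(Q[i][next_state])
            Q[i][state][a] += ALPHA * (reward + GAMMA * best_next - Q[i][state][a])
```

Run as-is, independent learners in a setup like this typically drift toward over-harvesting until the stock collapses – which is exactly why simulating the norm-setting processes layered on top of such environments (sanctions, courts, legislatures) is the interesting governance question.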

However, new developments with large language models (LLMs) have changed the AI conversation. There’s an increased urgency, and we’re starting to see much more creative thinking and attention applied to the challenges of AI, but that’s because the scale and speed of those challenges have become so much greater.

One thing that needs to change is how we tend to frame all of this as being about “ethics.” People I speak to have tended to focus on specific process issues like training data scientists in ethical behavior, or getting companies to sign up to codes of conduct. However, my view as a lawyer, legal scholar, and economist is that we need to be talking about regulation first and foremost.

“We need to be talking about rules, and those rules will inevitably be informed by our ethics.”

Gillian Hadfield, Professor of Law and Strategic Management; Schwartz Reisman Chair in Technology and Society at the University of Toronto; Director of the Schwartz Reisman Institute for Technology and Society

At a minimum, we should have national registry offices so that we know who is building a powerful model at any moment. Historically, scientific advances usually took place in public sector organizations like universities, in corporate R&D departments, or through government research, and the science behind those advances was published. Everybody could study new chemistry or new physics. But these AI models, like many modern technologies, are being built almost entirely in secret inside private technology companies that are protected by a wall of intellectual property law – which we created.

We need a registration scheme that requires disclosure of that kind of research. We could require that any company that wants to do business within a country must be registered. We already have a registration system for cars, after all, because we recognize their dangers. We need to put in place the infrastructure that gives us a lever to pull if we discover bad behavior and want to conduct a safety investigation. Based on what we find, we could then make a decision like, “Don’t sell that technology to bad actor X.” Once that system is in place, there are interesting discussions to be had around the licensing and testing regimes that make sure models are safe at the point of production.
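As a thought experiment, a single registry entry might look something like the sketch below. Every field is a hypothetical illustration of the kind of disclosure such a scheme could require; it does not describe any actual or proposed registry.

```python
from dataclasses import dataclass, field

# Purely hypothetical sketch of one entry in a national AI model registry.
# Field names and values are invented for illustration only.

@dataclass
class ModelRegistration:
    developer: str                    # legal entity accountable for the model
    model_name: str
    jurisdiction: str                 # country where the registration is filed
    training_compute_flops: float     # rough scale of the training run
    intended_uses: list[str] = field(default_factory=list)
    safety_evaluations: list[str] = field(default_factory=list)  # tests run pre-release
    point_of_contact: str = ""        # who regulators call during an investigation

# Example: registering a hypothetical model.
entry = ModelRegistration(
    developer="Example AI Corp.",
    model_name="example-model-v1",
    jurisdiction="CA",
    training_compute_flops=1e24,
    intended_uses=["customer support", "document summarization"],
    safety_evaluations=["red-team report filed 2024-01"],
    point_of_contact="compliance@example.com",
)
```

The substance of the proposal is the lever, not the schema: once an entry like this exists, a regulator has somewhere to start a safety investigation and someone to hold accountable.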

The reality is that AI is already being governed right now, but only inside the companies developing it. That’s not democratic, and our democratic institutions should be the ones making the choices about what paths we take as humans. I would call for some very dramatic rethinking about the roles of corporations and governments in society. We still want to harness the profit motive, but we’re watching a world being built by corporate actors, with engineers operating under those incentives. I don’t criticize them for that, and I certainly don’t think we can solve it by just wagging our finger and saying, “Be more ethical.” Instead, we must restructure our institutions in response.

The key debate comes down to this: Who decides what we build and how we build it? These are questions that need to be securely rooted in our democratic institutions and processes.


Explore more

Gillian Hadfield is a professor of law and strategic management. She is the inaugural Schwartz Reisman Chair in Technology and Society at the University of Toronto Faculty of Law, and director of the Schwartz Reisman Institute for Technology and Society. Previously, she was the Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics at the University of Southern California (USC). At USC, Hadfield directed the Southern California Innovation Project and the USC Center for Law, Economics and Organization. Gillian attended a residency at the Bellagio Center in 2022 with a project titled “Reinventing global governance for AI.”

To find out more about Gillian’s work, you can visit her faculty page and her website, or follow her on Twitter. You can also learn more about the work of the Schwartz Reisman Institute.
