During her time in the European Parliament, Dutch politician Marietje Schaake was heavily involved in E.U. efforts to regulate the technology sector in ways that reined in its worst impulses while still preserving the openness of the web. Now, in her post-political life as International Policy Director at Stanford University’s Cyber Policy Center, she has come to see the unchecked power of AI companies as a threat to democracy itself.
Marietje’s Bellagio Reflections: “At that time in my life I was in a very reflective mode. It’s hard to explain what it means to step down from 10 years of political office, but it’s kind of like falling out of a washing machine. It’s disorienting. I was just happy that I had time for something wonderful like this, which I hadn’t had during my time in public service. I was able to reflect and learn and listen, and I didn’t just have to be talking, because, as a politician, you’re often the one people come to hear speaking.
“I met the economist Mariana Mazzucato there, and we ended up writing a paper about AI governance together. I’m sure that would not have happened had it not been for that Bellagio meeting.”
Here, Marietje discusses the belated realization among policymakers on both sides of the Atlantic that digital technologies do not necessarily support democratic governance.
At the time of the AI+1 convening at Bellagio, I had just stepped down after serving as a Member of the European Parliament for 10 years. One of my focus areas had been technology. Before politics, I studied new media at the University of Amsterdam, and while in Parliament, I attended hacker conferences like those of the Chaos Computer Club to keep up with the latest developments. But when I first entered the Parliament in 2009, technology was a very marginal subject – and hardly political. A lot of people saw technology as synonymous with social media, and social media was a thing that their kids or grandkids used. People were maybe aware of Barack Obama’s use of data-driven campaigning in 2008, but in Europe we felt like we were far behind, which we probably were. Gradually, the focus on the power of data and corporate “algorithms” deepened – and of course, while we talk about “AI” now, we’re still really talking about algorithms. Digital technology wasn’t yet perceived as so consequential, or so systemic.
That changed with the Arab Spring, when the political nature and impact of digitization became clearer. Social media platforms – especially Twitter and Facebook – were used for popular mobilization and documenting human rights abuses, but also for surveillance and censorship. And even though we weren’t thinking of the algorithms on those platforms as “AI” back then, we now know how much the companies that develop AI have benefited from the troves of data we all share online. In many ways, it’s all connected.
I wanted to continue being a bridge builder between the worlds of politics, policy, and technology, because so many questions around AI – geopolitics, democratic governance, potential harms – are so pertinent. When I started at Stanford, we aspired to build the first public policy school on the U.S. West Coast focused on technology governance. In a region so dense with technology companies, starting out as a critic of outsized corporate power meant I was very much swimming against the current.
There was long an assumption in the U.S. that technologies would contribute to the advancement of democracy – but they didn’t.
Marietje Schaake, International Policy Director of Stanford University’s Cyber Policy Center and International Policy Fellow at Stanford’s Institute for Human-Centered Artificial Intelligence
The Russian interference in the 2016 election began to change that thinking, but the watershed was January 6, 2021, with the storming of the U.S. Capitol. In the E.U., we were already working on laws to counterbalance the outsized power of technology companies, but because these are U.S.-based companies, legislation – or the absence thereof – in America has strong ripple effects around the world.
With AI specifically, this new focus on its potential harms is also shaped by the belated recognition of the harms of social media. That’s frontloaded a lot of concerns, such as disinformation, discrimination, and automation. However, the single greatest challenge in terms of mitigating those harms is the concentration of corporate power. That single issue, with companies holding vast data sets and computing power, is the cause of so many second-tier problems. It prevents academics, journalists, and civil society leaders from independently investigating the workings of AI systems.
When I talk to people still in government about this, my general principle is that access to information about AI systems is crucial, whether it’s access for academic researchers or for regulators. I regularly speak with engineers within these companies, and a number of them have told me that even they struggle to understand how these products work. The unpredictability of AI is unprecedented, and we must be able to probe these systems to develop a better understanding that will allow us to hold their producers accountable.
From a regulatory perspective, AI applications are difficult to grab hold of. The challenges are so individualized, so fluid, and so proprietary. What kind of policies do we need around AI? Are our existing policies well-enforced? I also think that the recent calls from figures like Sam Altman for regulation of the AI industry by the E.U. are calculated, and not necessarily genuine. As soon as there’s a tangible regulatory proposal on the table, he’ll announce that OpenAI is leaving Europe because of over-regulation. We need to force both ourselves, and these CEOs in particular, to be specific, because right now “regulation” essentially means nothing. Regulation is a process that can lead to an endless number of different destinations. I think the tendency to speak about it as something singular shows how the discussion is not as sophisticated as it should be. The lack of a well-informed, common understanding of what AI is and how it works prevents debate about the guardrails we want it to have. It doesn’t help the advancement of public policy, and therefore it doesn’t help democracy. Meanwhile, these companies are racing ahead into the new realities created by their products and services.
It’s become increasingly clear that the whole field of technology policy is Western-centric. People look a lot to Brussels and Washington, and maybe to Beijing or Delhi, but vast numbers of communities around the world are excluded from the discussion. In response, I approached Francis Fukuyama about co-editing a volume of papers focused on digital technologies in emerging countries. And at a recent convening at the Bellagio Center, we brought together experts from around the world – including from the Global South – to discuss building a more substantial and permanent hub at Stanford for research, education, and analysis, offering the global perspective on technology policy that we feel is missing.
We have the outlines of the program, but we’re only set in our intentions, not our conclusions. Our hope is to fund and build a great program at Stanford, and also facilitate new connections between people from the Global South and Silicon Valley, with new partnerships that support local communities without fostering a brain drain. It’s important to do this in a constructive way that doesn’t replicate the wrongs of our colonial past. The harms of these new technologies are very specific, and there are millions of people who feel that the big technology companies are making crucial decisions about their lives, their educations, their democracies, and their freedoms from a place that’s completely out of reach.
People seem either terrified or excited about AI – but while I’m not in the “existential risk” school, I see a lot of reason for concern. What’s crucial is the principle that AI must be governed, and I want that governance to be democratic. I don’t want to live in a world where corporations make those decisions about our lives and societies instead. These are big questions, but meanwhile a lot of the attention goes to how cool ChatGPT is. Excitement can cloud people’s analysis, and unfortunately I think democracy is much more fragile than when I was first elected in 2009. Unaccountable technologies and their corporate governors pose a challenge to democracy – and we should not take it lightly.
Marietje Schaake is International Policy Director at Stanford University’s Cyber Policy Center, and an International Policy Fellow at Stanford’s Institute for Human-Centered Artificial Intelligence. From 2009 to 2019, she was a Member of the European Parliament for the Dutch political party Democrats 66. She attended Bellagio in 2019 for a convening titled “Designing a Future for AI in Society,” and she organized another convening in 2023 titled “Emerging Technologies in Global Contexts: Identifying Education and Policy Needs.”
For more of her insights into AI governance, see “AI’s Invisible Hand: Why Democratic Institutions Need More Access to Information for Accountability,” a report she authored for The Rockefeller Foundation in 2020. More information about her work is available on her faculty bio, or you can follow her on Twitter.