Bellagio Conversations in AI

Mary L. Gray on the Power of AI Access for All of Humanity

Throughout history, revolutionary new technologies have required new approaches to regulation and oversight. In her upcoming book aimed at technologists interested in responsible computing, Banality of Scale: An Anthropologist’s Fieldguide to the Future of Computing, anthropologist and Microsoft Research senior principal researcher Mary L. Gray includes AI as the latest example of a technology whose production and distribution have profound implications for human rights.

Mary’s Bellagio Reflections: “I went to Bellagio thinking I had already completed a very rough draft of a book, and while there I tried to answer this question: ‘What can we do when it seems inevitable that so much data about us will be collected, and so much modeling is based on superficial examples of everyday decision-making scraped from the web?’ The residency gave me even more conviction. We are in a moment of opportunity. The most important thing we can do is take advantage of the need for large-scale data to develop AI by demanding that this experimental research be held accountable to a broader set of social obligations, since AI must learn from people and people’s data.

“The lack of public governance of AI is having such an impact already. There’s a palpable feeling that we’re all trapped in a race among a few companies and countries without a voice representing the public’s interest. I was fired up after Bellagio, convinced that there really was something we could say – something that I have to say – and it was sharpened by conversations with the other residents about both the history of AI and the possibilities for its future.”

Here, Mary argues that measuring what AI can or cannot do is a distraction from a more crucial task for policymakers and engineers: ensuring its development is democratic, open, and regulated.

When it comes to AI, and the development of large language models in general, my fear is that we’re worrying about the wrong things. Focusing on philosophical concepts like sentience, or even AI’s ability to make individual decisions in isolation, distracts us from the crucial opportunity we have to bring more public governance and engagement to the direction of this technology’s development. In spite of what many of us think, AI cannot generate ideas and decisions on its own; it needs a continuous stream of human decision-making to advance. Our biggest challenge is to realize that, in all cases, the benefits and successes of AI hinge on the conscious decisions we make about when, why, and how we use it – and on who makes these decisions.

The history of technological innovation gives us a good reason to pay attention to the so-called margins where we imagine technology trickles down in society. Whether it was the telegraph, the internet, or smartphones, the reality was that new technologies followed the flow of power and money, even if in principle they should have been accessible to everyone. The threat with AI is that we create yet another technological divide that people have to work to overcome. I study those moments of hubris, when that belief that technology will inevitably flow out and down resurfaces. Given the power imbalance involved, there is no evidence that technologies in and of themselves create level playing fields. Tech, in fact, sharpens divisions that already exist, unless active efforts are made to counter such trends. If we believe in the power of this technology – and I do – then we must understand that that power comes from how we share it, and its value, with the full range of society, not just those individuals seen as its paying customers.

I believe that the most important thing we can do right now is realize that the most creative, effective, and productive uses of generative AI are going to come from us being very aware, collectively, that its value to humanity is dependent upon the breadth of humanity having access to it. It’s vitally important to see access to, and oversight of, AI as a social challenge and activity. We should work to actively move new technologies into new places in our lives, and to do so globally and equitably.

This goal requires three priorities. The first should be to ask ourselves, “What are the decisions we want AI to be making, and who are the people being left out or pushed out of decision-making as a result?” A model, by definition, is less detailed than reality. Its equation necessarily leaves out much of the rich complexity of our social worlds, which in turn means leaving someone out. For example, anyone who hasn’t generated a digital signal of how they make decisions isn’t included – that means many oral cultures and all sign languages globally.

“Any AI does not represent some kind of ‘typical’ humanity, and its responses are always relative to the data we’ve given it.”
– Mary L. Gray, Senior Principal Researcher at Microsoft Research

We should always have transparency about what kind of data was used to train an AI model – what is missing, how that decision was made, and who decided to exclude certain human signals from their “foundational” model.

Our second priority should be to ask whether foundational consent was sought. If a group of people consented to contribute data to train an AI model, did they actually understand what they were agreeing to? Governance requires that people be made aware that they have the right to choose whether or not they want to be part of training a large-scale AI model. The future of AI will not be productive if its development isn’t rooted in mutual agreement and societal harmony.

Our third priority should be to get away from this sense that once a model has been released, that’s it. Currently, models are being created by institutions and private companies without guardrails or ongoing monitoring. There should be a regulatory commitment and responsibility, on the part of anybody profiting from a generative AI, to be transparent about its risks and benefits from development to deployment, including its integration into other systems. What did the creators think would happen before developing it, and what type of impact does it have after it’s released? Are the creators keeping track of their responsibilities?

At the moment, outside of privacy and security, we are missing meaningful regulation for the tech industry. History offers important lessons about what happens when powerful industries break public trust. For example, in the field of biomedicine, we expect clinical trials to be conducted with respect, transparency, justice, and beneficence, with clear statements of risks and benefits for participants. Those guardrails didn’t appear out of the blue – public health and related industries, which depend on public participation to innovate, learned some very hard lessons in the 1960s and 1970s when they were regulated for trampling over public expectations.

We also know from history – from the telephone, the railroads, and other utilities that the public came to depend on – that they’re not just a “nice to have.” They’re essential, and so we’ve found ways to regulate them. In the case of AI, which likely will be neither “just” a product nor “just” a service – especially given that it relies on people interacting online to build its datasets – the private sector should be held accountable for a set of public obligations that companies take on when they have the power to shape how society operates at scale. I can imagine AIs being labeled with a “nutrition label” of sorts; in fact, there’s a great organization, the Data Nutrition Project, that does just that, offering details about where a model’s information comes from, where it will be going, and who to blame for inappropriate outputs.
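To make that idea concrete, here is a minimal sketch of how such a label might be expressed as a structured record. The field names and example values are illustrative assumptions made for this article, not the Data Nutrition Project’s actual schema.

    # A minimal, hypothetical sketch of a dataset "nutrition label" for an AI model.
    # Field names and values are illustrative assumptions, not an official schema.
    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class DatasetNutritionLabel:
        dataset_name: str
        sources: list            # where the model's information comes from
        intended_uses: list      # where it is meant to go once deployed
        excluded_signals: list   # which human signals were left out
        consent_basis: str       # how contributors agreed to inclusion
        accountable_party: str   # who answers for inappropriate outputs

        def to_json(self) -> str:
            # Serialize the label so it can travel alongside the model.
            return json.dumps(asdict(self), indent=2)

    # Example with made-up values:
    label = DatasetNutritionLabel(
        dataset_name="example-web-corpus-v1",
        sources=["public web crawl, 2019-2023", "licensed news archives"],
        intended_uses=["general-purpose text generation", "search summarization"],
        excluded_signals=["languages without written corpora", "sign languages"],
        consent_basis="terms-of-service scraping; no individual opt-in",
        accountable_party="Example Corp AI governance office",
    )
    print(label.to_json())

Even a simple record like this makes visible the three things the label is meant to surface: provenance, intended use, and accountability.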

We have a fundamental human right to claw back our autonomy, and we have the right to be respected for who we are, who we engage with, and what those connections mean to us before they are extracted and then dumped into a model that may erase our value as individuals. And we must assert these rights, because those same three things – who we are, who we engage with, and what our connections mean – are also the ingredients of AI.

These rights are not just rights of consumption, but of our essential humanity. At stake are our citizenship, our humanity, and our very global flow of connections. That’s what we built over the last 40 years through the large-scale diffusion of computing. We can either walk away from the internet and every other infrastructure that’s been digitized in that time, or we can realize that we’re in it too deep not to reorient AI towards what we truly want and need from technology.


Explore more

Mary L. Gray is Senior Principal Researcher at Microsoft Research; Faculty in the Luddy School of Informatics, Computing, and Engineering at Indiana University; and Faculty Associate at Harvard University’s Berkman Klein Center for Internet and Society. Her work focuses on how people’s everyday use of new technologies can transform labor, identity, and human rights. In 2020, she was named a MacArthur Fellow for her contributions to anthropology and the study of technology and digital economies. In September 2022, she was a resident at Bellagio, where she worked on her upcoming book Banality of Scale.

For more information on Mary’s work, you can visit her website or follow her on Twitter.
