Bellagio Conversations in AI

Tim O’Reilly on AI’s Role in the Attention Economy

AI is not the first technological revolution that media publisher and author Tim O’Reilly has experienced – he popularized the terms “Web 2.0” and “open source.” But after decades of analyzing patterns at the intersection of technology and the economy, he believes AI offers a new, and more productive, way to engage with endless content.

Tim’s Bellagio Reflections: “We need an economic theory for the information age that explains how value is created and distributed, but also how companies can abuse the positional advantages they earn to keep more of that value than they should, so that they become a blockage in the ongoing creation of new value. I’ve been trying to understand why companies can start out idealistic, yet eventually become like all the rest – as seen when Google removed its corporate motto, ‘Don’t be evil,’ from the preface of its code of conduct in 2018.

“Together with my fellow Bellagio attendee, economist Mariana Mazzucato, I explored how the concept of economic rents can be applied in the digital era. We realized that in two- or three-sided markets, in which users typically get services for free, platforms use their algorithms and designs to encourage users to spend excess time on the platform, which can then be monetized on the other side of the market. We call these ‘algorithmic attention rents.’ We’ve also written about the need for enhanced disclosures in markets for new kinds of technologies. We developed these ideas into a research proposal that ended up funded by the philanthropic firm Omidyar Network.”

According to Tim, we don’t yet understand how attention rents will show up in emerging AI services; that’s a focus of future research. But as he sees it, we already know that AI can help us navigate unimaginable oceans of online information, as long as we regulate it carefully.

The interesting questions about AI revolve around both its transformative potential and how we govern that potential – and every time there’s a reset in the computer industry like this, the lessons of history give us an opportunity to do things better. AI is one of these big resets; this new technology is capable of not only saving consumers’ time, but also empowering them to be more creative with information.

As Herbert Simon wrote in his 1971 paper discussing the attention economy, “A wealth of information creates a poverty of attention.” The history of the internet is one of putting machines to work to save us time in both navigating abundant information and managing our attention to that information. But things have changed, and many algorithms are now actively taking our attention. The current focus on understanding AI’s impact on the future of human flourishing and well-being is good, because AI could radically change the economy for the better – but it could also radically change it for the worse.

Currently, the internet is defined by two-sided markets; users are drawn in with free services, but what are they getting in exchange? The argument has been that the companies that offer those free services are monetizing our data, or even that they’re “stealing” our data. I think that’s the wrong argument. The right argument is that they’re misdirecting our attention; when we go down a rabbit hole on YouTube or Google, that’s not providing a service in our best interests. They extract “attention rent” as well as monetary rent. This is the result of 40 years of neoliberal economic theory, the ideas of which have made our corporations fundamentally antisocial. There have been attempts to push back against it – for example, with ESG [environmental, social, and corporate governance] standards – but we don’t have a new theory of the economy yet that says what should replace it.

As the economy increasingly shifts from one based on objects to one based largely on information, we need to be thinking about what that will look like, and how to create policies that will lead us in the right direction. In this context, AI offers enormous possibilities for new kinds of discovery.

  • The combination of AI and science, or AI with culture, will deliver profound benefits when rooted within this larger economy of information abundance.
    Tim O’Reilly
    Founder and CEO of O'Reilly Media

This is alongside many of the other ways in which AI will help us deal with the world’s largest problems, some of which I explored in my 2017 book WTF? What’s the Future and Why It’s Up to Us. By automating intellectual work, AI can help us manage the demographic inversion, in which many countries will have more elderly people than young people. It will give us more leisure time to spend on friendships and creative pursuits, as well as on caring for each other. Lifestyles that are currently available only to the wealthiest could be available to many more people.

However, while there is enormous potential for enhancing human creativity and productivity, we need to use this AI moment as one for deep self-reflection about who we want to be, and how we want to act. When an AI shows us bias, we should trace that bias back to its source. For example, racial bias in sentencing algorithms originates in the biased decisions of human judges. It takes humans to elicit that bad behavior; a model doesn’t just do it on its own. We need to fix us, not just the mirror.

In response, I’ve been spreading the idea that we don’t yet understand this technology nearly well enough to regulate it effectively. The very first set of regulations should be ones designed to increase our understanding. That means formalizing the ways companies currently govern their AI. However, my approach is slightly different from that of most policymakers and activists. These companies all say they want their AI to be fair, unbiased, and helpful to humanity – but what do they actually measure? We don’t know the details of their attempts at self-regulation. There have been many horror stories in the press about AI, and safeguards against misuse are being built reactively. Instead, I think a repeatable metrics framework – akin to financial reporting, but focused on the “operating metrics” that companies use to evaluate and manage the systems they create – would be an ideal place to start.

Those standards can then evolve as we learn more. We can also see what’s not being measured – after all, these are often large centralized systems, and they could just as easily report the number of people using AI to do bad things as any other usage statistic. We want a framework that encourages a lot of experimentation, but we also don’t want companies to go rogue.

I’m optimistic about the very same things that I’m afraid of. In the Whole Earth Catalog, Stewart Brand wrote, “We are as gods and might as well get good at it.” And then there’s Stan Lee’s line from Spider-Man: “With great power comes great responsibility.” They express the same sentiment. We are unleashing amazing capabilities that can be used for good – tackling climate change, geopolitical conflict, and economic inequality. But they can also be used for evil, and we must take that seriously. If our old patterns are built into the systems of the future, they will reinforce the idea that only a select few people are meant to become insanely powerful and insanely wealthy – thereby reinforcing one of the worst biases of all.

Explore more

Tim O’Reilly is the founder and CEO of O’Reilly Media, an American learning company that publishes books and provides an online technology learning platform that is used by thousands of companies and millions of users worldwide. He is also a Visiting Professor of Practice with the Institute for Innovation and Public Purpose at University College London. Tim attended a Bellagio convening in 2019 titled “Designing a Future for AI in Society.”

For more of Tim’s insights on AI, he authored “We Have Already Let The Genie Out of The Bottle” for The Rockefeller Foundation in 2020. To find out more about Tim’s work, read his bio on O’Reilly Media, or follow him on Twitter.