As a philosopher, Tobias Rees studies how advances in AI challenge and change our understanding of what it is to be human – including the differences between humans and machines, and between nature and technology. In 2019, he attended multiple Bellagio convenings on AI, as well as a residency to further his “AI and the Human” fellowship.
Tobias’ Bellagio Reflections: “Everybody came, broadly speaking, from the humanities or the arts, and was concerned about AI. We had good conversations where we could try out our ideas, and there were interesting clashes in perspectives. Somebody from the European Commission said that we shouldn’t build technical systems that cannot be fully controlled and understood by humans. I pointed out that’s not so easy, and she looked at me and said, ‘Then don’t build them.’
“The fear of change, the rejection of newness that questions inherited values, was a bit shocking to me. I am squarely on the side of the new. Of course, that doesn’t mean I don’t find AI troubling. At times, AI leaves me at a loss and makes me insecure, despite its unbelievably exciting potential. But I think the best way to shape the new is to be curious about it, to embrace it. Pessimists defend the old while optimists build the future. That insight was a gift from Bellagio.”
Here, Tobias reflects on how his thinking has developed since 2019 and argues that instead of focusing on how AI can mimic humans, we should explore its potential to invent new forms of intelligence altogether.
For the last 400 years, we in the West have taken for granted that there’s an unbridgeable difference between humans and machines. At the core of this distinction was the assumption that humans have something that machines can never have: reason.
To early modern philosophers, reason was not one single thing. It was made up of intelligence, the capacity to distinguish true from false, consciousness, the power of language, understanding, and more. One reason that AI worries so many people is, arguably, that it defies that distinction between us and machines, which is at the very core of our self-comprehension. For example, there is now a big debate in AI about whether or not large language models (LLMs) are capable of “understanding.” Traditionally, at least in philosophy, understanding is only possible between humans because only humans are assumed to be self-aware; we’re able to wonder why we’re here, and what living is all about. True understanding is, at least during the fleeting moment of a conversation, a form of shared world-making between two people. LLMs have a different form of understanding, and differentiating multiple kinds of understandings, plural, was something we didn’t need to do before AI. How exactly to describe this other kind of understanding is a philosophical and a technical question that requires philosophers and engineers to work side-by-side.
As I see it, with AI the world got richer – and more curious. For the first time in history, there is now a second epistemic thing capable of intelligence, knowledge, and language. Given that until recently we were the only example of higher-level intelligence, it’s forgivable to think of AI in human terms. But it isn’t human. It’s an altogether novel, non-human form of intelligence. When an AI system learns, it builds models of reality that are very different from the models we humans build in our minds.
AI systems can show us aspects of reality – new cognitive and creative spaces of possibility – that we’re simply not capable of discovering on our own.
Tobias Rees, Founder and Chairman of ToftH
I love that AI allows us to multiply our concept of intelligence. Wouldn’t it be awesome to move from a homogenous capital-I concept of “intelligence” to a concept of intelligences, each with its own distinctive properties and qualities – and then explore those new horizons? And wouldn’t it be marvelous if we could build AI systems to invent even more kinds of intelligences, and make them available to the world?
If we do, how can we leverage that non-human power of AI? Right now, all of our methods for assessing AI measure how well it can match up against humans. Shouldn’t we invent new measures of non-humanness to measure the non-human intelligence of AI? That would be a powerful achievement, opening up new directions for AI research and, I imagine, also liberating engineers to build AI in ways that don’t just mimic the image of humanity.
Some commentators suggest that we should think of AI as we think of other machines or tools. I consider this bad advice – don’t take it. AI is radically different, and confusing the two misses what makes AI special. The value of a tool is usually reducible to the virtuosity of the humans who use it – like a violin. It’s just a piece of wood with some screws and strings, until a virtuoso makes it sing. I don’t think AI is the same. Its value lies precisely in an agency of its own, one that is not reducible to the humans who built it or to those who use it. We can foresee a world where we regularly collaborate with non-human intelligences on projects that were unthinkable before the emergence of AI.
Most people I have spoken to and worked with over the last few years agree that AI presents social and ethical dilemmas, because it defies some of the organizational and political norms that make it possible for humans to live together in societies. However, those old norms won’t help us solve these new challenges. They miss the newness and the specificity of AI, especially in a world where most humans live in networks that run diagonal to national societies. These networks are more than human, and AI systems are a rapidly increasing part of them. The age of defining “society” as a territorial and exclusively human thing may well be over. Whenever you have a situation where something genuinely new occurs outside of the frameworks we’ve been relying on, you need collaboration between the curious and the concerned. You need the people who build the new world, and the people in charge of regulating it, to work closely with each other. I’m not sure this is happening yet, but I’ve seen various exciting efforts to take first steps.
AI is a true philosophical event, by which I mean that it creates new realities that we cannot understand or navigate with our old concepts. We need new vocabularies for what it is to be human, and new concepts that allow us to navigate the novel worlds enabled by AI. I think that the general public feels the same. In April 2023, The New York Times had an article on its front page asking whether AI, or any form of “true” intelligence for that matter, actually needs a body. Until recently, that would’ve been an article in an obscure philosophy journal. When a question like that goes mainstream, it indicates that we truly live in philosophical times.
Tobias Rees is the Founder and Chairman of tofth.org. He was formerly the William Dawson Chair at McGill University; Reid Hoffman Professor of Humanities at Parsons and the New School; Director of the Los Angeles-based Berggruen Institute; and a Senior Fellow with CIFAR, the Canadian Institute for Advanced Research.
For more about Tobias’ work, visit tofth.org or follow him on Twitter.