A sentence that Tobias Rees often hears while working with AI engineers is: "We've run out of metaphors for how to think." "I love that sentence because it indicates that we've left the terrain of the already known, that we find ourselves in an outside in which our established terms of thought and practice don't quite work," says Rees. "And I think, okay, which metaphors fail? Why? Where are the fault lines? And how can they serve as bridges into the new?"
Tobias Rees is a 2023 AI2050 Senior Fellow and the founder and CEO of Limn, an AI studio located at the intersection of philosophy, art, and technology. Prior to founding Limn, Rees was William Dawson Chair at McGill; Reid Hoffman Professor of Humanities at Parsons/The New School; Director at the Berggruen Institute; and founder of ToftH.
Rees studies the history of thought: how underlying assumptions organize the way a group of humans thinks, lives, and understands the world at a given time. For his AI2050 project, working with researchers and engineers, he will explore the fundamental concepts and assumptions that shape the way we build and think about AI. AI displays abilities, such as learning or communication, once thought to be exclusive to humans or living things, yet it is neither human nor a living thing. It resists the existing categories of "human/living thing" and "machine." By following how concepts such as "human" and "machine" emerged, evolved, and have been challenged by AI research, Rees lays the philosophical groundwork for how we might think about, and build, a technology that confounds our presuppositions.
A new, distinct system of thought developed around AI may seem like an abstract hypothetical. But Rees points out that it wasn't so long ago that we assumed human thought everywhere was organized by the same categories. He mentions a 1924 talk at the Société de Psychologie by the sociologist and anthropologist Marcel Mauss. Mauss stated that after decades of study, he was convinced that different human groups had, over time, developed many different categories of mind, saying, "there are still many foreign moons in the firmament of reason."
“Maybe,” Rees asks, “AI is a new moon in this firmament?”
The AI2050 initiative gratefully acknowledges Fayth Tan for assistance in producing this community perspective.
Your work deals with AI as a philosophical event. What about AI made you sit up and say “This is different, this is something we should pay attention to”?
There are three driving questions for me. First, can we discover the concepts we live by, the many conceptual assumptions that we inherited from the past and that shape how we think and understand? Second, can I write the history of these assumptions, back to the point where they didn't exist? And, third, is there anything in the here and now that is so radically novel or different that we cannot think about it in terms of the assumptions we have inherited from the past? I am deeply curious about things that break away from the already known. I think that's the case with AI.
One way of putting it is that AI has qualities, or at least participates in qualities, that we thought only humans or living systems could have, like learning, or communication, or having agency. By "agency" I mean that AI systems do things on their own, on their own terms, on the basis of what they have learned. This agency is not programmed and hence not reducible to the people who built the system. Even though it has agency, AI is neither alive nor human. Instead it is a technical system. So I have two categories: human or living thing on the one hand and technology or machine on the other, and AI fits neither. It's kind of an in-between thing. I'm really curious about this in-betweenness: AI lies outside of the categories that thus far have stably organized our world. I want to understand this outside, to develop new vocabulary, to make it navigable.
Do you think that AI represents a destabilization of the boundary between human and machine? Is this a source of tension or conflict?
Yes, I think so. It is a destabilization of the categories that we took for granted as timeless. But it is helpful to recall that these categories are not timeless: there was a before, and now there is an after. A new, still nascent and unknown space of possibility emerges, one that lies outside of the space we existed in.
I’m really curious about this new space. But it is not easy to explore it. As a philosophical researcher, I can diagnose and identify old concepts that we inherited from history that no longer work. But that is different from exploring the contours of the new. With AI, you cannot know the new unless you actually build it. It’s as if one had to do philosophical research in terms of engineering –– or engineering R&D in terms of philosophy.
This is challenging because the majority of AI labs and companies build AI as a machine. More specifically, they build AI so that we can push the automation paradigm we inherited from the industrial revolution into domains of work that we thought only humans could do. On the one hand, I have no critique to offer; I understand the impulse. On the other hand, I think it is a mistake to keep AI in the machine box. The good thing is that there are quite a few engineers who are eager to do engineering that identifies what I would call the philosophical newness of AI. I think that understanding this newness is a key to building maximally useful AI systems for humanity. One of the focuses of my fellowship is to find ways to un-differentiate philosophical research and AI research and development, though how to do that, no one knows yet.
How do you create space to think about that ambiguity?
Most generally, I think, I offer examples that make available different reasons for building AI.
For example, if you look at how humans understand and play chess, you find that most creative innovations were modifications of well-established understandings of the game.
Now, let's compare this with the rather dramatic ways in which AlphaZero (an AI program created by Google DeepMind to play chess, shogi, and Go) changed the game of chess: it introduced moves that are difficult to describe as anything other than 'coming from outside of the human history of chess playing.'
Humans have a certain mental map of the game. We do innovate, but innovation is always with reference to the mental map we already have, a sort of mental model, if you will. AlphaZero came from 'outside of history.' It learned how to play chess by playing against itself, unencumbered by how humans understand and play the game. AlphaZero built a mental map of the game that is different from –– that is, mostly outside of –– the one we have.
AlphaZero made available a logical space of possibility that was not there before, one discontinuous with the logical space we already knew and within which we had played thus far. If a player enters this novel space, they can think thoughts they couldn't think before. I think that adding such new mental models or logical spaces is one of the superpowers of AI.
Is there an analog from how we’ve changed in our thinking about non-human intelligence in animals, for example, that might provide a roadmap for how we might change the way we think about intelligence in AI systems?
We've thought for a long time [that] humans are the only instance of intelligence. Animals were perhaps considered to have some intelligence as well. However, their intelligence was assumed to be of a much more limited kind than the intelligence we humans have.
One of the most beautiful things about machine learning is that it offers a very different concept of intelligence. That is, intelligence as the capacity to learn. And what enables learning? Neural structures.
Neural structures come in many shapes and forms. For example, some intelligent things have a central nervous system, others have a distributed nervous system. Some have a cortex, others don’t. Birds, for example, do not have a cortex, which is why historically they were considered stupid. Now we know that several bird species have cognitive skills on par with non-human primates.
In short, it turns out that intelligence must be thought of in the plural. Instead of capital I Intelligence, we should think of many different kinds of intelligences, each enabled by a unique neural structure.
And I can push this further: Some intelligent things have biological neurons, and others have artificial neurons. I do not necessarily mean to say they are the exact same thing. I am interested in the new, different intelligences that artificial neurons can add.
This goes back to the beginning, to AI being neither human nor machine. [It's] a specific kind of intelligence: what could it be if it's neither this nor that?
How would you introduce a scientist or researcher to thinking about their work philosophically, particularly when they might not have done it before?
With some of the money from the AI2050 Fellowship, I’m doing a podcast called In The Wild. We call the podcast In The Wild to convey that this is not philosophy in the ivory tower but in the real world.
The idea is to invite AI researchers to conversations in the course of which I try to discover what about their work is philosophically new. To do this, I listen to their work, to what they do. And then I go back in my mind to the history of thought, to some of its key categories, like how their work relates to society, human nature, technology, machine, organism, and so on, and I look intently for anything in their work that lies outside of the history of thought.
And then I tell them.
Very often, the scientists and engineers I talk to love it. In this moment, their work gains a philosophical dignity. It's a moment of pride, and rightly so. But once they have this experience of philosophical dignity, they're also liable to ask what can be done to attend to these insights, and at that point we can begin to do collaborative work.
I love that phrase, the experience of philosophical dignity. I've seen that look on scientists' faces before, and it's always quite something. And after that experience, there's no going back, is there?
Sometimes people think that's a trick of mine, to fool them into this philosophical dignity. But it's such an honest experience. A good friend of mine is a theoretical physicist who tried to take some philosophy classes in college but couldn't quite enter the conversations. It was a cultural mismatch. But the philosophical questions didn't go away, ever. Among the many, many engineers who build AI, there is going to be a deep philosophical set of experiences. To discuss these experiences without judgment, to discuss their work in a way that it truly gains its full philosophical relevance as a sort of experimental philosophy: I think for some, it's a relief. And then collectively, it's an experience of joy. And that is exactly why I hope to bridge this abyss, the abyss that traditionally sets philosophical research and experimental engineering apart.