Creating robots that can interact with the world just as humans do is many AI scientists’ dream—but for Fei-Fei Li, how we realize that goal is just as important as achieving it. As a pioneering AI researcher, Li’s work has consistently been guided by the principle that AI should serve genuine human needs.
“I believe getting machines to help us empowers global productivity and prosperity—but we have to do it right,” says Li. “That’s why my work is always human-centered.”
Fei-Fei Li is a 2023 AI2050 Senior Fellow and Professor of Computer Science at Stanford University, as well as the Denning Co-Director of Stanford’s Human-Centered AI Institute. She is an elected member of the National Academy of Engineering, the National Academy of Medicine, and the American Academy of Arts and Sciences. In 2024, she co-founded and became CEO of World Labs, a spatial intelligence AI company.
Li’s current research interests are in developing human-centered AI and embodied intelligence for healthcare. The goal of her AI2050 project is to develop robotic prototypes that can perform 1,000 everyday household tasks. Her work addresses Hard Problem #1, which concerns surpassing current scientific and technological limitations in AI to enable further breakthroughs.
The AI2050 initiative gratefully acknowledges Fayth Tan for assistance in producing this community perspective.
I’m fascinated by the possibility of intelligent agents that understand and interact with the world, with the goal of being collaborative and empowering people. Our society has so much need, [whether] that’s people in need of help or work that needs to be done in a safer, more productive way.
The science that fascinates me is marrying deeper spatial intelligence, the ability to see, understand, and reason about the 3-D world, with embodied intelligence. When we think about robots, we tend to think about movement, but it’s actually a higher-order ability, whether that’s grabbing a cup of water or slicing up an apple. It’s about making sense of how to move and what to move, as well as how to plan for moving, all of which involves a great deal of visual and spatial intelligence.
Bringing data into robotic training is much harder than collecting pictures. One of the projects supported by Schmidt Sciences is called BEHAVIOR. A lot of robotic research demos are very contrived, like placing a colored block on top of a differently colored one. BEHAVIOR simulates an everyday human environment to push the envelope and train robots to do complex everyday tasks.
We believe that if we can train the robotic brain with a lot of simulated data, we can bridge the gap to the real world with a much smaller amount of real data, since real data is hard to collect. It’s [also] open source, because [we] want to open it up to everybody, [and for] everybody to use it for benchmarking.
Humans could use robotic help in many situations—search and rescue is a great example. It’s dangerous for the people who are trapped, but also for the rescuers. That’s the first area where we think robots can do a lot, because we don’t want to put anybody in harm’s way.
Second, machines can change the nature of labor. In agriculture, the worker population has decreased dramatically over the past 200 years. Today, a lot of agricultural work that is labor-intensive, repetitive, and not very productive when done by humans is done by machines. Humans do different things instead, like drive the machines. We can work with policymakers and economists to recognize and forecast these changes so that we’re prepared.
The third area is sectors facing a labor shortage, such as elderly care and healthcare. We’re not only lacking nurses or home care workers, we’re also taking people out of the workforce—especially women. In many societies, women are essentially compelled into doing domestic labor that no one else wants or is available to do. When people are surveyed about what they’d want robots to help them with, most answer household chores like cleaning toilets or washing floors.
Nobody can take human care, the support of family and friends, away. But having machines in the loop of care could return dignity to both caretakers and people who need care. For instance, the dignity of women who spend years in the home doing care work they might not want to do, or perform without financial reward. Many elderly people I talk to bring up the idea of self-respect. Some might not want another person to assist them when they are going to the bathroom, for example. Going by themselves would be best, but if not, doing it with the assistance of a machine that they trust would give them their privacy and dignity back.
The concentric rings of human-centeredness are made up of the individual, then the community, and then society. At the level of the individual, one of the most profound wrongs of machines is when they take away human dignity. Years ago, I interviewed a warehouse worker who was put on such an intense workload by machine surveillance that he suffered injuries as a result. This is where we are not using machines in the right way, and it tends to hurt communities who have less say in how we use them.
Every time we develop technology, we develop a double-edged sword. It can help us, but it also reshuffles power, money, and the relationships between those who wield the technology and those at the receiving end. We have to be careful, because when that happens, vulnerable communities don’t necessarily come out on top.
For example, machine learning bias in computer vision impacts people of color. But the source [of that bias] is humans. Humans generated biased data, humans curated biased data, and humans failed to recognize the bias and propagated it into machines. Blaming machines is wrong. We should take accountability and be keenly aware of community-level impacts.
On the societal level, there’s the changing nature of jobs and tasks. We might not mind automating some tasks in our jobs, since that changes the nature of the job without taking it away. But how do people stay skilled as workers? How do they deal with a wholesale job change? We need to forecast and understand the impact of these changes, and have good education and policy that addresses them.
I always encourage young people to chase their passion, because human creativity is an extremely precious thing that’s unleashed when we’re passionate about something. I encourage young professionals to think about AI as a benevolent tool and to think about benevolent usages and applications. Use that as a guiding light to find your passion.