Community Perspective – Linda Eggert


The trolley problem is a well-known philosophical thought experiment: would you pull a lever to redirect a speeding trolley, sacrificing one person to save five? 

Would you feel differently about the answer if the entity responsible for that decision wasn’t human, but AI?

Though this is an exaggerated hypothetical, policymakers are already grappling with the ethical dimensions of AI decision-making. While most discussions focus on avoiding harmful consequences such as the impacts of biased algorithms or deepfakes, Linda Eggert thinks there is another important aspect to the discussion: do we lose anything of moral significance when we delegate certain decisions to AI?

“Instead of applying a theory, you’re working from the ground up and identifying what’s morally relevant,” says Eggert. “Morality is multi-dimensional—whose rights are implicated? What sacrifices might you have to make? How are you going to respond to them?”

Linda Eggert is a 2023 AI2050 Early Career Fellow and Assistant Professor in Philosophy at Stanford University. Her essay “Duties to Rescue and Permissions to Harm” won the American Philosophical Association’s 2023 Frank Chapman Sharp Memorial Prize, awarded every two years to the best unpublished essay or monograph on the philosophy of war and peace. Eggert’s work is in moral, political, and legal philosophy with a focus on non-consequentialist ethics. Her AI2050 project, “The Ethics of Delegating to AI,” explores issues such as how to justify a right against automated decision-making and how human rights should shape the technologies we develop. This project addresses Hard Problem #10, which explores the question of what it means to be human in the age of AI.

The AI2050 initiative gratefully acknowledges Fayth Tan for assistance in producing this community perspective.

What ethical or philosophical frameworks do you consider most important in your work?

Not least in Silicon Valley, with its enthusiasm for optimization, a popular assumption is that the most important thing is to bring about the best consequences. I push back on that in my work, which is a bit more pluralistic. Of course, consequences are important—but many other things matter as well, for example, rights or considerations of fairness. There is a plurality of moral factors.

What questions or approaches might help people think about ethical issues in a more pluralistic way?

I think moral reasoning itself is ultimately pluralistic. I don’t think you should apply one theory to every problem you find. Start with the concrete problem that you’re trying to understand and identify what’s morally relevant. 

How people are affected, or whose rights and interests are implicated, is often important, but that isn’t necessarily the end of the question. Sometimes rights conflict, and sometimes you have additional responsibilities, such as recompensing people whose rights you might have infringed.

Morality is a lot more multidimensional than just working out what you ought to do. It also involves considering what sacrifices you might have to make, and how to respond to those sacrifices.

How has AI created a need for different ways to think about rights?

One question is how we should think about justifying potential new rights—the right to a human decision, for example. You might think about consequences, like accountability, or people being able to advocate for themselves, and so on. Alternatively, you might think about what it means to say that people are the kinds of beings about whom certain decisions should be made by another person, with certain kinds of capacities.

It matters not just what rights we have, but also why we think they’re important, whether they matter to further some other good or because of what they say about us.

Your research efforts tackle many issues across AI. Do they share common intellectual concerns?

I think they’re all united [in asking] to what extent familiar moral principles apply once we broaden the scope of concern. How do concerns about morality and justice apply to AI and agents that aren’t human?

There’s also the broader motivating concern of what we owe one another as humans, especially as we continue to make AI more powerful and more prevalent. Different elements of the project examine different dimensions of that question. Some are more abstract, like what’s at stake once we start delegating decisions to AI. But some are more concrete, like how we regulate life-and-death decisions in the context of self-driving cars and autonomous weapons systems.

What questions about AI would you want more people to be aware of?

First, even if AI could do a better job, there might be some things that we think are valuable to do ourselves. You might think about the idea of self-governance, or maybe writing your wedding vows. I think one question worth thinking about is what those things are. 

Second, how can we best exercise agency over our technological future? Sometimes, the discourse sounds almost deterministic, as if there’s something inevitable about AI’s role in our lives. But as democratic citizens, we have agency over our future. How do we make sure people get to decide what AI is going to be used for, rather than it being a thing that just happens to us?

Recently, we’ve seen many higher education cuts, and humanities disciplines like philosophy have suffered most. What do you lose when you lose access to philosophy?

The short answer is everything. Clear thinking, the reflex to question everything, pursuing the truth for its own sake, freedom from ideology and dogma…asking questions like why does the truth matter? What is the right thing to do? And what makes something the right thing to do? 

Questions like these have preoccupied people for millennia, but the nature of their urgency changes. Now we’re grappling with, say, the value of truth in part because of deepfakes and misinformation. And we’re trying to work out whether, if a person may do X, this means that a robot may also do X. It’s humbling for philosophers! And exciting, because AI is a huge opportunity to learn about the world.

We need philosophy, and the humanities more broadly, to keep alive ways of thinking and the skills we need to work out what we care about and to shape our own lives and future.