Linda Eggert
2023 Early Career Fellow

Affiliation: Incoming Associate Professor, University of Oxford

Hard Problem: What it means to be human in the age of AI — or John Maynard Keynes’ problem when he noted, “Thus for the first time since his creation man will be faced with his real, his permanent problem—how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.”

Dr Linda Eggert is an incoming Associate Professor in Philosophy at the University of Oxford. Her work spans issues in moral, political, and legal philosophy. Linda is especially interested in non-consequentialist ethics, the ethics of rescue and defensive harming, theories of justice, and, inescapably, the relationship between human rights, democracy, and the moral and political implications of AI. Before taking up her current post, Linda was an Interdisciplinary Ethics Fellow at the McCoy Center for Ethics in Society at Stanford University, a Fellow-in-Residence at the Edmond and Lily Safra Center for Ethics at Harvard University, and a Technology and Human Rights Fellow with the Carr Center for Human Rights Policy at the Harvard Kennedy School. Linda has also taught at Apple University. Through this fellowship, Linda will advance her project “The Ethics of Delegating to AI”, which seeks to help us better understand what, if anything, of moral significance is lost in eliminating human decision-makers in central areas of human activity, and what we owe to one another, including as citizens of liberal democracies, as AI becomes increasingly powerful and prevalent.

AI2050 Project

Linda Eggert’s AI2050 project examines the ethics of delegating to AI. Its overarching question is: what moral responsibilities do we have to one another as we face unprecedented technological possibilities of delegating decisions to AI? What of moral significance might be lost in eliminating human decision-makers, and do we have a right against algorithmic decision-making in certain contexts? By engaging central debates in moral and political philosophy – including debates about rights, democracy, and justice – this project sheds new light on the fundamental question of what it means to be human in the age of AI.