Community Perspective – John Tasioulas

Q&A with John Tasioulas, AI2050 Senior Fellow

John Tasioulas is Professor of Ethics and Legal Philosophy at the University of Oxford and the inaugural Director of the Institute for Ethics in AI at Oxford's Stephen A. Schwarzman Centre for the Humanities. These days his research focuses on the interaction between AI algorithms and the rule of law.

For his AI2050 project, Tasioulas is creating an ethical framework and guidelines for the development of AI systems. As part of this framework, he and his team are developing new human rights (or revisions to existing human rights) that Western societies should adopt in anticipation of the "age of AI." They will then create guidelines to help AI developers incorporate these rights into their systems. Finally, they are exploring approaches for incorporating AI into "deliberative democracy" efforts to make them more effective.

An example of a new human right is the "right to a human decision" for automated decisions. If an AI makes a decision about a person, should that person have a right to have the decision reviewed by a human? On the face of it, such a right makes sense: when humans such as administrators, regulators, and judges make decisions, it's frequently possible to appeal those decisions to a higher authority. In recent years, frustrated customers have learned to ask to see the manager or supervisor of a person whose decisions they question.


What does the "right to a human decision" look like in the age of AI? Is the review performed by a human using a more powerful, better-informed AI, or is it performed by a human who lacks data and is presumably more likely to be persuaded by factors such as the color of a person's skin or the attractiveness of their face?
John Tasioulas, AI2050 Senior Fellow

The way I understand the right to a human decision, it's not a right to appeal an AI-based decision.

It's primarily a right to have a decision made by a human rather than an AI system — one manifestation of Joseph Weizenbaum's thesis that there are some decisions no computer should be allowed to take. The devil is in the detail, in trying to figure out which decisions are encompassed by that right, since obviously not all decisions should be.

Many people, for example, would viscerally reject the idea that an AI system should determine their sentence of imprisonment in a criminal trial. Are they justified in feeling that way? And is that because there is a right to a human decision? 

You may be right that leaving such decisions to humans could lead to worse (e.g. biased) outcomes, although that’s a matter for speculation depending on both the legal system and the AI system in question. 

But outcomes are not all that matter in life; processes inherently matter too. This is one reason why people support trial by jury even if, for example, trial by experts would yield better decisions. The same applies to democratic decision-making procedures. People find great value in the process of being tried by one's peers, or of democratic self-government, even if a hypothetical legal expert or a benevolent dictator would make better decisions.

So even if AI systems could yield better decisional outcomes (a massive "if" at this point), there is still the issue of whether, for example, they could provide adequate (justifying) explanations for them, whether they could be answerable for them, and whether their decisions would manifest the kind of reciprocity and solidarity that can be achieved when one human stands in judgment over another human.

Outside of essays, lectures, and law review articles, is this project making any real progress?
John Tasioulas, AI2050 Senior Fellow

Yes, I think so. Qualified forms of the right to a human decision are embodied in Article 22 of the European Union's General Data Protection Regulation and in the Blueprint for an AI Bill of Rights released by the White House Office of Science and Technology Policy last year.

But again, it's one thing to announce such a right, another to specify its content in a workable and compelling way. It's still early days, and the work remains to be done.

How is this right part of an ethical framework for the deployment of AI?
John Tasioulas, AI2050 Senior Fellow

I'd say this right should remind us of two things about that overall ethical framework. The first is that the whole of human rights applies to AI and digital technology, not just the rights to non-discrimination, free speech, or privacy, which tend to hog the spotlight. Indeed, we may even have to add new rights to our existing schedule of human rights to deal with the challenges posed by new technologies. I believe the right to a human decision is one of these novel rights.

The second point is that we must resist the idea that the ethical framework for the deployment of AI is exhausted by human rights considerations.

Can you give some other examples?
John Tasioulas, AI2050 Senior Fellow

There are many ethical considerations that bear on the development and deployment of AI that are not human rights. For example, the environmental impact of AI systems cannot be exclusively a matter of respecting human rights; it's also a matter of our duties to animals and other parts of nature. Moreover, there are virtues we need to practice in the AI world, such as honesty, civility, and mercy, which again are not things that people necessarily have a right to. So, a right to a human decision would be a small but important fragment in this larger ethical mosaic. But what intrigues me especially about it is that it is a right that seems to have been called into existence by the emergence of AI, because AI is the first technology in human history that offers a real prospect of replacing human decision-making across a broad range of domains.

This seems like a pretty ambitious project. How do you get people to adopt it?
John Tasioulas, AI2050 Senior Fellow

Well, the first thing to say is that we are still at an early stage, and an important part of the project will involve dialogue with people in the tech industry, government, the NGO sector, and so on, as this is crucial to formulating a morally compelling and practically feasible interpretation of the project.

But beyond that, the hope is twofold. First, to publish work that is accessible to the general public and hence can enhance the quality of public deliberation about the ethics of AI, since history shows that radical change tends to be driven by bottom-up social movements. Second, to use Oxford and Yale as convening spaces in which we can help influence policy on these issues by engaging with those decision-makers in business and government who feel both a great burden of responsibility to get AI policy right and considerable uncertainty about what getting it right amounts to.

The framework seems to be pretty entangled with ethical concepts of Western societies and liberal democracy. How does a framework like this apply in countries like Russia and China?
John Tasioulas, AI2050 Senior Fellow

That’s a good question. I think in these cases it is important to distinguish between the attitudes of governments and the attitudes of citizens, especially in authoritarian states. 

We cannot assume that governments always reflect the genuine values of their citizens. In general, though, we all have to begin from some cultural background; we can't escape that. But we then have to reflect on our cultural assumptions and be prepared to have them challenged. In particular, we have to enter into dialogue based on goodwill with people from different cultures, provided they are willing to reciprocate that goodwill.

I think there are, crudely, two stages here. First, trying to identify what a sound AI ethics looks like, partly through the inter-cultural dialogue I have described (and there may not be a single uniquely best such ethics, but a range of eligible options, just as there is not a single uniquely best occupation for most people). Then, given that it is unlikely that everyone will agree with what we take the best AI ethics to be, we have to consider what compromises we are prepared to make with people who have different perspectives, including countries like China and Russia, in order to achieve consensus on workable practical standards domestically and globally.

Regarding the right to a human decision, I’m not at all convinced that one has to be a democrat or a liberal to find the prospect of an AI system having authority to sentence you to prison or to kill you on the battlefield an appalling, inhuman prospect. More generally, on human rights, it’s important to remember that almost all states purport (whether sincerely or not) to adhere to them, which does furnish some common ground for productive engagement.

A lot of people are saying that we had better adopt ethics and guardrails for AI systems now, because in a few years we will have lost the opportunity to do so. Do you agree?
John Tasioulas, AI2050 Senior Fellow

Well, they might be saying that because they think that in a few years AGI systems will be in control!

My view is that ethics, by which I mean standards regarding what a good life is and what we owe to others, is always playing catch-up in response to technological advances. I just hope we can put in place some key safeguards — safeguards at various levels, such as in our personal behavior, in professional codes of conduct, in corporate human rights due diligence systems, and in domestic and international law — before any major AI-driven calamities befall us. It's one of the urgent demands of our time.

What do you recommend to students who want to get involved in the ethics of AI?
John Tasioulas, AI2050 Senior Fellow

The key thing I would say to students is that getting a grip on the ethics of AI is a fundamentally interdisciplinary endeavor. It requires you to get out of your particular branch of expertise – whether it be computer science, law, philosophy, economics, or something else – and enter into serious dialogue with others in a spirit of humility, i.e., on the assumption that you stand to learn from them just as much as they stand to learn from you.

Computer scientists need to learn that they can't just assume that ethics is a data-driven exercise of optimization (maximal fulfillment of our preferences); philosophers need to desist from a priori assertions about what AI will or won't be able to do; and human rights lawyers need to abandon the view that all the ethical problems of AI can be dealt with by existing human rights standards that were first developed without any thought of AI.

The problems in AI ethics are fundamentally interdisciplinary, and this is why the arts and humanities have an important role to play, as I wrote here: The role of the arts and humanities in thinking about artificial intelligence (AI). 

It’s not easy to move out of one’s disciplinary comfort zone, but that’s what’s required. One of the best aspects of my job has been encountering many students and younger academics who are enthusiastic about this kind of interdisciplinary engagement. They are not yet set in their ways. They give me hope for the future.