John Zerilli is a Chancellor’s Fellow (Assistant Professor) in AI, Data, and the Rule of Law at the University of Edinburgh. Zerilli started his career as a judicial clerk and spent three years as a lawyer before changing careers, earning a PhD in cognitive science and philosophy, and then carrying out research at the universities of Oxford and Cambridge. His most recent book, A Citizen’s Guide to Artificial Intelligence, was published in 2021. He is working on Hard Problem #9 (ensuring that human governance institutions keep up with and harness AI progress for the benefit of society).
Is there a difference between traditional automated decision-making on the part of governments and new decision-making using AI?
I think so, yes. Old-school expert systems just didn’t pose the same issues. They were predictable, interpretable, and limited in their applications. Some of them functioned as more or less simple scaffolds to assist the decision-maker. For example, they might be structured as flow-charts: the decision-maker would follow the steps and be prompted to check this or that (“Did the welfare recipient work in the past three months? If so, remember to check their payments over this period”). Others could handle more of the actual decision-making, but because they were limited to [certain conditions], borderline cases were beyond them. The technology today is on a different level. It poses new challenges and new opportunities. We’re no longer in a world in which algorithms are used to calculate payments or assist decision-makers through on-screen prompts. We’re now using algorithms to detect (or predict) fraud, employability, recidivism, and so on. And this is happening in both the private and public sectors. There’s a special danger when public agencies rely on AI developed externally, in the private sector. You’d want to be sure that the agency knows exactly what it’s getting, and that unauthorised law-making isn’t happening by stealth or oversight.
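The kind of flow-chart scaffold described here can be made concrete with a minimal, hypothetical sketch. Nothing below comes from a real system: the welfare rule, the field name `last_worked`, and the 90-day window are all invented for illustration; the point is simply that such a system prompts the human decision-maker with a fixed, readable rule rather than making a prediction itself.

```python
# Hypothetical illustration of an old-school expert-system "scaffold":
# a fixed, human-readable rule that prompts the caseworker rather than
# deciding for them. The rule, field names, and 90-day window are invented.

from datetime import date, timedelta


def welfare_prompts(recipient: dict, today: date) -> list[str]:
    """Return checklist prompts for the human decision-maker."""
    prompts = []
    three_months_ago = today - timedelta(days=90)
    last_worked = recipient.get("last_worked")  # a date, or None if never worked
    if last_worked is not None and last_worked >= three_months_ago:
        prompts.append(
            "Recipient worked in the past three months: "
            "check their payments over this period."
        )
    return prompts


if __name__ == "__main__":
    case = {"last_worked": date(2024, 5, 10)}
    for prompt in welfare_prompts(case, today=date(2024, 6, 30)):
        print(prompt)
```

Because every branch is written out by hand, a system like this is predictable and interpretable in the way the answer describes, and it simply has nothing to say about cases its rules don’t cover.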
How is cognitive science relevant to a discussion of AI?
That’s a big question with a significant historical dimension.
To give you the short version, take the comments of Geoffrey Hinton, the “godfather” of the deep learning networks being used today. He has said recently that he’s not really interested in the technology itself so much as in what it can teach us about the human mind. I think that pretty well sums up the connection between the two fields. Neural nets are, of course, loosely based on and inspired by the neurology of the brain. I’m with Hinton, personally, in that I come at the mind as a kind of computer (in part anyway), in the hope of learning something about how it ticks. Others come at computers as kinds of minds, in the hope of building gadgets that can do interesting, useful, and intelligent things.
Do you think that different AI rules are needed for different kinds of AI? For example, does generative AI require different regulations from AI that makes decisions?
I’m particularly attracted to the small-r republican theory of government for AI regulation, courtesy of [technology and law scholar] Jamie Susskind, whose work first made me think about republicanism’s relevance to the AI sphere.
If you’re a (small-r) republican, you’ll espouse a view of liberty as “non-domination.” This means that you’re free to the extent that there’s an absence of actual and potential interference with your decision-making. It’s not enough for there to be no actual interference with your choices; there must also be no one in a position to interfere with them if they wanted to. It’s standard to use the example of a benevolent slave-owner. The slave-owner might be really kind, or maybe he’s very gullible, a total pushover, so that his slaves know exactly how to extract the right concessions from him by saying the right things, cajoling him, and so on. But the slaves still wouldn’t be free, because their freedom would remain subject to the whims of the slave-owner. He’d still have the power to interfere with their choices even if he never exercised it. A point that I first learned from Jamie Susskind is that many AI and big tech firms operate like the slave-owner. They may not use their awesome power nefariously (we can only hope that Facebook, for instance, didn’t knowingly abet the targeted spread of misinformation to sway the outcome of the 2016 election); but that they could use that power is already something to worry about. It means we live in a world that is less free than the one in which those firms didn’t have this power. A republican-inspired political solution would then regulate to curb the potential for this power to be abused.
What is a leading AI risk that you think can be addressed with the law?
The most obvious one is simply the risk that new tools won’t be evaluated before use or monitored once in use. That’s an easy one for the law to address through a regime of mandatory pre-deployment algorithmic impact assessment and post-deployment algorithmic auditing and evaluation.
What sort of legal approaches are you keen on otherwise?
In no particular order, the law could help with establishing:
– An oversight body with standards, accreditation, and auditing functions;
– A professional ethics body for computer science practitioners, complete with powers of admission, expulsion, and other sanctions, as well as rules governing ethical and more general professional standards (a code of ethics);
– Enhanced data protections to deal with inferred data (inferences drawn from the data individuals provide, in addition to the data they explicitly provide) and stricter enforcement of “purpose limitation” (limits on what organizations can do with data after it is provided), to better regulate both data processing and data sharing;
– A mandatory disclosure regime for source code (ideally the disclosure would be to the oversight body I mentioned), including disclosure in a form suitable for “public reason,” so that curious citizens, journalists, judges, and so on can access these materials and understand them;
– A private sector due process regime;
– Stricter rules around and auditing of targeted political advertising; and
– Possible part-public ownership (or enforced restructuring) of the largest tech infrastructures constituting the public sphere (e.g., Twitter, Facebook).
But laws are not universal. How will we address the issue that different countries have different laws?
Right! A lot of the proposals’ implementation will be jurisdiction-specific. But even if you just have the US, the EU, and a modest grouping of Australasian countries, with or without China, signing a multilateral agreement, I think it could go very far. In the US alone, the founders of most of the biggest companies are located within basically a 100 km radius of one another.
Whichever grouping of nations has the most to gain (or lose) from AI has to get in the same room and coordinate a common strategy. This isn’t impossible. It requires talented foreign ministers and secretaries of state to do the hard diplomacy, fully aware that it may take years to pull off.
Think of the East Asia Summit, the Asian Infrastructure Investment Bank, and other such limited, special-purpose associations and multilateral arrangements that have arisen in recent years. Necessity is the mother of cooperation.