Community Perspective – Baobao Zhang

Q&A with Baobao Zhang, AI2050 Early Career Fellow

Today there are a growing number of ethical principles related to AI, released by governments, firms, and civil society groups. Nearly all agree that AI systems should benefit the public while minimizing potential harms, and there is broad agreement that some kind of regulation is needed. But who is there to represent “the public” when leaders in government, industry, and academia start to hammer out those rules?

One approach being tested by AI2050 Early Career Fellow Baobao Zhang is to ask everyday people from diverse backgrounds through a systematic, formalized method called “deliberative assemblies.” People will be chosen at random, offered an opportunity to learn about the basic policy questions surrounding AI regulation, and then asked for their opinions. This work addresses Hard Problem 9 (concerned with governance and our ability to harness AI for the benefit of society).

“Participants will learn from experts and deliberate issues related to high-risk AI systems and applications,” says Zhang, an assistant professor of political science at the Syracuse University Maxwell School of Citizenship and Public Affairs. As Zhang noted in summarizing previous research, “respondents in different regions and cultures have divergent preferences regarding autonomous vehicles’ behavior in moral dilemmas” – for example, when a car would have to choose between avoiding a collision with a child running across the road, avoiding innocent bystanders, or prioritizing the safety of the driver. This research “complicates AI ethics by revealing that consumers and voters disagree about how AI systems should be developed and deployed.”

Many of the thorniest questions involving the deployment of AI cannot be answered by science, math, or economics; they turn on each person’s ethics and values. “Developing AI to benefit the public should involve citizens as key stakeholders in shaping the future of the technology,” she says.

Although governments and private organizations have long used surveys to measure public sentiment, most members of the public don’t have the expertise, time, or necessary background information to make deep, reflective decisions. Some organizations also use focus groups, in which individuals are brought together with a moderator for a few hours — and are typically paid a small amount of money for their time. Zhang summarized some of the previous work to measure the public’s opinions towards artificial intelligence in a chapter that she wrote for the Oxford Handbook of AI Governance.

Zhang’s deliberative assemblies are similar to focus groups, except that the participants commit to a sustained engagement over the course of several months. This gives them an opportunity to develop a deeper understanding of AI and the related issues. It also helps create a psychological space in which the participants feel sufficiently at ease to share their thoughts. Participants will be paid a stipend for completing the 40-hour public assembly and will have access to additional resources, such as high-speed internet, loaner laptops, and childcare funding, to make participation accessible.


Learn more about Baobao Zhang:

Not many people have heard of deliberative assemblies. How old is the approach, and how widely are they used?

Deliberative democracy is an approach to political decision-making that emphasizes deliberation and discussion among citizens rather than simply voting. The origins of deliberative democracy can be traced back to ancient Athenian democracy, although only the small proportion of the city-state’s population who counted as citizens could participate in deliberations. However, the modern practice of deliberative democracy dates back to the 1980s and 1990s, when political theorists such as Jürgen Habermas and James Fishkin began to develop the idea in more detail.

One of the key components of deliberative democracy is the use of deliberative assemblies, which are groups of citizens who come together to discuss and debate a particular issue. These assemblies are typically randomly selected from the population, in order to ensure that they are representative of the wider community. The goal of these assemblies is to encourage citizens to engage in informed and respectful discussion, with the hope that this will lead to better public policy outcomes. Deliberative methods have been used in 27 countries across a range of policy contexts, such as public health, automation, urban planning, climate and the environment, and digital technologies.

How do you recruit people? Is there a way for someone who reads this Q&A to sign up?

We plan to recruit participants by first sending a recruitment survey to a large number of respondents randomly selected from the US adult population. Of those who volunteer to participate in the public assembly, we will randomly select the participants using an algorithm that balances equal probability of selection with representativeness on key demographic variables, such as gender, race/ethnicity, and geographic region.
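The selection step Zhang describes, randomly drawing volunteers while steering the panel toward demographic representativeness, can be sketched roughly as follows. This is an illustrative toy under assumed inputs, not the project’s actual algorithm; the function name, the weighting scheme, and the data format are all invented for the example.

```python
import random


def select_assembly(volunteers, targets, size, seed=0):
    """Illustrative sortition: draw volunteers at random, weighting each
    candidate by how much they move the panel toward demographic targets.

    volunteers: list of dicts, e.g. {"id": 1, "gender": "F", ...}
    targets:    {attribute: {category: desired_fraction_of_panel}}
    size:       number of panel seats to fill
    """
    rng = random.Random(seed)
    pool = list(volunteers)
    panel = []

    def weight(person):
        # A candidate whose categories are under-represented on the
        # current panel gets a higher weight; the 0.1 floor keeps every
        # volunteer's selection probability above zero.
        w = 1.0
        for attr, quotas in targets.items():
            have = sum(1 for p in panel if p[attr] == person[attr])
            want = quotas.get(person[attr], 0) * size
            w *= max(want - have, 0.1)
        return w

    while len(panel) < size and pool:
        chosen = rng.choices(pool, weights=[weight(p) for p in pool], k=1)[0]
        panel.append(chosen)
        pool.remove(chosen)
    return panel
```

For instance, with a 50/50 gender target and a panel of 20, the draw stays random (any volunteer can be picked) but the resulting panel lands close to 10 of each category.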

How much education or training does a person in the assembly need in order to make a good decision?

We plan to have eight expert witnesses with expertise in machine learning as well as AI ethics and policy to present to the assembly participants. The participants will be provided a general overview of what AI is (including generative AI) as well as information about AI systems involved in three domains: internet search, face recognition and biometrics, and health care. The expert witnesses will write short memos for the participants, present to the assembly participants, and answer questions in a Q&A session.

Do you test assembly members before you allow them to vote?

No, we do not test assembly participants as a requirement for them to vote on policy recommendations. We will conduct surveys before and after the members participate in the assembly to assess whether the assembly increased participants’ level of knowledge about AI.

What’s the mechanism for output of the assembly to inform policy?

We plan to produce a public report based on the findings of our deliberation. The report will detail the type of AI risk framework that the public assembly participants recommend the US government adopt. This report will be widely distributed to policymakers, researchers, and other stakeholders in the field of AI policy. We will also host a public launch event in Washington, DC.

ChatGPT seems pretty good about coming up with policy pronouncements. Why can’t we just use AI to tell us what our community norms should be?

While AI and machine learning can be useful tools in public policy, they cannot replace human decision-making entirely. One of the key challenges in AI policy, particularly in a country like the US, is that people from diverse backgrounds can have conflicting views on what is best for society. In order to create constructive solutions, we need to engage in dialogue and deliberation.

Deliberative assemblies provide a space for citizens to come together and discuss complex policy issues. By bringing together people from diverse backgrounds, we can better understand the issues at hand and work towards solutions that are acceptable to a broad range of stakeholders. Additionally, deliberative assemblies can help to build trust between citizens and government institutions, which is crucial for effective policy-making.