The history of powerful technology is punctuated by geopolitical competition. From the development of nuclear arms to the space race, international rivalries spur nations to push technological boundaries—sometimes at the cost of safety. As China positions itself as an ascendant power and the U.S.’s most prominent geopolitical rival, will this trajectory repeat itself in the context of advanced AI? Though some researchers see China’s rapid technological progress as a race towards AI-catalyzed catastrophe, Jeffrey Ding believes that this conclusion is not inevitable, or as straightforward as it might seem.
“The motivation for this project was this puzzling finding that in…high risk technology domains, including civil aviation and nuclear power, China has achieved a remarkable safety record,” says Ding. “[China] often leads the world in a lot of these metrics.”
Jeffrey Ding is a 2023 AI2050 Early Career Fellow and Assistant Professor of Political Science at George Washington University. His book,“Technology and the Rise of Great Powers: How Diffusion Shapes Economic Competition”, about how technological revolutions affect global distributions of power, was published in August 2024 by Princeton University Press.
Ding’s research agenda covers emerging technologies and international security, the political economy of innovation, and China’s scientific and technological capabilities. His AI2050 project aims to investigate factors that enable or hinder international cooperation in managing risks from powerful technology. By understanding how international cooperation shapes China’s approach to managing risk in nuclear power, synthetic biology, and space technology, Ding hopes to extrapolate conditions that will facilitate U.S – China cooperation over AI safety and security. This research addresses Hard Problem #8, which concerns geopolitical competition and cooperation on AI.
The AI2050 initiative gratefully acknowledges Fayth Tan for assistance in producing this community perspective.
When it comes to national powers and technology, the dominant narrative tends to be competition between nations. In this context, what does “competition” refer to?

Competition in a technology like AI, which is so general purpose, can take many forms. The most direct form of that competition comes from emerging technologies in military capabilities. If one country gains a disruptive edge with a new technology, then they gain a decisive advantage in terms of military power—the most obvious example of this is the atomic bomb.
For AI, one of the most important forms of competition is economic. Past general purpose technologies have inspired huge waves of productivity growth, and they’ve affected which country is able to sustain economic growth at higher rates than their rivals. There’s also that thread of great power competition in terms of whether the U.S. or China can attain or maintain economic leadership.
Your project focuses on cooperation instead of competition, which is somewhat unusual. Why do you think competition is the default framework when it comes to geopolitics and technology?

When we think about protecting national security, we naturally gravitate towards the way the US can ensure its national security—outcompete its rivals and make sure that it continues to be the number one technological power. There’s a lot of incentives within government and national security circles dedicated towards that mission of outcompeting our rivals. I think it’s easier to imagine scenarios of conflict leading to risks to national security than it is to imagine things like accidents and miscalculations leading to threats to national security. That’s my initial analysis, at least.
One assertion you aim to investigate is that “the danger of AI accidents is the most severe in China.” What was the rationale for this conclusion and why do you think it merits closer investigation?

The qualities associated with good governance of high-risk technologies like nuclear power plants or chemical production plants are: strong democratic institutions, a free and robust press and civil society that can provide these safeguards against governments or corporations that try to hide safety violations in these high-risk industries.
That combined with the historical example of Chernobyl happening in the Soviet Union, and the Bhopal disaster happening in an emerging economy like India—China is both a developing country where technological advances may outpace regulation, and also an authoritarian country where we might lack democratic institutions that safeguard against accidents. I think that’s what leads to predictions and assumptions that a country like China will be the most likely source of an AI accident.
It’s a surprising finding that, at least in some of these high risk technologies, our expectations about China’s safety record don’t necessarily match the reality on the ground. The U.S. Federal Aviation Administration, when it was trying to help India improve its aviation safety regime, pointed to China as a model. The project is trying to examine how China has been able to achieve these safety gains in other high risk technologies, where we’ve seen major accidents in other countries.
When you refer to cooperation between great powers, does this refer to cooperation at the governmental level? Or does this extend to other sectors as well?

We often assume that the most important cooperation among great powers happens at that inter-governmental level. But in nuclear safety and civil aviation safety, a lot of the important work was being done through international industry associations.
When we think about businesses getting together and trying to self regulate, sometimes it’s greenwashing or a public relations exercise. One of the initial hypotheses I’m exploring is that in industries where companies have this shared reputation on safety issues –where one airline’s crash is going to affect the safety reputation of every airline around the world – these international industry associations have a really strong incentive to try to raise the safety performance of the weak links in emerging economies.
Some of the characteristics used to assess risk is a democratic society and the presence of a robust free press. Does your research also look at “bottom up” motivations to cooperate?

What we’ve found, at least initially, is that these international industry associations work with very limited transparency, which makes it hard to research. This is in part because they’re trying to avoid some of these bottom up processes and scrutiny from external stakeholders like the general public, the media, and advocacy groups. They want firms to share incidents, share their safety, performance data, in these closed settings where other firms can apply peer pressure. But they wouldn’t want that information to be disclosed to outside stakeholders, because they’re worried that it gets misinterpreted and paints the entire industry in a bad light.
To give an example, if you’re a poorly–performing nuclear power plant on safety issues, you might might not want to submit your data for benchmarking to see where you rank in comparison to your peers if you know that data is going to be disclosed to the general public, but you will be more willing to do that in a confidential setting where you’re only being judged by your peers. It’s trying to strike a very difficult balance, because we know in all these other governance settings, public transparency is very important. In a lot of the other settings, we rely on public naming and shaming to encourage firms to do better on things like sustainability. But at least in some of these high risk technology domains that I’ve studied, it seems like private naming and shaming is a more effective mechanism.
I think one key difference between nuclear technology and AI is that the risks of AI are a lot less tangible or well-defined. Do you think that this will be a challenge to cooperation?

It’s difficult to do proactive and preventative work on AI safety issues in part because we don’t have a concrete scenario of what an AI accident looks like. Thankfully, we haven’t seen that connected to a severe loss of life, which historically has been the main tipping point for new governance initiatives in other industries. For example, in nuclear safety, the World Association of Nuclear Operators forms in the wake of Chernobyl. Fortunately, we haven’t seen an AI accident on that level, but I think that that makes it hard to do this type of proactive and preventative work.
I think the thing that’s consistent across safety issues across all these domains is a clear concern that humans lose control of these technologies—we lose control of a nuclear power plant, we lose control of a plane, we lose control of hazardous chemicals. I think there is a very real concern that we lose control of highly powerful AI systems. OpenAI had an example of CoastRunners, a game for an AI agent to complete a boat racing exercise and collect as many points in as little time as possible. The reinforcement learning algorithm behind this AI system learned that the best way to gain points, as specified by human designers, was to repeatedly crash into a series of three buoys. You can imagine this flawed reward specification on the part of human designers translated into a real world application, and the potential accident risks.
Another challenge that you articulated in your proposal was that cultures around safety in countries could vary. Could you speak to specific examples of that?

One really important part of safety cultures is the culture around reporting incidents, and the reporting procedures for when things go wrong. That has been an essential part of the US in improving the safety record of civil aviation, because operators and technicians and pilots are encouraged to and protected when they report that something has gone wrong. In other safety cultures, that’s more of a challenge. In China, that was one of the main roadblocks to improving aviation safety—people were afraid of being punished for reporting that something had gone wrong or that someone else had not followed the correct procedures. In some of the initial interviews I’ve done on this subject, the way China got around that was to basically implement more punishment for not reporting when something goes wrong.
They got to the same end result of a very rigorous reporting standard when it comes to aviation incidents, but through different means and a different set of norms. That’s what I mean by different safety cultures—the norms, the standard set of procedures, the overall environment surrounding the organizations tasked with managing accidents in high risk technologies.
You write a newsletter, ChinAI, where you translate Chinese writing on AI. What was your motivation in starting that newsletter?

I was just beginning to research China’s AI development in 2017, which feels like a lifetime in AI years. I realized that when trying to understand China’s AI development strategy, there was so much work being published in Chinese—there was a 500-page book published by a key government-affiliated think tank and Tencent Research Institute, which is the think tank associated with one of China’s biggest tech giants. Nobody had read through this 500-page book on China’s AI strategy from some of the leading Chinese thinkers on the subject!
I started sharing snippets of that book in emails to colleagues at the Center for Governance of AI at Oxford University, where I was at the time, and there was a lot of interest. It started from an email to a few colleagues and their friends, and it’s really snowballed from there.
Have you translated anything for the newsletter that you or your readers found particularly surprising?

There are a lot of groups in China trying to develop their own evaluations for AI safety issues, such as how to even determine if a model has a capacity to potentially lie to its human users. I think a lot of readers might have assumed that that type of work was not being done, that there’s a perpetual greenlight for moving fast on technology in China. That’s been a surprising finding—there is growing momentum towards taking AI safety concerns more seriously in China.
Some of the things that you've translated are from ordinary people in China and their opinions of AI. Why translate these everyday anecdotes?

The simplest answer is that it’s much more interesting to read blogs and magazine articles than it is to read white papers and government documents! I still translate a fair amount of white papers because they provide useful information on industry trends and government regulations. But it’s so much more interesting to read what people like me, who happen to live in Hangzhou, read, or want to read, on a daily basis about the AI ecosystem.
The more abstract answer is that I think to fully understand what is happening in China, you have to go beyond the party state and the government. If you only focused on the Chinese government, and our expectations about the Chinese government, it would be hard to even think about how China became a leader in civil aviation safety. It’s trying to go beyond this abstract monolith of China. Who are the individuals? Who are all these different types of groups? What are they doing? What are they thinking about? What are they reading? That’s the motivation that connects both the newsletter and this research project.