Community Perspective – Huan Zhang

Q&A with Huan Zhang, AI2050 Early Career Fellow

Huan Zhang received his AI2050 Early Career Fellowship while a postdoctoral researcher at Carnegie Mellon University, and commenced his AI2050 project when he became an Assistant Professor at the University of Illinois Urbana-Champaign. He is working on Hard Problem #2 (related to issues of AI safety, security, and trust).

Zhang’s research uses mathematical approaches to create trustworthy AI systems for mission-critical tasks such as guiding vehicles, controlling chemical plants, or dispensing medications. Academics call these approaches “formal verification techniques.” In practice, the idea is similar to writing a mathematical proof for each AI system showing how the system will behave across a wide range of inputs.

Zhang is leading the development of α,β-CROWN (alpha-beta-CROWN), a program that can verify the behavior of neural networks. The tool won the International Verification of Neural Networks Competition (VNN-COMP) in 2021 and 2022.

Learn more about Huan Zhang:

Your web page says that α,β-CROWN can verify neural networks with millions of neurons. Is that big or small?

It is rather big, considering the hardness of this problem. Verifying a neural network requires us to find a rigorous proof that the network’s behavior satisfies its specification. For example, a formally verified aircraft control system can guarantee that the flight under its control remains safe in the presence of small, unanticipated deviations in sensor readings. It’s a pretty challenging mathematical problem. Usually, one neuron becomes one variable in this problem, and now imagine writing a mathematical proof involving millions of variables. Five years ago, we could only verify neural networks with a few hundred neurons, but now we are a few orders of magnitude better, thanks to the novel and efficient algorithms my collaborators and I developed over the past five years. Although I am still working on tackling even larger neural networks, verifying networks with millions of neurons already enables practical use of formal verification in many mission-critical AI applications. For example, some of my recent collaborations with domain experts involve verifying neural networks in robotics, control, medical, and aerospace applications.
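To make the idea of “proving behavior under a range of inputs” concrete, here is a toy sketch of interval bound propagation, one of the simplest bounding techniques behind neural-network verifiers. The network weights, perturbation radius, and output threshold below are all invented for illustration; real tools such as α,β-CROWN use far tighter linear bounds and branch-and-bound to scale to millions of neurons.

```python
# Toy interval bound propagation (IBP) through a tiny 2-2-1 ReLU network.
# We certify that for EVERY input within eps of x0, the output stays below
# a threshold -- a miniature version of a formal specification.

def interval_affine(lo, hi, W, b):
    """Propagate elementwise input intervals [lo, hi] through y = W x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # Lower bound: positive weights take the input's lower end, negative
        # weights take the upper end (and vice versa for the upper bound).
        l = bias + sum(w * (lo[i] if w >= 0 else hi[i]) for i, w in enumerate(row))
        h = bias + sum(w * (hi[i] if w >= 0 else lo[i]) for i, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# Hypothetical weights for a 2-input, 2-hidden-neuron, 1-output network.
W1, b1 = [[1.0, -1.0], [0.5, 0.5]], [0.0, -0.25]
W2, b2 = [[1.0, 1.0]], [0.0]

# Specification: for all inputs within eps of x0, the output is below 2.0.
x0, eps = [0.5, 0.5], 0.1
lo = [x - eps for x in x0]
hi = [x + eps for x in x0]

lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = interval_relu(lo, hi)
lo, hi = interval_affine(lo, hi, W2, b2)

print(f"certified output range: [{lo[0]:.3f}, {hi[0]:.3f}]")
print("specification holds" if hi[0] < 2.0 else "cannot verify")
```

Here each neuron contributes an interval variable, which hints at why millions of neurons make the proof hard: naive interval bounds grow looser with every layer, and the research challenge is keeping them tight at scale.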

How would a company use α,β-CROWN to verify the neural network used to drive a car? Would the program run in the car, or at the company that’s making the car?

In typical settings, the company that makes the car runs the verification algorithm to guarantee the safety or correctness of the neural network before putting it into production. As an analogy, we can treat a neural network in a car as software: it needs to be thoroughly tested and verified before being delivered to the end user, just as companies do with other software systems.

Formal verification seems like something that’s really important. Do you think that everyone working in AI should be using it?

Formal verification is really important, but obtaining formal verification guarantees is also a very involved process. For non-critical systems, such as using AI for ads and product recommendations, formal verification is overkill. However, if we are making an aircraft or a medical device, verification is essential due to safety needs or regulatory requirements. The lack of formal verification tools for AI would prevent its use in these mission-critical systems.

Should AI developers know about it? And if so, then what should they know?

AI developers should know that formal verification is an essential tool for building trustworthy AI systems. Broadly speaking, AI developers should be aware of trustworthiness issues (such as robustness, safety, and fairness) in AI models, and should not blindly trust those models. For mission-critical systems, formal verification methods are essential to prove their trustworthiness rigorously.

You also helped organize both the 2022 Workshop on Socially Responsible Machine Learning at the International Conference on Learning Representations (ICLR) and the Workshop on Trustworthy and Socially Responsible Machine Learning (TSRML). Both workshops focus on issues like bias and fairness in machine learning. Is there a role for formal verification in helping to address those topics?

These workshops focus on raising awareness of the issues that arise when applying AI in scenarios with high social and ethical stakes, and on encouraging researchers to develop novel methods to address them. Formal verification plays an important role here because it is a powerful mathematical tool for providing rigorous guarantees about a machine learning model’s behavior, including fairness, robustness, and safety, which are highly desirable when applying AI to sensitive and human-centered applications. The workshops included several papers using formal verification methods to tackle the challenge of building trustworthy and socially responsible AI.