When Kai Wang first started graduate school in computer science, he didn’t think he’d be improving maternal health in rural India or developing citizen science initiatives to prevent pollution in Taiwan. But after a chance introduction to nonprofit work, he became convinced of the importance of collaborations between technologists and nonprofits working on social issues.
While the prospect of working on unfamiliar, complex challenges, often with limited resources, might overwhelm most technologists, Wang isn’t intimidated.
“It’s an opportunity to design AI solutions to align with challenges in the real world,” says Wang.
Kai Wang is a 2023 AI2050 Early Career Fellow and Assistant Professor in the School of Computational Science and Engineering at the Georgia Institute of Technology. His work was recognized in 2021 with a Siebel Scholar award and a Best Paper runner-up at the Association for the Advancement of Artificial Intelligence (AAAI) conference.
Wang applies his background in multi-agent systems and computational game theory to developing AI solutions in collaboration with nonprofits working on health and sustainability. His AI2050 project explores how machine learning can be used to inform real-world interventions for social issues. This work exemplifies Hard Problem #4, which involves using AI to solve humanity’s greatest challenges.
The AI2050 initiative gratefully acknowledges Fayth Tan for assistance in producing this community perspective.
What social issues do you address in your work?

During my PhD, I focused on how to use AI to help with societal challenges, including wildlife conservation, environmental sustainability, and public health. I collaborated with nonprofits, NGOs, and government organizations to design AI solutions that align with existing social interventions.
One example is a collaboration with an Indian nonprofit dedicated to maternal health, ARMMAN. They increase awareness of maternal health by sending automated messages providing information about preventative care to support mothers through pregnancy.
We used mothers’ demographic features to decide how messages should be sent and in which order. We also developed an AI model to predict which subscribers were at risk of dropping out of or losing access to the program, prompting us to intervene before that occurred. For example, we might directly schedule a phone call or in-person visit with someone identified as at risk of dropping out, such as a mother living in a rural area. Since 2022, this system has reached over 350,000 people.
How did you first get interested in using AI for socially impactful issues?

Originally, my focus was on game theory, modeling the interactions between different agents. My advisor introduced me to ARMMAN, as the large-scale mobile health problem is also a multi-agent system—there’s us, the nonprofit staff, and the mothers we’re working with.
Once this collaboration began, however, I found that the cultural background and social context mattered more than the technology itself. If we improved the algorithm, but did not align with the cultural background or follow existing interventions, it wouldn’t improve outcomes.
One [influential] experience was during my field visit to the nonprofit’s office. Previously, we thought that we could just send messages before and after delivery. But the nonprofit staff told me we needed to pay attention to the delivery date itself, because mothers traditionally return to their parents’ house during the last month or two of pregnancy, and only come back about a month after the delivery.
I thought this wasn’t an issue at first, but the population we were working with is largely low-income, with husband and wife often sharing one cellphone. When the mother goes back to her parents’ home, the cellphone stays with her husband, and we wouldn’t be able to reach her directly. Knowing the delivery date would allow the nonprofit to ask the mother for her parents’ phone number ahead of time so she’d still be reachable. This experience showed me the importance of social and cultural context.
How do you establish collaborations with organizations working on such complex or sensitive issues?

Sometimes nonprofits don’t have the resources to implement AI systems, and may not have the right understanding of how AI can support their work. On our side, we may misunderstand the social issues they work on.
It’s important to establish a common language and common ground between different parties. We’re not pitching our algorithm to them, but trying to design the algorithm to align with their needs. Both parties need to keep the conversation open so that we can design impactful AI solutions.
What obstacles do you encounter in trying to bring AI technology to nonprofits?

Nonprofits are often resource-limited, both in the technology available to them and in the staff they have to support their community.
On our end, we want to design the algorithm such that it works under these constraints. For example, we’re working with Taiwanese nonprofits on migrant worker health—it’s hard to reach migrant workers, and nonprofits don’t have enough resources to support all of them. We’re also discussing how automated messaging can support migrant workers in the US, a majority of whom are Spanish-speaking. In computer science, we might view these conditions as limiting, but I see them as necessary properties to incorporate into algorithmic design. If an algorithm doesn’t work under real-world conditions, then it cannot be a real solution.
Do you have advice for students who want to apply technical skills to social issues?

Get more exposure and also friends from different fields! If I didn’t have friends in sociology, I wouldn’t be able to connect with the nonprofits they work with. Having connections in different fields gives you future opportunities. Also, in a few years, your friends will become experts in their fields. They’ll be able to teach you about the challenges in those fields, which will facilitate interdisciplinary work.
Another piece of advice is to not treat the obstacles as obstacles, but as opportunities. We perceive resource and technology constraints as obstacles from the computer science perspective, but that’s the reality of the situation. This is an opportunity to design solutions that actually work for the challenges they were designed to solve.
Your work is motivated by a lot of openness—to learning, to new experiences, and to the needs and lives of other people. Can you speak to its importance?

Being open-minded means you’ll always encounter new things, even in the same work. I also self-reflect on my own biases and assumptions, and I think curiosity moves me to understand the knowledge created by other fields. Relatedly, I don’t think computer science is the major force pushing AI for social impact forward; rather, it’s the fields we’re collaborating with. Our job as computer scientists is simply to help them.