Community Perspective – Karina Vold

Q&A with Karina Vold, AI2050 Early Career Fellow

Dr. Karina Vold is a philosopher of cognitive science and artificial intelligence. She has written thought-provoking articles questioning the nature of human existence in a world humans increasingly share with agents of their own creation.

In one of her most recent articles, Vold compared the excitement over ChatGPT, and its rapid adoption by more than 100 million people, to that of "children released to play on a new jungle gym" who "are showing one another (alongside the owners of the software) new and potentially profitable ways of using it."

“Chat’s potential uses are endless and still being envisioned. But make no mistake about the source of all this ingenuity. It comes from its users — us!” she exclaims.

As an assistant professor at the University of Toronto's Institute for the History and Philosophy of Science and Technology, Vold works at the intersection of human and machine thought: how we can be sure that these systems operate safely (Hard Problem #2) and what it will mean to be human in the age of AI (Hard Problem #10).

“As AI continues to outperform us in areas such as developing scientific theories and proving mathematical theorems, we risk losing access to this advanced knowledge,” she says. “We must develop strategies that enable us to keep pace with our machine counterparts, to avoid the catastrophic potential of contemporary AI systems overtaking human performance and replacing us as thinkers.”

In other writing, Vold has explained how AI may pose an existential risk to humanity, and has argued that AI-based healthcare recommendations need not be explainable, although explainability would make humans more willing to trust them.

Vold has an edited collection of papers on AI safety at PhilPapers.org. 

 

Learn more about Karina Vold:


As a philosopher of AI, what do you think is the most pressing question facing us about AI that we are likely to solve in the near future?
Karina Vold, AI2050 Early Career Fellow

Perhaps one of the most pressing questions around AI that needs to be solved is the problem of governance and regulation. Currently there is little to no regulation around what kinds of AI systems can be built, who can build them, how they can be used or deployed, what data should be off-limits for training systems, and so on. These are practical challenges that could be addressed by policymakers and governments with the right fortitude, understanding, and collaborative efforts. I hope that we will see this happen in the near future.


Are there important questions that you think we won’t be able to answer until we can discuss them with an AI that is self-aware?
Karina Vold, AI2050 Early Career Fellow

Prima facie, I don't think it is a good idea to build an AI system that is self-aware in the sense of being phenomenally conscious, or capable of having experiences. Subjects with these types of inner experiences often enjoy certain moral rights, such as the right not to be subjected to aversive states like pain, and/or the right to pursue their own goals. Evolution has already produced many different kinds of biological creatures that are self-aware in this sense (consider the sophisticated cognitive skills of octopuses, or the appearance of sentience in bee colonies), yet we fail to adequately protect them.

I think our priority should be to use AI as a technological tool to better our own well-being and the well-being of other living species, rather than aiming to make it a living or conscious species of its own.


How do you get technologists to think hard and deep about philosophical questions?
Karina Vold, AI2050 Early Career Fellow

Technologists themselves do not necessarily need to practice philosophy in order to create safe technological systems, but they do need to be open to collaborating with experts from other disciplines, including philosophy and the other humanities. This might start with a recognition that not every problem can be solved by building a new technology (or designing a new algorithm, say). Not every problem has a computational solution. Furthermore, every technology and every scientific practice is embedded in a society, and both impacts and is impacted by that society.

Recognizing these realities is an important starting point for appreciating the need for interdisciplinary teamwork and problem-solving.


Do philosophers in this area need to understand AI technology, or can they get most of what they need to know by watching science fiction movies?
Karina Vold, AI2050 Early Career Fellow

I think that it is important for philosophers of science and technology to have a certain level of understanding of how the science and technology that they’re writing about works in practice.

In the case of artificial intelligence, humanities scholars do not need to be experts in programming, coding, or software engineering, but they do require a deeper technical understanding of how the technology works in order to appreciate the nuances of the ethical issues that emerge from applications of AI, as well as the potential technical solutions that might be available.