Community Perspective – Stephanie Dinkins

Q&A with Stephanie Dinkins, AI2050 Senior Fellow

How are AI systems built? For Stephanie Dinkins, the question evokes not datasets and functions but rather the principles that underlie the construction of AI systems, and how those principles shape the ways in which people interact with them. 

“Can we create data and AI systems of care and generosity?” Dinkins asks. “Because these systems are systems we’re all living with, under, around.”

Stephanie Dinkins is a 2023 AI2050 Senior Fellow and transmedia artist who creates experiences that spark dialogue about race, gender, aging, and our future histories. She exhibits internationally, publicly advocates for inclusive AI, and is Kusama Professor of Art at Stony Brook University. 

Dinkins’s AI2050 project, Expanded AI Narratives, is a collaborative, process-based series of art installations, documentaries, and dialogues that envision what an AI ecosystem built on equity might look like. It invites people from diverse walks of life to impart stories of their choosing to an AI system, allowing participants to take on an active role in shaping the system’s understanding of the world. By increasing participation and inclusion in AI, it addresses Hard Problem #7, which concerns the responsible development and use of AI. 

Introducing stories that capture the complexity of human experience into AI development—in all their contradictions, conflict, injustice, and prejudice—might seem daunting. Even we humans often fail to reconcile the messier, uglier parts of our own experiences. But to Dinkins, that difficulty only underscores the importance of the endeavor.

“I don’t want to acknowledge the alternative of not seeing complexities. Without that, you can just see a tunnel [that’s] getting thinner and thinner, and we have to go through a smaller and smaller space where everything looks and feels the same—some of us are never going to meet the standard,” says Dinkins. “How do we function in such reductive technological ecosystems? For me, one solution is to try to build in complexities.”

The AI2050 initiative gratefully acknowledges Fayth Tan for assistance in producing this community perspective.


When most people see the words “AI” and “art” together, they tend to think that AI generates the final product. But in your work, AI is part of the process. I was wondering if you could talk about your work from that perspective?

I started working with AI back in 2014 or so. AI has always been process-based for me, about trying to figure it out, about seeing how it works—then about building with information and ideas that seemed precious or community-sustaining to me. What that means for me now, as these generative systems come along, is, well, how do I collaborate with the system? How do I not only type in a prompt and accept what comes out, but create prompts through process and see how far I can push them? That might mean pushing machine learning systems via the prompt, or trying to tinker with the backend software. These days, it means a lot of different things—iterative, process-based prompting, interjecting code, ignoring generally accepted computer science conventions, or otherwise trying to manipulate a system so it serves my needs better, as opposed to accepting generative systems as tools to be consumed as is.


For people without an art background, I think that concept might be a little difficult to grasp, because people aren’t really exposed to the process of creating art, or to how, for quite a lot of pieces, the process is the work. Could you speak to the idea of “playing in the space” [with AI] instead of using it as a tool?

Let’s take a project of mine, Not The Only One, which is a chatbot. When I started, I thought it was going to be an AI entity that was a memoir of my family—we did oral histories, got data from the family, and made a very specific dataset. We are a Black family, and when I looked at foundational data that was available to me, I did not feel I could put my family’s history on top of [that] data. What that meant for me was doing some research about available datasets. What biases do they contain? Are they good enough to support something so rich and dear to me? People suggested things like, well, you need a really large dataset, try Wikipedia. But for me, Wikipedia is a space that gatekeeps in very specific ways and holds knowledge in a very specific way. 

Then the process became, how do you move this project forward in an environment that you don’t feel is good enough for your Black family’s history to sit atop? And that started me on the process of building some kind of dataset, knowing that I’m never going to build one large enough to make the project seamless—but feeling that the attempt is worthwhile. I pulled together data from different sources that touched my family in one way or another, or that we touched, and made this wonky dataset of 80,000 lines of information, which is not enough. 

That means I never got my true memoir of my family. I could fix it with technology available now—but that’s not what’s interesting. When I put this project in public, the ways in which people interacted with it were really different from the ways that we interact with something like Siri or Google Home. People seemed to recognize that it was “trying” to answer them and that it couldn’t quite—they would try to coax it and coach it, and they showed it a lot of grace. I thought that was an interesting space—children often yell at Siri and demand what they want from it. The use case becomes this counterpoint: What does it take for us to develop and nurture technologies that engender support?

I started asking computer scientists—why can’t we build care and generosity into an AI system? Why can’t an AI system be based on love? Their reply is often a wry chuckle. But we’re pretty good at building systems that are punitive, or systems that make money for us. Why is the opposite not possible? Why is that laughable? For me, the process of building raises questions that help improve the thing that I’m making. I don’t necessarily mean making it more ordinary, purely functional, but making it more of a thing that might point away—to something different.


You draw on communal knowledge, like oral history, folklore, and personal memoir, in your work. Why do you think it’s important that AI systems engage with that type of knowledge?

When I first started, it was about trying to make representation in the systems that reflected my family and my culture in a real deep way, versus the kind of politically correct flattening that often happens in AI systems. It became about, well, what do we know? What is important for an AI system to know about the communities I care about so that they might care about these communities better?

That comes to what I’m working on with AI2050, [about] these new AI narratives. Our stories are our algorithms, and they’ve [been] with us for millennia. We’ve repeated those stories, so they get ingrained into us as systems. That started me thinking—what stories are we telling our machines? What are the more nuanced, more detailed—I’m going to use the word “true”, although I don’t like the word at all—but let’s say…community-centric, deeper stories that can inform a system [such] that it can understand a community better? I have a vision of a wide variety of communities doing the work to get the intelligent systems to have deeper knowledge of their ideas, their understanding of the world, their ways of being.


[The interactions in] your work push beyond this idea that you can only consume the output of AI, into taking a more agentic role. Not necessarily being in control, but feeling more like an equal partner in a dialogue. Why is feeling like you are an agent, that you have autonomy, so important in these interactions, particularly in [the context of] marginalized communities?

I hope to build this idea that agency is possible. I’m not saying the most agency, but some. Otherwise, you’re living in an oppressive society, where it feels like you can’t do anything to change anything, which leads to hopelessness, which leads to disenfranchisement. Which leads to not participating in a society, or participating only to the extent that you’re allowed, versus taking on your full power. 

I want to build this idea, especially in Black communities, that there are pressures from all different directions, but we have a certain amount of power and agency that we can express. It might not be direct in terms of how it changes the system. It might not be 100%. But we have to see ourselves as entities with agency in the technological futures we are currently creating at an exponential pace.

I always say I’m dealing with my grandmother’s philosophies. She was a Black woman born in 1913, who had to make space for her family in a hostile world. She tried to make strategies for building community with folks who were not necessarily eager to build community with her. Sometimes that meant enticing them using her garden. Sometimes that meant running around them. Sometimes that meant being headstrong, thinking, “I can change your mind.” Her example demands considering a multitude of paths.

People and the media are often telling us that we’re just going to be dominated by advanced AI. What does that do to our incentives for living and going on? Maybe that’s a space we can work in, and make more examples of what could be. As a Black woman, I often receive the message that the world is against me and I might as well not try. That’s just one version of a desperate, sad world we often reinscribe, consciously and unconsciously. As a society, we regenerate negative, bias-laden stories that demoralize people instead of empowering them. So I’m going to at least try to empower people with stories, and encourage others to gift some of their specific, often overlooked narratives to the AI ecosystems we live in, to help them become more supportive of the global majority. Let’s use this technology to try to create that space.