Bellagio Conversations in AI

Stephanie Dinkins on AI, Artistic Expression, and Making Criticism Count

As an award-winning transdisciplinary artist, Stephanie Dinkins engages with new technologies like AI as they impact our thinking about race, gender, disability, society, and history. If a technology isn’t equitable, can we still use it for equitable ends?

Stephanie’s Bellagio Reflections: “The project I was, and still am, working on is an app that collects and analyzes data in a non-conventional and open way. My focus was on figuring out how to format and collate information such as photographs or stories so that they could become useful data for me and others. I was contemplating cultural touchstones, and how important they are for people to define themselves in the AI ecosphere.

“In group situations, I often observe more than actively talk. But we were a very interesting cohort of residents, and those observations and conversations were both super important to my process. There were people in the field of AI – like Mimi Onuoha and Diana J. Nucera, a.k.a. Mother Cyborg – but there were also folks who were not, like Sherrilyn Ifill, who’s writing a book about civil rights. We had similar areas of interest, expressed through different mediums and outlets, and it was fascinating to compare our approaches.”

Stephanie argues that the need to be critical of AI systems for their lack of inclusivity shouldn’t preclude engaging with them.

As a citizen, I’ve been looking at AI and data since 2014. I came across things within society that made me ask questions about AI systems, and eventually I started to ask how we get these systems to recognize the people who aren’t the main contributors – that is, the people whose data is extracted by a system that is reading and surveilling them, but over which they have no purposeful control. I had questions upon questions about what regular folks might be thinking about AI. More recent developments like ChatGPT, though, have made me realize that the question has changed: now, how do we get people to recognize what’s possible?

I’m part of the DISCO Network, which stands for “Digital Inquiry, Speculation, Collaboration, and Optimism.” We’re a group of artists, scholars, and other practitioners concerned with racial inequality, histories of exclusion, disability justice, and techno-ableism. We’re folks of color – disabled folks, mostly – and we’ve all realized that AI isn’t about us. At the moment, we’re collaboratively writing a book about AI, and through that process I’ve realized that we need to find ways to reorient the narrative. I get the idea of criticism, of holding the feet of the people who make these systems to the fire, yet there are ways we can have an impact now that seem to be overlooked.

Initially, we just wanted to get around some of the system’s gates – for example, by using Do Anything Now, or DAN, prompts to get ChatGPT to speak in a Black vernacular as authentically as possible. But it can’t only be about expecting to be represented and criticizing the fact that we’re not. How can criticism alone push the system to act differently? The strange boon of not being represented is that it forces you to think outside the box, which creates innovation that leads to progress. It’s still true that we need to ask for more depth, but we also need to really push emerging technologies, because there are possibilities available to us already.

For example, I’m always thinking about the idea of care, and the data we create to help support the care system. So how do we, regardless of who we are, start to inform that? To me, that means engagement with the system. We know that these AI systems in particular are behaving differently from previous technologies and making their impact felt quickly. Change is a constant companion, so let’s flow with that change. Ingenuity can be quite powerful. Criticism alone runs the risk of widening the gulf between what’s possible and what you could contribute. I believe we should do what we can from wherever we are. I don’t want protest over what the technology might be doing to drag us away from the opportunity to change it.

Of course, we know that these systems are not necessarily “on our side,” but we also know what happens when creative people who need something to happen get a hold of, and start using, powerful technologies in ways that really serve their communities. It changes them.

  • If we want AI to benefit most people on the planet, we have to take on the difficult task of fully recognizing the inherent value of all people and work towards resolving, or making peace with, our differences. Only after taking on that grand challenge can AI technologies even attempt to work for the entire human family.
    Stephanie Dinkins
    Transdisciplinary artist; Professor of Art at Stony Brook University

As an artist who’s playing with some of these generative systems, it’s interesting to run up against the bounds of the “content policies” these widely available systems put in place. Various companies seem to be trying to curtail some of the biases people have been advocating against, like the way AI systems represent Blackness, and trying to insert equity. It’s a clumsy process. It feels like they’re just taking a giant ax to the problem. They decide, “Okay, this term or idea is not allowed, that’ll make the equity people happy.” The problem is that the fix is so broad it ends up truncating history, especially within systems that regenerate and re-inscribe inequity themselves.

If I were a maker of one of these generative systems, I’d be very specific in the ways I would try to make those systems more equitable. For instance, if we said, “No use of the n-word is allowed,” how would we historically refer to what has been done and said? With policy, the tendency is to make small, piecemeal rules that end up not holding in truly significant ways. Instead, it would be useful to ask people, “What does AI need from you?” That way, we can factor in not only what the technology is, but how it’s impacting society globally – because our international boundaries are still there, but these systems are bigger than that. AI is global.

One thing that concerns me is how distracting and distracted this subject can get. We’re pointed towards all the ways the technology feels pressing, even threatening – it’s taking jobs, for example. By hyper-focusing on that, we get pulled away from the foundational ideas we urgently need to attend to. How do we keep our eye on the prize of what’s really going on at the fundamental level beneath these technologies? How do we not get distracted?

Perhaps that’s where the public sector comes in. I would urge the public sector not to close its eyes, and to try to understand this technology at least a little bit. There are enough systems around that you can play with. But we also need to think about what change means to us. I’ve come to the conclusion that continuous learning is no joke, and that we’re going to have to be fluid in order to find ways to be not only survivors within this system, but thrivers.

That doesn’t mean not fighting the threatening thing that is coming directly at you, but rather finding ways to cooperate and adapt so that, instead of becoming obsolete, you still have a place in this world. We can see obsolescence coming at us in many different ways. How do we craft our responses in ways that position us to benefit rather than being sidelined? And how do we avoid getting stuck in a loop of fighting AI technologies until we relent and admit that we’ll have to figure out another way?

I’m always about that – about not getting so far behind that you can’t catch up again, about taking advantage of opportunities when and where they are found, and about bending tech to serve the global majority well – even when it seems untamable.


Explore more

Stephanie Dinkins is a transdisciplinary artist based in Brooklyn, New York, and a professor of art at Stony Brook University. Her work often focuses on technologies such as AI as they intersect with race, gender, and our future histories. She is particularly focused on working with communities of color to co-create more inclusive, fair, and ethical AI ecosystems. In May 2023, she was the inaugural winner of the LG Guggenheim Award, which recognizes artists working at the intersection of art and technology. She was a resident at The Bellagio Center in 2022 with a project titled “Binary Calculations are Inadequate to Assess Us: Data Commons.”

More information about her work is available on her website, and you can follow her on Twitter.
