Art, Race & Artificial Intelligence

We speak with Stephanie Dinkins about AI, or artificial intelligence, and algorithms. She is an artist who creates platforms to dialogue about AI as it intersects with race, gender, aging, and our future histories. She is also an Associate Professor at Stony Brook University.

Stephanie’s Go-To Artist: Kendrick Lamar

Go-To Podcast: On Being with Krista Tippett

StephanieDinkins.com

Email: DinkinsStudio@gmail.com

Twitter (@stephdink)

Instagram


Transcript (Please Excuse Errors)

[Music Intro ♫] 

LaToya [LS]: Hey listeners! Welcome to Abolition Science Radio, we’re your hosts. I’m LaToya Strong-  

Atasi [AD]: And I’m Atasi Das. We’re here to talk all things science and math and their relationship to-  

LS: Colonialism  

AD: Oppression 

LS: Resistance 

AD: Education 

LS: Liberation 

AD: And so much more.  

[ ♫ Music fade out.] 

 

[0:25] 

 

AD: So welcome back to our next episode, which we’re super excited about today. How are you doing, LaToya? 

LS: I..ha LaToya, ha ha ha… 

AD: Just in case you forgot your own name. Ha ha .  

LS: Ha ha, we’re being really formal right now.  

AD: Ha ha.  

LS: Um, I’ve been doing good.  

AD: Yeah? 

LS: Yeah. Yeah, you know, life.  

AD: Life. Ha ha ha.  

LS: Ha ha ha.  

AD: Yes. And it’s fall in New York.  

LS: It’s fall in New York, we’re in Times Square. Times Square -where’s the ‘s’? Times Square?  

LS & AD: Times. 

AD: Times. I think it’s – Times? 

LS: Oh, cause it’s the New York Times. Right? It’s after the newspaper.   

AD: Yes. I think so.  

LS: But I be putting ‘s’s on random things, so sometimes I don’t know.  

[1:03] 

AD: So, this new location.  

LS: Same studio, new location. If you can imagine, gentrification in the South Bronx is so high, that it’s cheaper to have a studio in Times Square.  

AD: Than it is in the Bronx.  

LS: Than it is in the Bronx.  

AD: Which is wild. But here we are.  

LS: Here we are.  

[1:19] 

AD: So, um.  

LS: [In an accent] Here we are.  

(Both laugh) 

AD: That was a New Yorker coming out.  

LS: It was more Boston.  

AD: Oh! Ok. Ha ha.  

LS: No, I don’t know, alright, I’m sorry. Ha.  

AD: We usually do Go-To’s.  

LS: We DO usually do Go-To’s.  

AD: Well. I could think of a Go-To. 

LS: Ok, go.  

AD: Ok, so, my Go-To ask, I’m gonna ask you.  

LS: Ok.  

AD: So, I’m thinking of – it’s fall, it’s gonna be winter. It’s kinda that time, we usually talk about music but I kinda wanna, a little, see: What is your fall/winter, like, Go-To show? Go-To kind of like, if you’re at home, you’re trying to like, not be outside.  

LS: Mhmm.  

AD: Do you have a Go-To kinda, entertainment?  

LS: Atasi, you know damn well I watch the same movies.  

(Both laughing) 

AD: Right, what’s one? 

LS: And TV shows over and over and over and over again.  

AD: Ha ha ha ha.  

LS: Ok, so, if you have Hulu, Creed 2 is on Hulu.  

AD: Oh! 

LS: I will watch Creed and Creed 2 over and over.  

AD: You really have watched it over and over! 

LS: I do, ha ha ha!  

AD: Wah. 

LS: I also really watched Money Talks. I’ve seen it a million times, and I’m just sitting there saying the lines with the movie.  

AD: Ha ha ha ha. I like it. 

LS: Uh ha.  

AD: I haven’t watched any of them – I haven’t watched either.  

LS: You haven’t watched Creed or Creed 2?  

AD: No, I don’t like action movies. I don’t, I don’t –  

LS: I wouldn’t call it – it’s not an action movie. It’s more like…what is Creed? What is - I don’t – drama?  

AD: Ha ha ha ha. Drama with punching?  

LS: Yeah. Oh, that fight scene in Creed 2, the first one. Oh!  

AD: Ha ha ha ha.  

LS: Oh, it gets me every single time.  

AD: Every single time, huh?

LS: Every single time. 

AD: Ok…Am I sharing one? 

LS: You’re – yeah.  

AD: Ok. Ha ha, is that how that works? What I’ve been watching? So, I like, do the series shows that I can watch many episodes – what they call the binge watch. And someone shared with me one called Schitt’s Creek.  

LS: Mhmm.  

AD: Which is a 20 minute like, comedy situation. And it’s funny. It’s unassuming. Like, at first you’re like, ‘ew who are these characters?’ And then you kinda watch three episodes in cause they’re each 20 minutes, and you’re – it’s totally endearing. It’s like this rich family that loses all their money cause they’re, you know, whatever, something happens and then they move to this small town and kind of like…they’re like, what’s happening? Whatever. It’s not a likely show that I thought I would have been into but, I’ve been binge watching that. It’s hilarious.  

LS: How many seasons?

AD: I think it’s six. 

LS: Oh, it’s six seasons? 

AD: Yeah. I’m on four, so. I haven’t finished it. 

LS: Oh. It has the right amount of seasons for me but doesn’t sound like something I’d watch though.  

AD: Yeah, it’s – I would not have figured. I like comedy genre, I like drama too. Yes. So that’s my Go-To.  

LS: Go-To.  

AD: At the moment.  

LS: Ok, I like it.  

AD: Yeah.  

[4:05] 

AD: So, but this week, or this episode… 

LS: This episode, yes? 

AD: We are having a really interesting conversation on AI.  

LS: Yes. We’re talking about Artificial Intelligence (AI), and algorithms.  

AD: Yes.  

LS: Do you –  

AD: This word. This word algorithms. Ha ha ha.  

LS: Ok, I just made up a joke, do you wanna hear it?  

AD: Yeah! Oh – you made up a joke?! Yes, I’m ready. 

LS: Ha ha ha. I’m sorry. Ok. Do you think Black people who make algorithms call them algo-rhythms?  

(Both laughing) 

AD: Wow. Wow. 

LS: Ha ha ha ha ha.  

AD: We’re gonna keep this, in the cut.  

LS: Ha ha ha ha.  

AD: Ha ha ha ha.  

LS: Don’t listen to her Emmy, cut it out.  

AD: Ha ha ha ha.  

LS: I guess we can keep it. That’s a good one.  

AD: That’s a good one. I liked it.  

LS: I don’t normally make jokes. I think it’s my first joke that I’ve made. Ha ha ha ha.  

AD: I really appreciate that. I think that’s also what I really – yeah.  

LS: Ha ha ha ha.  

AD: That was pretty good.  

LS: Thank you.  

AD: I was like, what’s she about to say? Alright, alright. So, yes. We’re talking about artificial intelligence and algorithms.  

LS: Algo-rhythms.  

AD: Algo-rhythms.  

LS: [unclear] 

(Ha ha ha.) 

AD: Yeah.  

LS: Who are we talking to about this?  

AD: So, we have a special guest, of course.  

LS: Yay, a special guest! 

AD: Um, and her name is Professor, or Dr. Stephanie Dinkins. And so, Stephanie Dinkins is a transmedia artist. And she’s creating these different platforms to dialogue about artificial intelligence and how it intersects with race, gender, aging, and this thing called ‘our future histories.’  

And, so, she is currently an Associate Professor of Art at Stony Brook University. And, like I said previously, she’s an artist, so she does shows. She does different types of work in the community, um, talking about AI and the world we live in.  

[5:57] 

LS: Mhmm.  

AD: Yeah. So… 

LS: Yeah, and we forefront her work as an artist more than her work as an educator which is interesting.  

AD: Yeah.  

LS: I think I – we took up a lot of her time. And I did want to circle back but I also did not want to keep taking up her time.  

AD: Her time, yeah. This is gonna be a great conversation and uh, I think there’s so much to chew on. There’s so much to learn. Depending on where we’re entering this conversation, I think for both of us, there’s a lot of words and language around like, algorithms and AI and the development.  

LS: Yeah.  

AD: That, it’s not in our vocabulary.  

LS: Yeah.  

AD: We just don’t know what that means, and so, you kind of hear a little bit about that.  

LS: Yeah, alright let’s do it.  

AD: Great. 

[6:43] 

 

LS: Stephanie, thank you so much for joining us. It’s a missed opportunity cause you’re in Manhattan, and we’re in Manhattan, but we’re not in the same place. But we’re gonna get it done anyway.  

Stephanie Dinkins (SD): Thanks for having me.  

LS & AD: Yeah! 

LS: Ha ha.  

AD: We appreciate you being on. We always ask our guests that come on, if they could, just kind of share with the listeners, our devoted listeners, ha ha (LS: Ha ha), a little bit about yourself and if there is a song or an artist that you’re currently listening to?  

[7:11] 

SD: Oh wow. Ok, that’s a good one. So, well thank you for having me. My name is Stephanie Dinkins, I am an artist who is looking at artificial intelligence through the lenses of race, gender, aging, and what I call ‘our future history.’ I kind of fell into this space. I’m not technically a computer scientist or a technologist, I am an artist. I’m also a Professor at Stony Brook University, who teaches digital media and emergent technologies as well.  

As for a song, that’s a – that’s a really tough question at the moment. I feel like I haven’t been listening to that much music. Um, but still and always.  

AD: Are there podcasts or any other things that you’re kinda like, into at the moment maybe? Taking in? 

SD: Yeah, so, I’m always into Kendrick. And I really – um, Kendrick Lamar. I really like what he does and then like, kind of taking apart what he’s saying and who he’s talking to. And I’m always digging on “On Being,” which is a podcast put out by Krista Tippett. Um, and she talks to a variety of thinkers. It used to be from a kind of faith-based lens, but now it’s a little bit more open and really inquisitive. And I don’t know, I find it really enriching and a good place to work from a lot.  

[8:37] 

AD: Cool. Thank you.  

LS: Yeah. So, very first question is: What is AI? Or Artificial intelligence?  

SD: Wow. So, AI is, um – I like to call it Artificially Intelligent Systems. They’re systems that are able to run algorithms and act autonomously. Um, not a lot of what we have as AI right now is actually autonomous, but it’s working toward that. So it’s systems that do things in ways that humans might be able to do them, or complete tasks.  

I always think of AI as things that try to function in ways that are quote unquote “intelligent” and are often working to do things in ways that autonomous beings with agency would do in the long run.  

[9:30] 

AD: Ok, great. Thank you for kind of putting us – it’s interesting that you also kind of situated AI and then gave us this other term which I haven’t really thought about, but Artificially Intelligent Systems. So, appreciate that. And in your definition right now, you just mentioned how – that there’s this thing about the system acting autonomously or doing things, completing tasks. 

SD: Mhmm.  

AD: And so this is kind of related to our next question of like, what are algorithms and how are they used to develop AI?  

SD: So, algorithms are a set of steps, right. So, in computer science, it’s a set of steps that would allow a computer to [do] something, that are programmed by someone and then fed, say, historical data to process information. Um, and you can think of maybe AI as a series or collections of algorithms that work in concert to process data and information. How’s that work? 
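
To make the “set of steps” idea concrete, here is a minimal, hypothetical sketch of an algorithm in exactly this sense: a person writes the steps, and the program turns input data into a decision. The loan scenario, function name, and thresholds are all invented for illustration.

```python
# A hypothetical "algorithm" in the sense described above: a fixed set of
# steps, written by a person, that turns input data into a decision.

def approve_loan(income: float, past_defaults: int) -> bool:
    """Every threshold below encodes a choice made by whoever wrote it."""
    if past_defaults > 1:       # step 1: check repayment history
        return False
    if income < 40_000:         # step 2: check an income cutoff
        return False
    return True                 # step 3: otherwise, approve

print(approve_loan(52_000, 0))  # True
print(approve_loan(52_000, 2))  # False
```

An AI system, in her framing, is then many such steps working in concert, with the steps tuned on historical data rather than written out by hand.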

AD: Yeah, that sounds great. And it’s interesting cause like, you know when I – there’s a lot of pop culture, there’s a lot of different references, um, you know, images that come to my mind when I think of AI. Maybe many other listeners also have these images that come up from movies like Wall-E to um… 

LS: I, Robot.  

AD: Yeah, ha. I, Robot, or Westworld. Or even maybe like, things on our phone, like we have these transcription services that kind of like, take your audio and then transcribe them immediately. And so, kinda like considering the way that you’re talking about, you know like – or situating our understanding of AI, what can you tell us about some of the history of the development of AI? And you know, maybe how that even relates to like, our notions of what I think is kinda like common day notions of AI of being like these robots that will take over. Or something like that, ha ha. Yeah, what – can you tell us a little bit about the history around the development of AI? 

[11:27] 

SD: Yeah. Y’all are asking the hard questions today. So, I’ll start here in the space of – it’s interesting that I don’t speak about algorithms and AI in the popular-imagination way that it’s presented, like you think they’re gonna take over. And, kill us all. Or, take our jobs. Because I feel like, (a) we already live in an algorithm state that is doing a lot of small tasks that are influencing our lives in the way that we wish. And (b) because, it feels very much like the artificially intelligent world is upon us and growing bigger, right. And more influential in our lives on a daily basis and in really big meta ways. And so, to think we go, they’re gonna take over our lives or they’re taking our jobs, seems to me a backwards way to think, because we need to figure out how to deal with these agents and algorithms and things that are in our lives already. Some of which are really helpful, like transcription apps, um, which are pretty good, right. Because we all know if you’ve done a transcription app –  

AD: Ha, yes.  

SD: - or used a transcription service, that they are not perfect at all.  

AD: Absolutely.  

SD: But we’re gonna have to deal with them. And so figuring out how to deal with them and how to influence the way that they interact with our world, and how they include um, really everyone. And I think mostly about folks of color and like, my neighbors. Like, how are we impacted by algorithms and what are they going to be doing in our lives? And what can we do to make sure that we are not like, re-victims of history? Because if you’re feeding data to algorithms that’s historic that’s already full of biases, of people acting on a very different way of looking at the world – how does that impact us? And if people are not starting to address, well, we’re feeding algorithms and AI systems data, is there something we can be doing to make sure that that data is not just wholesale picked up and put into a new system and embedded in that system?  

So we have historic records of arrests from, say, 1900 to now, and if you think about what that might mean for Black folks who could have looked at someone the wrong way and been arrested.  

LS & AD: Mhmm.  

SD: How does that impact who and what we are and how we get to live in the world? So that’s how I’m trying to think of, um, the AI space from a position of optimism in a way. From a position of: there’s got to be something we can do and a position of: how do we participate? When I say something we can do, I’m thinking about, maybe it’s gaming the system, maybe it’s redesigning the system,  maybe it’s working with the data so that at least the data that is used is not simply being recycled with all this bad information that I’m talking about.  

AD: Mhmm.  

SD: In terms of history, I’m not that great on history, right. I will say that this is kind of the second coming, or maybe even the third coming of AI systems. Where, we’re trying to make machines that do things to aid people, and that do things people might be doing. You know, we’ve for a long time had people making systems that try to mimic the way we think or act in the world. For example, there was a mechanical [Turk] that was playing chess, that was supposed to be this kind of mechanized machine to play chess. However, there was someone inside that machine helping it.  

AD: Mmm, mhmm.  

SD: To the ends that they were designing. And that goes back to what’s going on with the systems that we use now, right. So we have AI in our phones like Siri and Google Home. We have systems that are all around us but they’re still often being helped and trained by, you know, folks who might make sure that the system is operating correctly, answering in a way that feels responsive to humans. And if you think about that, you can think of even Amazon. So we’ll go to Amazon now – what do they call themselves? Amazon Mechanical Turk – where people are checking on what the systems are offering people, or checking if the answers are correct to guide the system behind the scenes.  

[16:12] 

So, it’s an interesting way to think of the way that we’re developing. These platforms that feel seamless but still have a long way to go until they actually are integrated in our lives and acting in more and more autonomous ways. And then we have to think about how autonomous we want these systems to be anyway.  

[16:34] 

LS: Yeah, everything that you just said, I wanna go back to, when you’re talking about AI as a collection of algorithms. So you touched on it loosely when you talked about sort of, we want to make sure that we’re not using the data that makes people like, re-victims of history. So, where do these algorithms come from? If they’re new algorithms that are being created, are there checks in place? So like, are you – how do we check for the biases in algorithms? 

[16:58] 

SD: Well, it’s really hard, right. So, algorithms are written by people. 

LS: Mhmm.  

SD: Often mathematicians. Right, so they’re written by a small subsection of the sum of us, really. Right, and usually when they’re written, people – I talk to people who talk about the math of algorithms, and the beauty in that. In that, if we have systems that we’re creating and they’re mathematically based, and they’re just running information – how could the math be wrong? 

LS & AD: Mhmm. 

SD: How could it be biased?  

AD & LS: Right.  

SD: Which is problematic because, perhaps the math is not quote unquote “wrong”, but the data that is fed to that algorithm is not clean. It has biases put inside of it. Perhaps the algorithm that was written does not have a set of checks and balances to try to figure out if, you know, something is heavily weighted in one direction or another. And why it might be heavily um, weighted in one direction or another.  

[18:03] 

Perhaps the algorithm is relying on some kind of information that has markers of socioeconomic data that could be inferred. So, for instance, zip codes can be used to kind of identify who and what might be there.  

[18:22] 

LS: Mhmm.  

SD: Right. And depending on how you use that information, you could use it to say, well people in this zip code are less reliable payers of credit cards for example.  

AD: Mhmm.  

SD: And the question is why and how did that correlation come together? And are they doing any checks and balances to make sure that it’s not a false correlation? Are we, right, able to look at the information that comes from an algorithm and see how conclusions were arrived at? And often the answer to that is no.  
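
As a hedged, self-contained sketch of the zip-code example (all names and numbers are invented, not real credit data), this shows how skewed historical records, rather than the math itself, can make one neighborhood and everyone in it look “riskier”:

```python
# Synthetic history: two zip codes, identical individual behavior (10%
# default rate), but records from zip 10002 were collected in a way that
# inflated its recorded defaults. The "algorithm" just averages history.
import random
from collections import defaultdict

random.seed(0)
history = [("10001", random.random() < 0.10) for _ in range(1000)]
history += [("10002", random.random() < 0.10) for _ in range(1000)]
history += [("10002", True) for _ in range(150)]   # biased record-keeping

counts = defaultdict(lambda: [0, 0])               # zip -> [defaults, total]
for zipcode, defaulted in history:
    counts[zipcode][0] += defaulted
    counts[zipcode][1] += 1

risk = {z: d / n for z, (d, n) in counts.items()}
print(risk)  # zip 10002 scores roughly 2x "riskier" purely from skewed
             # records, and everyone who lives there inherits that score
```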

LS: Mhmm.  

SD: There’s no recourse for people to go back and look in. Um, can we then question that information?  

So the question becomes like, how is the algorithm handling data? Is it aware – um, are the programmers aware of what they’re doing and how their biases are showing up in the data? Or how just the information they used to prepare their algorithm is showing up? For example, if you’re looking at ideas of images, and identifying people – facial recognition – what happens if you train, uh, a computer vision algorithm, right, a system using algorithms, on data that doesn’t include many Black and Brown folks?  

LS: Mhmm. 

SD: What generally happens is then, and it’s been proven that, the algorithms don’t do a good job of identifying Black and Brown folk. Which becomes problematic in many ways because, you know, I always think of the idea of – not only, ok so, this camera system does not recognize me, but oh what happens when a camera system is in self-driving cars and driving them, and the cameras are not recognizing people with dark skin very well? That puts them in more danger.  
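
A small hypothetical sketch of why that failure can go unnoticed: a benchmark that reports only aggregate accuracy hides the subgroup gap. The counts below are invented for illustration, not figures from any real system.

```python
# Invented evaluation results: (skin-tone group, correctly recognized?)
records = [
    *[("lighter", True)] * 92, *[("lighter", False)] * 8,
    *[("darker", True)] * 65, *[("darker", False)] * 35,
]

# The single headline number looks acceptable...
overall = sum(ok for _, ok in records) / len(records)
print(f"overall accuracy: {overall:.0%}")               # ~79%

# ...but breaking it out per group surfaces the gap Dinkins describes.
for group in ("lighter", "darker"):
    subset = [ok for g, ok in records if g == group]
    print(f"{group}: {sum(subset) / len(subset):.0%}")   # 92% vs 65%
```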

[20:19] 

LS: Wow.  

SD: Right? 

LS: Mhmm.  

SD: And thinking about where that system went wrong. Is it about what the algorithm is doing? Is it about how the algorithm is trained and what information it was trained on? And I’m really thinking about this a lot, the idea of datasets and what they’re comprised of. And, it’s interesting cause in my work I use datasets and, generally I have to step away from these commonly accepted datasets because they’re based on information or images that either are not complete enough or I feel have some kind of biases embedded in them. So, even something as innocuous as the Cornell Movie Dataset that people use for natural language processing a lot. I think of that – oh, I think about movies, I question movies, and how they portray Blackness and Black people.  

LS: Mhmm.  

SD: For the most part, I have a problem with that. Right, it’s not gonna be language that I understand intrinsically in terms of the culture that I know. It’s presenting a very specific view of language, but that’s the language that is used in this database. Which many, many, many people rely on to train their speaking agents. And you can extrapolate that back to images, you can extrapolate that to many different things.  

I think one of the ways in which we start dealing with this stuff is (a) thinking about it, right. Thinking about why something is identified in one way as opposed to another. And then, thinking about if you come up against something that seems wrong in a system, what do you do about it? Do you let it go? Do you report it? Do you call it out? Right? Do you ask to have some recourse over those algorithms? Like, the idea of trying to figure out where you step in to at least make known that there’s a problem. [For example], some young Black folks were googling the word gorilla in an image search. And what came up, or what was included in the results, were Black kids, right, some young Black people. Which is horrible.  

AD: Yeah.  

SD: I always think of the shock of that. In the late 2000-teens. But because they were computer scientists, it was one of the more – they tweeted right out at Google: Hey Google, what the heck is this?  

AD: Yeah, good.  

[22:55] 

SD: What are you gonna do about it? And I think that often we think there’s no way to do things like that, but on the other hand, if we don’t do things like that, we leave the system as is. And even still, calling things out. And then, how do you make an alternative? Like, my whole practice has become about trying to find alternatives, or models for the systems or the way they work. Like trying to work with things like small data.  

So, I’m working on a project called ‘Not The Only One’. Which is trying to be, and I say trying to be because it’s very experimental, a memoir of my family as told through an AI. And it’s made from oral histories between family members. Which means it’s almost no data in comparison to the amounts of data that these systems usually want in order to work well.  

AD: Mhmm.  

SD: And the question becomes, well how do we use small data? Community data? Data from families? For them to be able to create things that feel much more culturally akin to who they are. Um, holding their information as an archive that is complete and fully functional and viable. And that’s been a question that, like, has been received – like, when I talk about that, at first people are like, ‘oh well, no, that’s not how it works.’ But then over time we can start hearing people talking more about, well how do we make small community data? And use it and make that viable? It’s one way of creating a model that starts to lead toward another way of doing things.  
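
As a hedged sketch of the small-data idea, and emphatically not the actual ‘Not The Only One’ system (whose implementation isn’t described here), this toy agent can only answer from a family’s own oral histories; the corpus lines are placeholders:

```python
# Toy "small data" agent: it retrieves the family memory that shares the
# most words with the question, so every answer comes from the archive.
ORAL_HISTORIES = [
    "Your great-grandmother ran a boarding house in Virginia.",
    "We moved north in 1952, looking for steadier work.",
    "Sunday dinners were how the family stayed together.",
]

def tokenize(text: str) -> set:
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in text.lower())
    return set(cleaned.split())

def respond(question: str) -> str:
    """Pick the stored memory with the largest word overlap."""
    q = tokenize(question)
    return max(ORAL_HISTORIES, key=lambda h: len(q & tokenize(h)))

print(respond("When did we move north?"))
# -> "We moved north in 1952, looking for steadier work."
```

A few hand-collected stories are obviously nothing like the web-scale corpora these systems usually expect, which is exactly the tension she is naming.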

[24:31] 

AD: You just provided so much rich information, so I really wanna say thank you for…  

LS: Yeah.  

AD: Laying so much out.  

LS: Ha.  

SD: Oh yeah, sorry.  

AD: No it’s, it was really helpful to think through and I kind of wanted to ask if you could help kind of fill in, a little more on – you know, you talked about this is like a second coming, or a second wave of this AI being – or third – ha, third wave of AI coming to fruition, or being developed in such an accelerated manner. And then you also talked about the systems that it’s based off of. The, you know this, and so I think of like, infrastructure or like, webs, of like, how things work and then datasets. And so, in my mind I’m like, ok, so I’m thinking of like, what does that mean for folks that want to think of a different kind of world. Right? That wanna like change the world we live in cause they see it as not ok.  

SD: Mhmm.  

AD: Who is a part of kind of, developing those systems at the moment? Who funds or supports those things? The creation of the datasets that you’re talking about, the large ones, not the community ones as of yet – who’s supporting that or who’s developing that? Who’s in that scene at this time?  

[25:49] 

SD: Ok, so, that scene is a very interesting scene and that still is very much Silicon Valley or researchers around the world in kind of elite groupings doing this work. And really, if you think about it, it’s mostly white men who are doing work around algorithms and AI and systems that impact all of us. Which to me, is a very frightening thought. Like, to me, that’s the most frightening thought, because we have a small subset of people who are now making these algorithms, systems that impact almost everyone on the planet.  

[26:28] 

AD: Right.  

SD: What does that mean and then how do we impact it?  

AD: Hmm.  

SD: There are folks in, like, Africa, working on algorithms and different systems – there are different hubs; I believe one is in Ethiopia, one in South Africa – there are hubs of people working on AI. The question is how are they working on it?  

LS & AD: Mhmm.  

SD: One of the things that I find super fascinating about the field is, in this go-round, it’s new. It’s still very new and open. The space in which, if you are willing to put your head down and do some crazy work – because I’m not gonna say it’s easy, it’s hard – like, everything I’ve been doing with AI, I’ve had to learn from scratch. I collaborate with other people. You know, I’m scratching and clawing to pick up this coding so I can do things that feel good and supportive of communities that I’m interested in supporting. Which really echoes out or dopplers out into all of us in the long run.  

AD: Mhmm.  

SD: But, there are openings. There are ways to do the grind and make things happen. And make headway in systems that are here.  

An example is – and I’m gonna go back to the work that I’m doing as an artist. I’m working in natural language processing, which is about talking, kind of, chat, but I’m trying to do it from a deep learning perspective, which means it’s something that we are not supervising a lot; it’s trying to come up with its own information. I find myself wanting my own agent to have more context to speak from, to be able to have multi-turn conversations. So, like, if you and I were speaking and we could have three or four, five sentences back and forth. And to be emotionally imbued. These are questions that I, as a kind of artist, stumbling in this field to make something that I feel is important to me and my community, um, have come upon. But it’s also something I’ve been told repeatedly is what the world’s top researchers are working on.  
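
To illustrate the multi-turn problem she names, here is a minimal sketch of one common approach: keep a rolling window of recent turns and hand it to the model as context. The generate() function is a hypothetical stand-in for whatever model actually sits underneath.

```python
from collections import deque

def generate(context: str) -> str:
    """Placeholder for a real language model; reports what it can 'see'."""
    return f"(responding with {context.count('User:')} user turns in view)"

class MultiTurnAgent:
    def __init__(self, max_turns: int = 4):
        # keep only the most recent exchanges; older turns fall away
        self.history = deque(maxlen=max_turns * 2)

    def reply(self, user_line: str) -> str:
        self.history.append(f"User: {user_line}")
        context = "\n".join(self.history)   # context accumulates turn by turn
        agent_line = generate(context)
        self.history.append(f"Agent: {agent_line}")
        return agent_line

agent = MultiTurnAgent()
print(agent.reply("Tell me about grandma."))
print(agent.reply("Where was that?"))  # the second reply can draw on the first
```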

And how does an artist get to be working in some of the same places where these researchers that have been doing this forever are working?  

LS: Mhmm.  

[28:55]  

SD: Which says that, hey there’s a lot of space and opportunity here for many people to start trying to figure out what’s going in this field. The other thing that I’ve come up against is this idea of, this technology as not being for us. Like I said earlier, people who are mostly making the algorithms are mathematicians, computer scientists, making this stuff. And when you start to work with the code and information, the first thing you come up against is the idea of math.  

AD: Right.  

SD: And knowing the algebra that we need to know to do the work. I’m not the greatest mathematician, but what I am is someone who’s super curious and stubborn. Who can do this through the back door, not through the front door. So I’m not trying to build from the math, I’m really trying to reverse engineer things to understand how they’re working and then tweak them and change them.  

I’m really looking at databases and seeing where they seem deficient, and going well – how do I change that database so it reflects the world that I wanna be in better.  

LS: Mhmm.  

AD: Mm.  

SD: What that means is there’s a lot of work because at almost every turn, when I wanna do a project, it becomes ‘oh no, I need to do something with a database first.’ 

AD: Right.  

SD: But, because I’m invested enough in the projects that I’m trying to do, I’m willing to do the work. And one of the things that I keep thinking about is if we can get people to understand a little bit about how these systems are working, input/output, what goes on and start playing with them. And start playing with them in a field or idea base that they’re invested in, then maybe we can get them to dig deeper and start, kind of pushing the technology.  

I always think about how good folks of color are at rejiggering things, and making them do what they want to do. And the thing is about, getting into a system, figuring out how it works so that you can do that. You know, I’ve been thinking a lot, or talking to folks about the idea of two turn tables and a microphone.  

(Ha, mhmm.) 

[31:10] 

SD: Right? Hip hop. That was not meant to do that. That was not how we envisioned what a turntable was or what scratching was, but people took it and started playing with it and finding what was possible.   

AD: Yeah. That’s (…) 

SD: And that’s where I think like, intervention comes in. And play. Especially at a point in the technology where play can be really valuable and there’s still so so so so much to learn and do. It’s not hyper-codified. Everybody is feeling around in the jar. Some people have some math bases that they can use to do things a little quicker, but that’s ok. There are ways, right.  

[31:50] 

LS: Yeah, so. Thank you, again. So, you mentioned – I’m like stuck on this thing of algorithms. Like the algorithms are written by people and the datasets already exist so, you have to be willing and able to do the work to look at the dataset or the algorithm before you put it into practice. Like, before you do it.  

LS & AD: Ha ha ha.  

LS: I’m not sure the exact word – how you go from algorithm to AI.  

AD: Ha ha ha.  

LS: I mean, I feel like there’s an additional layer of work where like the person who’s doing the work also has to do work on themselves, cause how do you know what to look for?  

[32:23] 

SD: It’s – so, I’ve been thinking a lot about this too.  

LS: Mhmm..  

SD: You do have to do work because I feel like there’s a lot of times when Black people, Brown people, lower income people, have been told well, this is not for you.  

LS: Mhmm.  

SD: Or, this is too hard for you. And just getting over that hurdle of being beyond the idea of, ‘I can do this’ or at least ‘I can take it apart.’ Right. 

AD: Mhmm.  

SD: I think it is self work. Like, I’ve been thinking a lot lately about the idea of what I am referring to as Afro-Nowism. Which is about doing in the now, and figuring out how to free your brain and thinking beyond the idea of always being in opposition to something or the things that are in the way. And trying to just get your brain to a point where you go, what magnificent [thing] could I make if I wasn’t concerned with all those outside forces – all the time – that are distracting me? And I understand that like, for some people, that’s an impossibility. But I think we can, for five minutes at a time, ten minutes at a time, suspend disbelief and just start working on the things we really really wanna see. And then start making them.  

[33:45] 

LS: Yeah.  

SD: And I think that is self work.  

LS: Mhmm.  

SD: That’s about how do we get beyond the crap that we’re told all the time. These ideas that kind of work against working through things, and working with things that are seemingly far flung and outside of ourselves, our culture, our interests. I do think that’s a lot of self work.  

LS: Mhmm.  

SD: But I also think that it can be incremental and you try it. So I always give the example of, you know, not too long ago, or a while ago, I was at this artist residency – at an artist residency, they invite artists to hang out in a place and do their work there. And this one happened to be in this beautiful Mission house; it had a chef, it had people who are helping others, and you know, you go and you have the run of the place. But you have to be able to take that as well. So a friend of mine who’s a young white guy was there, and it was really interesting to watch him walk around this place and the grounds. Because he walked around like he owned it.  

LS: Mhmm.  

SD: Go through the kitchen, do what he needed to do, just walked around like he owned it. So, I said to myself, well what would happen if I did that? If I just decided that the normal rules that I’m often running through my head, even subconsciously are not in play. I’m gonna walk around this place like it’s mine and I belong to this [superpolice] and I can do whatever it is I need to do. And what was interesting was nothing happened. I did not, nothing happened. Whereas in my mind, something might happen if I do x, y, or z.  

I’m like, doing tiny steps of pushing boundaries to let your mind kind of, move beyond the rules we’ve been given. Which are different for all of us. To act in a way that allows you to more fully realize, kind of the potentiality of your thought, I think is amazing self work.  

LS: Mhmm.  

SD: And yes it’s there, and yes it’s a burden. But I also think it’s necessary.  

[35:58] 

LS: Definitely.  

SD: And worth it.  

LS: Yeah, definitely a necessary burden. So the first shift I see that your work is calling for is one like, computer science and algorithms and math – these things are not objective. Um, they are not unbiased. And so, the field needs to be more embedded, they need – so you have to take a class on race and gender and all these other things so you can understand how the world works and how the dataset that you might use was embedded in that world. So that the algorithm might do something different if you can then catch these things. But you can’t catch them if the way the world operates for you is fantastic, but you know, mass incarceration is happening to all these other people.  

SD: Yes.  

LS: And so I feel like the other shift you’re calling for is almost that maybe we don’t even have to start with a database or algorithm. It can emerge from the people that you’re trying to do work with. So maybe we start with this community and so, if we’re in this community then you can’t necessarily use this AI in another community because you have to start from that community as well. So I think it’s just really powerful that these shifts I see that you’re calling for.  

[37:07] 

SD: Yeah, it’s interesting cause I think of it as Venn diagrams.  

LS: Mhmm.  

SD: In that, if we do the work in different communities and not just stay in those communities, but come together so the places that we overlap make us stronger, I can’t think of a better outcome because then you strengthen the community as you strengthen the whole. And I think that’s super important. So, kind of doing the work of figuring out how communities thrive in this AI world, this new AI reality. Right? 

LS: Yeah.  

SD: It’s gonna bring about a lot of changes and has already brought about a lot of changes. Like, I think a lot of the things that people are angry about these days are because technologies have come along that make what they do as a job less viable. People are much more precariously positioned. And the question is, how do we as humans thrive in a technological landscape that can do a lot of the things that we do. Maybe that’s our creativity. Maybe that is us having a lot of leisure time. But it’s gonna push us beyond this idea of just having to survive. 

LS: Mhmm.  

SD: And having to find other ways to engage with ourselves and each other. Which I always think is about always learning.  

LS: Mhmm.  

SD: Like I think that we are in, entering an age in which, we don’t get to sit on our laurels. Right, we don’t get to sit on our laurels, we have to constantly be learning, um, new ways of being and interacting with the world because the world is gonna shift exponentially, um, constantly, and we see that already, right. And so, that’s definitely one thing I’m calling for like, how do we strengthen communities to strengthen the whole? Recognizing that these technologies are coming down the pipe. And they are going to unsettle a lot of things in our lives.  

[39:17] 

And then, calling out the other one that you were talking about, the idea of, looking at the things that we’re already doing and calling out this idea of biases and not being blind to what we’re doing. Not saying that the technology will handle it (…) superior and just accepting it. But, figuring out (a) what the heck is the world we wanna live in? Like, as a community, a world community, what do we want that world to look like? And what do we have to do in terms of the ways the algorithms work, the way the systems come together, the way data is used to make that happen?  

And I think that’s a really hard ask, for us to do the hard work of, well maybe we don’t just take all the census data blanket from, you know, the late 1800s until now. Maybe we have to figure out ways to parse that information, or weight it, to figure out how it’s functioning. Or how it was, how biased it was in the past, so that we can make a future where the playing field is a little more even. You have to recognize who and what is already there. In some sense, what we’re doing – if we’re recognizing what is already there in terms of how we’re doing and what we’re doing and what happens to them, I’m thinking right now about what kind of discriminations are built into systems and how we counteract those. Do we really even want to employ algorithmic systems in some settings? In like, medical diagnoses, figuring out what’s going on, being able to crunch information quickly, precisely for an individual is amazing. In hiring, especially if it’s based on models of what a good worker looks like from a historic point of view, perhaps that’s not so good. Because if you take past information, a good worker for the most part across time in the United States looked like a white man.  
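
As a hedged sketch of the “weight it” idea (invented counts, not census data): instead of feeding historical records in wholesale, give each record a weight so that a group the record-keeping over-counted no longer dominates what an algorithm learns.

```python
# Inverse-frequency reweighting: each group ends up contributing the same
# total weight to training, regardless of how skewed the archive is.
from collections import Counter

records = ["group_a"] * 900 + ["group_b"] * 100   # skewed historical archive
counts = Counter(records)
n_groups = len(counts)

weights = {g: len(records) / (n_groups * c) for g, c in counts.items()}
print(weights)  # group_a ~= 0.56, group_b = 5.0

# sanity check: weighted mass per group is now equal
print({g: counts[g] * w for g, w in weights.items()})  # both 500.0
```

This is only one of the options she lists (parsing, weighting); whether a dataset should be used at all is the harder question she raises in the same breath.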

AD: Right.  

SD: So how do we start to pull that apart? 

LS: Mhmm.  

SD: And then put it back together in a way that’s supportive.  

AD: Right.  

[41:39] 

AD: There’s so many pieces that, that are kind of settling in my mind as you’re sharing kind of the ways in which we can engage and like rethink our relationship with, in developing and engaging with artificial intelligence. And so, I wanted to maybe ask you to bring forward some of your thoughts. You’ve already kind of mentioned some of the intersections between how our datasets, specifically, are kinda created and race, or racialization. Um, processes of racialization. But, could you say any – I mean I don’t know if you want to speak more to this – of why it’s necessary to think of those intersections between like the systems, the dataset, and race? 

[42:26] 

SD: Well the way I think about the idea of AI Systems and race, or gender, or disability, or many many other subsections, right, of people, is that really, we’re headed towards an AI mediated world. There seems to be no question about that at this point. And if you’re someone who acts in that world, who has not had a hand in creating it, in influencing the way it functions, in influencing the decisions it makes, what does that world look like for you? How do you thrive in that world? Especially if you’re thinking about past systems that were full of biases. That didn’t recognize your humanity as a person of color. Like, to me, that becomes a fundamental question of how we exist together on the planet. Because, if we’re embedding all these systems with what I’m gonna say, just bad information. Right, historically based, which is the way people say ‘well it’s history!’  

LS: Mhmm.  

SD: ‘Well, it’s based on what we’ve done in the past.’ But if what you’ve done in the past is based on excluding people, right, actively excluding people and you embed that and you code that in a new system, that has great sway over most people. Where do the people who are not helping build that fit? So people who are historically discriminated against, where does that fit? And I don’t think that we can afford to build those things. And I think that people of color in particular cannot afford to sit by, while these systems are developed and these old histories are just popped in there. Because, and usually it’s not done, right, it’s not done maliciously, it’s done in a way that, ‘oh this is a history.’  

AD: As –  

SD: It’s just an easy way for me to test this.  

AD: Apolitical or something?  

SD: And that is not acceptable.  

AD: Right.  

SD: Right. And I think that even when we ask other folks to kind of look at these things, and look at these systems, it’s hard to get buy-in because they don’t even see it unless someone helps them see it a lot of times. Um, commerce and money come into play. So, what do we need to do to make something, put it out in the world, put it out quickly, and make sure that it’s making money. That [unclear] development, start thinking about what might be wrong in a system or it’s data. So, some people have to be looking at, well how do we dissect this and make this better? And how do we hold your feet, who are making this right now, to the fire so that you’re looking at these questions? And that you’re bringing in people who can actually see them. Because a lot of times, it’s just like, [why fund this], you just don’t even have access to the information. I like to think a lot about the idea that, even President Obama could be passed up by a cab on the streets of NYC.  

LS: Mhmm.

SD: And people wouldn’t necessarily believe that, right. But it’s like, what are they seeing at the moment? A president, or a Black man with all the history that comes with that? And how do we start to account for that history and shift it? 

[46:04] 

AD: Thank you for sharing your thoughts on this. I had one other follow up question. So you talked a little bit also about some places where algorithms are used. Where, you talked about, you know, between medical situations with the doctor and symptoms and then also in hiring, and so it made me think of like, are there – in thinking of like, are some situations maybe algorithms should not be employed, especially with the systems they’re built with. So my question to you is are – do you think that there are particular things that can’t be made into data? Cause I think of –  

LS: Or shouldn’t be?

AD: Or shouldn’t be.  

LS: Yeah.  

AD: Couldn’t or shouldn’t, yeah.  

LS: Mhmm.  

AD: Be made into datasets, meaning that, when I think of datasets, they’re um, a lot of ways of thinking about it but um, some of them are kind of like, you’re creating categories for whatever that you’re looking at and so, are there things that shouldn’t be, you know, and they’re like kind of like, not standardized in a way but they’re like made to be common, or I don’t know if you have a better word for this, ha, LaToya, but, so yeah. I guess my question is are there things that you feel like, um, can’t be made, or shouldn’t be made into datasets? Or datafied – I get, I’m using the word datafied.  

LS: Or into an algorithm. Or.  

AD: Yeah.  

[47:20] 

SD: I’m of two minds on this. I think there are probably plenty of things that we should try to quantify and make into data, where you have binary choices going on. I’m trying to think of a really good example of something. Right, in a way, I’m asking for culture to be embedded into algorithms as data, right. For a kind of cultural specificity, and at the same time, I wonder if that’s even possible.  

LS: Mhmm.  

AD: Yeah.  

SD: But I also think that, as we watch technologies and algorithms and data come along and homogenize us so that we can pick the right boxes, and kind of, you know, funnel us down into less and less and less unique beings, how do we counteract that so that we keep some of what makes us, kind of, weird, you know, soup, and our differences stay intact? Not because they separate us, but because they make us things that are interesting and nuanced and beautiful. Right?  

AD: Right.  

SD: And I think a lot about the idea of, well sure there are lots of things that shouldn’t be turned into data, none of which are coming to mind right now.  

AD: Ha ha, that’s ok.  

SD: But, also, what won’t be turned into data? Because I think they’re gonna try to turn everything into data.  

LS: Mhmm. Mm. 

SD: More and more things become data. And if that’s the case, how do we deal with it? How do we keep a sense of, not, five or six squares that we all fit into, for lack of a better analogy. I don’t know, what do you think shouldn’t be made into data? 

[49:08] 

AD: So, I’m working on my dissertation stuff, and I’ve been thinking about, you know, numeracy, you know, ways in which things are numbered, how do we learn about numbers, and I’ve had these really interesting conversations with some educators, who kind of like, point to places where their students, or in situations of learning, like. So, for example in music, um, you know the ways in which students engage with like, creating music, um, learning about music, like, or creating beats to like a cipher and how that builds upon one another is like, it may not be replicable. You know there’s these instances where an educator kind of shares where their kindergarten class does this like amazing, like array of different rhythms and beats and that was captured – she was able to hear it, and like experience it and so were those students and it was awesome. And to me, I feel like that was like a moment in time, that could happen other times but that experience will not happen again. So, in terms of like, concrete-ing, making those beats like then learn like, written down in a way like – ‘ok, remember you did this! And you used this sound. And you did this and you did this sound.’ It like lost kind of, it would lose like, what happened in that particular experience with that group of students, that it was like – it happened on the right day and the right moment and the right vibe that it was happening in the, you know, whatever – whatever was in that room. And so I think like, could that be something that’s datafied? I just think of like things like that. The performance I guess, in a way. Um, that, can’t happen again in exactly the same way that it happened before.  

LS: Mhmm.  

[50:56] 

SD: Right. Like experience, you’re talking about. 

AD: Yeah, kind of. Right, like I can’t quite relate to you what my experience or someone else’s experiences – I can share, but it will only go so far because you weren’t there or like, I’m a different person from that point onwards.  

SD: I think that that is a lot about the intangible, and how you account for things that are intangible in the way we experience the world. And wanting to hold on to those and wondering how we do it.  

AD: Right.  

SD: I’m imagining now a chorus of synthesized voices that kind of re-performs the thing that the kids in the class did. I’m wondering if it could ever get as close, right? In feeling.  

AD: Right.  

SD: In depth, which is just so fascinating to think about. Cause I think we [will] get to a time when, you know, music [is] analyzed and coded. When we’re able to somehow, through synthesized voice and recorded actions, or recorded timbres, recreate that. Whether it has the spark of the soul, or nuance, or intangible in it, that becomes a different question. But you know, the idea of being able to tell what is what, or what’s alive and what’s not, what’s a recording and what’s not, is getting thinner and thinner. Or what somebody said and what they never said because a synthesized voice said it for them? 

AD: Yeah, those technologies.  

SD: I, yeah, like the technologies are around to make some spaces for it, I don’t know if they make the substance for it.  

[52:45] 

LS: Yeah, I do, I’m mindful of how much of your time you’ve already given us. One additional question, maybe we’ll have more if you’re willing to stay (ha ha), I’m sorry. To the beginning – one of the very first things you said is that you’re an artist looking at AI through race, gender, aging and future, um, histories, and I just wanted to talk about that future histories part. I know you mentioned before that, you know, with the way that AI is being done, like, we don’t want folks to be re-victims of history. So I’m wondering if you could just talk about, in light of, like, some folks already becoming re-victims of history – thinking about how technology is being used at the border, um, fingerprinting technology, like all – ancestry testing being used, all these things. And so, what are some ways that you see in this future history of AI working for or with Black and indigenous and communities of color, versus how we see it unfolding? 

SD: The technology is being really problematic. In that, like you were saying: at the border, biometrics, um, DNA, um, recidivism – using algorithms to decide recidivism. There’s so many ways in which these technologies are already employed to hinder communities of color. And I get that. And I think it’s problematic. There are ways in which we can refuse, right, like we can refuse the technology or ask for the technology to be legislated out of use, as in California, right. So, facial recognition is not supposed to be used in San Francisco, on an official level.  

LS: Oh, I didn’t know that.  

SD: But you can already see that, there’s gonna be a whole subset of camera systems that are able to be accessed. 

AD: On the market, right.  

SD: So, then the question becomes to me, you know, I would love to, if we could, stop [unclear]  

LS: Mhmm. 

SD: But in the event that we can’t, what do we do to deal with those things?  

AD: Yeah.  

SD: Is it legislation? Is it policy? Is it community access? 

LS: Mhmm.  

SD: Do we get together and rally folks to make versions of the systems that work differently? Do we call it out so that it can’t go on undetected? So that at least if you’re someone who is in court being adjudicated, your lawyer knows that they’re using this system to think about what your sentence should be and can think through and advocate on your behalf in light of that? Right, for me it’s the, ok – ostrich in the sand versus trying to do things that make the system somewhat better. Are there ways to use the technologies to bring about better things? So, protest movements using cell phones, WhatsApp – that’s all algorithms too.  

AD: Right.  

SD: It seems to me that just handing it over to the powers that be, or leaving it in those hands, is the worst thing we can do. And trying to engage and use it for what we need it for, in some instances, or knowing that it exists in places where you can question it, in other instances, is gonna be one of the only ways forward for communities of color. I just can’t see us saying, well, you can’t do that – because the surveillance cameras, um, the algorithms to see who gets into schools, who gets a job, how that is decided, are already being used.  

So then, how do we, with that knowledge, work the system?  

[56:36] 

LS: Mhmm.  

AD: Yeah, absolutely.  

SD: Or remix the system. Like, I wish I could see other ways out. But I just don’t see it, and I’ve heard people saying, ‘but we don’t have access’, ‘we don’t have x, y, or z’… 

LS: Mhmm.  

SD: That would allow us to do things. And I think that that can be really true in many instances – but the question is, this is just too important to say, well I just don’t have access, now what do I do? But ok, how will I find access and start playing and trying to change some of those things? And you know, I wanted to say earlier that, the way I’ve been learning these technologies is lessons on the web, finding classes, finding videos, watching five different videos on the same process coming at it from slightly different directions until I understand it.  

LS: Mhmm.  

SD: Going to GitHub because many people are working open source and putting their code up on GitHub.  

AD: Mhmm.  

SD: So that you can go in and use it, and download it and run it and start to understand how it’s working. There are systems online like OpenAI, Dialogflow – I mean, IBM has a [unclear] online that you can use. You can use the Amazon system, but you can go in and start working with [them] to start to materially understand how the systems are working. And even doing that, like, going to a library, going on Dialogflow, playing with that for a while starts to help you really understand how input and output are working. How data becomes so important even when you can’t change the algorithm. Maybe you can change the data.  

AD: Right.  

SD: Um, but like, start understanding on a different level, and I think that’ll be different for everybody. Like, for some people reading the newspaper or five newspaper articles about how these things work might be enough. Where others can go deep.  

AD: Right.  

SD: And start right, just trying to build I think.  

LS: Yeah, thank you so much Stephanie! You shared so – I mean, it’s just a lot that you’ve shared with us. And I think I could sit here and ask you a million more questions, but –  

AD: As could I.  

LS: Ha ha ha.  

AD: Yes.  

LS: And so, we want to end by asking you if there is – if you could just share your, if you have social media, how can folks connect with you? If you have any upcoming projects or shows, how can we come see you? 

SD: Yeah! They can, um – DinkinsStudio@gmail.com will get an email sent. There’ll be lots of shows and things coming up in January. Um, one at the Museum of Contemporary Photography in Chicago, one here in New York at Pratt. I don’t have the details yet, but I would love to share that with folks.  

LS: Yeah, please share.  

SD: What else is going on? There’s a lot going on so it’s hard to even comment. And I’m always happy to talk to folks as time allows, if time is allowing.  

AD: Right. Ha ha.  

SD: Less and less. And they can visit my website StephanieDinkins.com, Instagram, Twitter, all the normal things.  

AD: Ha ha. Great, we’ll definitely put those links up on our website. And any of those upcoming dates, we’ll try to track them down so our listeners can learn from you and follow and engage.  

LS: Yeah, and hopefully we’ll be able to make it to your show at Pratt.  

[1:00:19] 

[♫ Music begins: Kendrick Lamar, Alright] 

AD: So you just heard a little bit from Kendrick Lamar, a song called Alright, from the album To Pimp a Butterfly, which was released in 2015. Which is interesting. We were just watching the video. 

LS: Mhmm 

AD: As we were watching the song. And uh. It’s like the extended version – it had a whole longer storyline. I guess it was putting the song in a context. 

LS: And it had a Disney short. Like a short... before... haha 

AD: And you are like, ok? [LS: haha] 

AD: You know what I liked about this video actually? Is... so he’s like floating. Or he’s like walking on air as he’s like moving forward [unclear] [LS: mmhm] And it reminded me of that book, The People Shall Fly by Virginia... ahh, I’m looking at this children’s book. I used to use it with the kids.  

LS: Not where I thought you was going with that 

AD: yeah? 

LS: What is it? The People Shall..

AD: The People Could Fly: American Black Folktales, by Virginia Hamilton – because in that book there’s so many – the imagery of flying or floating is just so present. So anyways, that’s just what – I was like, OMG, it’s just like that book! It’s just like he’s ev... Anyways, that’s just what came up for me as I was watching the video and hearing him. Do you know this? 

LS: I do not know that book, right. [both: hahaha] [AD: This book right there] But, I like it because I feel like a lot of African American / Black American history gets erased just by the nature of how like the settler state is set up. How like whiteness has continuously consumed and then erased and then put their face on it. Things like swing dance, um, the banjo, folk music, rock, country – we associate [them] with a certain thing, when like the history of it is in the African-American tradition. The same thing with like folktales. There’s a lot of folktales that also came from African-Americans, like Brer Rabbit is one of them. [AD: Yeah.] 

Um, that gets erased. So people are like you don’t even have this. But we do have that but it just got erased. So I’m curious about that book and I'm wondering about it and what the stories are. I’m gonna get that.  

Alright. I really like Alright. If I’m out and Alright comes on, I will go ham. Just know. [AD: haha] It’s a really uplifting song and I think it, it’s a feel-good song. Right? [AD: yeah yeah] It’s a really feel-good song.  

AD: And, considering our conversation with Stephanie Dinkins – like, thinking of AI and thinking of what it means for people of color, what, you know, the possibility of creating artificial intelligence and like that interaction – I see how this song can be kinda like “we’re gonna be alright.” There’s a lot already happening. There’s already so many things being developed that are not even in our imagination at this moment, I mean, they are being created and it’s kind of wild. If you have a chance, you should check out some of her work online. There’s these really beautiful clips of her speaking and she talks about this a little bit. Her speaking with this likeness of a human being, but it’s a machine, you know. So anyways, it’s just wild. The things that are being created, the things that are being thought up, and whose ideas and whose politics are being embedded in those creations. Whatever, there’s a lot happening and a lot of it is problematic and there’s a lot of people doing different work that are trying to expand it. So, I feel like this song seems appropriate in that sense of like the work she is trying to do as an artist. So, I appreciate her bringing this in. [LS: yea] So anyways, check out her work. Check out other people’s amazing art things. If you come across other people who are thinking of technology in really different ways, share that with us. Share it with the community. Tweet it to us. Tweet at us. I always get that wrong cuz you can tell I don’t use Twitter. Do you use Twitter? [LS: haha] Do you tweet at people? 

LS: You can at somebody. [AD: hahaha] Or subs– subtweet them. Yeah, so thank you for listening! 

[♫ Musical outro.] 

AD: Check us out at Abolition Science [dot] org, where you can sign up for our newsletter.  

LS: And follow us on Instagram @abolitionscience and also follow us on Twitter @abolition_sci  

AD: See you soon! 
