Redefining Society Podcast

Book: More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech | Guests: Meredith Broussard and Sean Martin | Redefining Society Podcast with Marco Ciappelli

Episode Summary

Meredith Broussard, a data journalism professor at NYU, discusses her latest book, "More Than a Glitch," in which she explores the intersection of technology and society. She explains why we need to be more aware of the biases present in computational systems, and why we must be willing to protest when unfair decisions are made.

Episode Notes

Guests: 

Meredith Broussard, NYU Associate Professor and data journalist [@nyuniversity]

On Linkedin | https://www.linkedin.com/in/meredithbroussard/

On Twitter | https://twitter.com/merbroussard

On Facebook | https://www.facebook.com/meredithkbroussard

Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin

Host: Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast

On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
_____________________________

This Episode’s Sponsors

BlackCloak 👉 https://itspm.ag/itspbcweb

Bugcrowd 👉 https://itspm.ag/itspbgcweb

Devo 👉 https://itspm.ag/itspdvweb

_____________________________

Episode Introduction

"Meredith Broussard, a data journalism professor at NYU, discusses her latest book "More Than a Glitch" in which she explores the intersection of technology and society. She explains how we need to be more aware of the biases present in computational systems and be willing to protest when unfair decisions are made."

Welcome to the show! 

Today, we have an exciting conversation with Meredith about the intersection of technology and society. Meredith is a data journalism professor at New York University, a researcher, and the author of two books, including her latest, "More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech." 

In this episode, we will explore how the problems of humanity are manifesting inside our computational systems, and why we should not have blind faith in these systems. We will discuss the widespread trust in computational systems, which is rooted in our inability to understand the complexity of the technology. However, with the increasing use of algorithms to make decisions on our behalf, we need to become more aware of what's happening inside these systems. We need to be able to protest when a decision is made that is unfair, biased, or systematically discriminatory.

In "More Than a Glitch," Meredith demystifies AI and explains it in plain language so that people can feel more empowered around computational systems. She argues that we should not think of problems of racism or sexism, or ableism in our computational systems as glitches that are easily solved. Instead, we need to add more nuance to the way we talk about them. This conversation will give you a glimpse into the issues surrounding technology and society that are vital to understanding in today's world.

So, join us as we delve deep into the topic of the intersection of technology and society with Meredith. Share this podcast with your friends and colleagues, and don't forget to subscribe, so you never miss an episode. Let's get started!


About the book

When technology reinforces inequality, it's not just a glitch—it's a signal that we need to redesign our systems to create a more equitable world.

The word “glitch” implies an incidental error, as easy to patch up as it is to identify. But what if racism, sexism, and ableism aren't just bugs in mostly functional machinery—what if they're coded into the system itself? In the vein of heavy hitters such as Safiya Umoja Noble, Cathy O'Neil, and Ruha Benjamin, Meredith Broussard demonstrates in More Than a Glitch how neutrality in tech is a myth and why algorithms need to be held accountable.

Broussard, a data scientist and one of the few Black female researchers in artificial intelligence, masterfully synthesizes concepts from computer science and sociology. She explores a range of examples: from facial recognition technology trained only to recognize lighter skin tones, to mortgage-approval algorithms that encourage discriminatory lending, to the dangerous feedback loops that arise when medical diagnostic algorithms are trained on insufficiently diverse data. Even when such technologies are designed with good intentions, Broussard shows, fallible humans develop programs that can result in devastating consequences.

Broussard argues that the solution isn't to make omnipresent tech more inclusive, but to root out the algorithms that target certain demographics as “other” to begin with. With sweeping implications for fields ranging from jurisprudence to medicine, the ground-breaking insights of More Than a Glitch are essential reading for anyone invested in building a more equitable future.

_____________________________

Resources

The book: https://mitpress.mit.edu/9780262047654/more-than-a-glitch/

____________________________

To see and hear more Redefining Society stories on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-society-podcast

Watch the webcast version on-demand on YouTube: https://www.youtube.com/playlist?list=PLnYu0psdcllTUoWMGGQHlGVZA575VtGr9

Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/advertise-on-itspmagazine-podcast

Episode Transcription

Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording, as errors may exist. At this time we provide it "as is," and we hope it can be useful for our audience.

_________________________________________

Show intro 00:15

Welcome to the intersection of technology, cybersecurity, and society. Welcome to ITSPmagazine. Let's face it: the future is now. We're living in a connected cyber society, and we need to stop ignoring it or pretending that it's not affecting us. Join us as we explore how humanity arrived at this current state of digital reality, and what it means to live amongst so much technology and data. Knowledge is power. Now, more than ever.

 

sponsor message 00:47

BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impact of a corporate data breach. Learn more at blackcloak.io. Bugcrowd's award-winning platform combines actionable, contextual intelligence with the skill and experience of the world's most elite hackers to help leading organizations identify and fix vulnerabilities, protect customers, and make the digitally connected world a safer place. Learn more at bugcrowd.com. Devo unlocks the full value of machine data for the world's most instrumented enterprises. The Devo data analytics platform addresses the explosion in volume of machine data and the crushing demands of algorithms and automation. Learn more at devo.com.

 

Marco Ciappelli 01:53

Here we are, redefining society, or at least we are trying. This is my show, but Sean sneaked in through the back door. It's a glitch in my podcast, I guess.

 

Sean Martin  02:09

The algorithm didn't work.

 

Marco Ciappelli 02:12

Now, now. And I think we're gonna talk a lot about how algorithms are not working properly. But yeah, that was a joke. Sean, I'm glad that you are joining me for this one. You'll bring the cybersecurity, privacy, and technology side of things to this conversation, and I'll bring the sociological part. And most importantly, our guest is going to bring both, and she brought two books with her, actually. So that's what we're going to talk about today. Meredith, welcome to the show.

 

Meredith Broussard 02:44

Thanks so much for having me. It's great to be here.

 

Marco Ciappelli 02:46

Yeah, very exciting. Actually, this is a conversation that sits right at the intersection of technology and society. And as I said before we started, we didn't know which angle to take it from, and then we decided to go with society as the primary lens to look at this. So you wrote two books. As far as I know, you are also a journalist and a researcher, and you develop software, I believe, around artificial intelligence. Tell me if I'm wrong about that. But I think we're gonna have you introduce yourself. I kind of gave a little hint there, but what do you do, and why did you decide to write these books?

 

Meredith Broussard 03:36

Oh, well, thanks so much. So I am a data journalism professor at New York University. I'm also the research director at the NYU Alliance for Public Interest Technology. And I have a new book out, which is called More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. And the big idea behind the book is that we tend to talk about problems of racism, sexism, or ableism in our computational systems as glitches, as things that are easily solved. And what I'm arguing is that actually, it's more than that. It's that all of the problems of humanity are manifesting inside our computational systems. So we need to not have blind faith in computational systems; we need to add more nuance to the way we talk about them.

 

Sean Martin  04:33

So where does that faith come from, and where is it being applied? Because, I mean, from the outside in, we've done a lot of conversations from the technology perspective saying that there's a lot of abstraction. Just look at our phones, for example: all we see is a case with a screen that we touch, and there's so much going on in there. So I can envision the trust from my perspective as a consumer using a system like that. But I presume there's much more trust going on in the ecosystem and the supply chain and the coding, and in the managers of the people doing that, and investors. So I don't know how far and wide and deep you want to go with that. But where does trust sit? And where, maybe, is it a little lacking?

 

Meredith Broussard 05:28

Well, what I think about is driving my car. I like to just get in my car and drive and not really think about it. But when I do think about it, I think, okay, all of the parts of the engine are working together: you've got your spark plugs, you've got your pistons, you've got whatever computing is controlling whatever is happening in the car. I sort of know what's going on in my car, but in terms of diagnosing it when something goes wrong, I would much rather take it to the mechanic and say, okay, fix it. So when it comes to computational systems, a lot of us are like that. A lot of us use our phones without thinking too hard about how they work. We're kind of vaguely aware of what's happening inside there, but we prefer to just use them. Unfortunately, because we are entering into a world where algorithms are increasingly being used to make decisions on our behalf, as citizens we need to become more aware of what's happening inside computational systems, so that we can protest when a decision is made that is unfair, that is biased, that is going against certain groups of people systematically. In the same way that, you know, in order to not get ripped off at the mechanic, you need to know the basics of what is happening inside the car. Now, this is really hard. Look, computers are difficult. Computers do math, right? They are machines for doing math. And it is not easy to understand what is going on with algorithms, with machine learning. So one of the things that I do in More Than a Glitch is I try to demystify it. I try to explain AI in plain language, so that people can feel more empowered around computational systems.

 

Marco Ciappelli 07:32

And as usual, we come back to knowledge and understanding, and that's pretty much the answer. It's almost like 42 is the answer to everything. But you said something like: we just trust our computers, we just trust our machines, and they're not as simple as tools used to be. Before, if the hammer was broken, was it the head, or the piece of wood, or the metal? I could see it. Now you can't see anything. I trust the mechanic, but the mechanic is going to use software to diagnose the issues; there are more lines of code in a car nowadays than in my computer. So there is a kind of feedback loop there, where we worry whether we can resolve certain things with technology. Technology is great when it's by itself; technology deciding things for us, that's a little bit different. That's where we get the bias, where we get a lot of things where we think we can trust the machine, but really we can't. So I'm understanding that your new book is actually focusing on this. The problem, and I'm going to be very, very simplistic here, is that the problem comes from society; it doesn't come from technology.

 

Meredith Broussard 08:56

Right, right. The problem is people. One of the things that people often say about AI is that AI is a mirror: it shows us the patterns that are already happening in society. And we know that there are problems of structural discrimination. We know that there's a history of haves and have-nots in America. And so these kinds of patterns are reproduced in computational systems. One of the things that I write about in the book is a recent investigation by The Markup, where they looked at mortgage-approval algorithms, at who was getting approved automatically for a mortgage and who was getting denied. Well, it turns out that nationally, loan applicants of color were 40 to 80% more likely to be denied than their white counterparts by these algorithmic systems. And you might look at that and say, oh, well, you know, it's the computer; the computer is objective, it's unbiased, it must be true if the computer made that decision. And that is a kind of bias itself, which I call technochauvinism: the idea that the computational solution is superior. So if we take away the technochauvinism when we look at what's going on with the mortgage-approval algorithms, we can see that the algorithms are fed with data about who has gotten mortgages in the past, because this is how machine learning systems work. You take a whole bunch of data, you feed it into the computer, the computer finds the patterns, and then it reproduces those patterns; it makes predictions or judgments about the future. So who has gotten mortgages in the past? Well, again: a history of financial discrimination in the United States, a history of redlining. So the computer is reproducing the discriminatory financial patterns of the past, saying, let's give more mortgages to white people, let's deny Black and brown people mortgages, and it's just reproducing this inequality. So if you didn't know that this is...

 

Sean Martin  11:20

Sorry, Meredith, but it doesn't know the color of the people. Does the computer know that?

 

Meredith Broussard 11:29

Well, I mean, the computer doesn't really know anything, right? It's just a machine that does math. Your phone is actually just a dumb brick. We tend to anthropomorphize these things, we tend to attribute agency to them, but it's just a pile of poisonous rocks, right? So the people writing the mortgage-approval algorithms, I think, are also not doing it with malice. I don't think that computer programmers are getting up every day and saying, I want to write code that oppresses people. I think that most computer programmers are like most of the rest of us, just going through the day trying to do good, trying to become better people. But all of us have unconscious bias. We're trying to be better people, but we're not there yet. And we can't see our unconscious biases, because they're unconscious, right? If we knew they were there, it would be a different story. So when you have homogeneous groups of people creating technology, as you mostly do in Silicon Valley, because Silicon Valley does not have sufficient diversity, you get software that has the collective blind spots of its creators. That's what's happening in a lot of these situations. We have developers or data scientists creating these systems who do not know enough about the history of financial discrimination in the United States. They're maybe not aware that zip code is a proxy for race. And so they're building these systems and saying, oh, look, the computer is making this decision. Isn't this so efficient? Isn't this saving so much money? Isn't this so fast? Well, it's not actually getting us toward a better world if we are perpetuating financial discrimination, if we're preventing people from building generational wealth through homeownership.
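
To make that proxy mechanism concrete, here is a minimal sketch in Python. Everything in it is synthetic and hypothetical (the 80% zip-code correlation, the income penalty, the variable names); it is not The Markup's methodology, just an illustration of how a model trained on discriminatory historical decisions can reproduce the disparity even when race is excluded from the features:

```python
# Minimal, hypothetical sketch of bias surviving through a proxy variable.
# All data is synthetic; this illustrates the mechanism, not any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

race = rng.integers(0, 2, n)            # 0 = white, 1 = applicant of color
# Residential segregation: zip code agrees with race for ~80% of applicants.
zip_group = np.where(rng.random(n) < 0.8, race, 1 - race)
income = rng.normal(60, 15, n)          # stand-in for creditworthiness

# Historical labels carry the redlining pattern: applicants of color were
# penalized regardless of income.
approved = (income - 25 * race + rng.normal(0, 5, n)) > 45

# Train WITHOUT the race column; zip_group rides along as a proxy for it.
X = np.column_stack([income, zip_group])
model = LogisticRegression(max_iter=1000).fit(X, approved)
pred = model.predict(X)

for r, label in [(0, "white applicants"), (1, "applicants of color")]:
    print(f"predicted approval rate, {label}: {pred[race == r].mean():.0%}")
```

Run it and the predicted approval rates split along racial lines, purely because zip_group stood in for the race column the model never saw.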

 

Sean Martin  13:38

As we're talking about this, I'm picturing: we have all this data, and we use it to reduce risk, reduce fraud, cut costs, be more efficient and effective. Why can't we use the data to say, here's what we're seeing, as well? Because you said the computer doesn't see color; why not look at it? Or is it that nobody's looking at it? I'll leave that there, because I want to go to another conversation Marco and I had, looking at technology, very pertinent, very relevant technology, aiming to remove weeds from lakes and rivers and ponds. And the reason I'm bringing that up is that the response to the initial problem of weeds in these bodies of water was chemicals. Humanity didn't think of a different way other than to throw a bunch of chemicals in the lake and kill everything in there so we could get rid of the weeds. My reason for bringing that up, my question, is: do you think we're going to have a situation in the world of AI and algorithms where, oh my gosh, this thing is out of control, it's full of weeds, let's dump a bunch of AI chemicals in there to clean up the mess that we created?

 

Meredith Broussard 15:06

Well, I am imagining the pond in my hometown, and I am imagining it choked with weeds. And I am also imagining the chemical spill that I know happened in the pond in my hometown. And so...

 

Marco Ciappelli 15:25

Thank you,

 

Meredith Broussard 15:27

Alright, so let's take a step back. One of the things that I can bring to this conversation as a journalist is a new field of journalism called algorithmic accountability reporting. And the idea behind algorithmic accountability reporting is that it is possible to hold algorithms accountable. In the past, the traditional function of the media has been to hold decision-makers accountable; in an algorithmically mediated world, we need to interrogate algorithms and their creators. The organization I mentioned before, The Markup, is doing some of the most cutting-edge work in algorithmic accountability reporting. One of the things they're doing is looking at the inputs and the outputs of algorithms. They just published a story the other day that looks at tracking and grocery-store reward cards; it was really good. So you go to the grocery store...

 

Marco Ciappelli 16:40

And it makes you feel like, no, I don't have an account with you, with the chain, whatever; I just buy my food.

 

Meredith Broussard 16:49

Exactly, exactly. Because, you know, you go to Publix, you buy your Cheerios; you don't think that your data is being sold, you don't think your data is being marketed. But actually, every time you're using something that you think is free, you are the product. And so we need more reporting on these systems. And, I mean, we need more data privacy overall; we need people to understand the depth of what's happening in these computational systems. And we also need to understand the way that bias works inside these systems. Bias is kind of hard to talk about. It's hard to have conversations about racism, about sexism, about ableism. These are uncomfortable topics for many people, but we need to start having these uncomfortable conversations so that we can build better technology, so that we can have a more just world that is more inclusive.

 

Marco Ciappelli 17:54

Yeah, those are all great points. And actually, I'm thinking about what Sean started there, where I didn't know where he was going with it, which often happens. But then something clicked in my head, because it's like: can we use data to fix data? But then if we do that, are we just poisoning the whole thing? And then we depend even more on it. I think that's what I got from what you said, Sean, and from what you said, Meredith: the point is that it goes back to where we are as humans, and we always go back to that. So you mentioned something, and we discussed this before: the concept of "others." And I would like for you maybe to elaborate a little bit on that. I'm assuming the idea is that in technology, while we do recognize diversity and we put it into the system, the system is going to use it to consider people "others" and not all equal. I know it may be a little philosophical here, but I'd like for you to explain this a little bit more.

 

Meredith Broussard 19:04

Well, I think for me, this is about the importance of diversity in the development process, in the software development process. One of the examples that I like to use is the example of the racist soap dispenser. I don't know if you saw this viral video; it's a couple of years old now. Two men, one with light skin and one with dark skin, walk into a bathroom and try to use the soap dispenser. The man with light skin puts his hand under the soap dispenser, and soap comes out. The man with dark skin puts his hand under the soap dispenser, and no soap comes out. And then you might think, alright, the soap dispenser just broke at that instant, because, you know, things break. Okay? But no: the man with dark skin gets a white paper towel, puts it under the soap dispenser, and soap comes out. So the soap dispenser is racist. Now, I don't believe that this was intentional. I don't believe that the soap dispenser makers were like, yes, we think that only people with light skin should be able to use our soap dispenser. That's not reasonable, right? But what I do think happened is that this was a team with a very homogeneous group of people on it, and they tested it on themselves and said, oh, look, it works for us; it must work for everybody else. So in that kind of situation, if you have more diversity on your development team, if you have a variety of people, a variety of voices in the room, then you have people who can speak up. And of course, you do have to empower people on your team to speak up when they see an issue; you can't make people feel silenced. But just the simple fact of diversity can really help.

 

Marco Ciappelli 20:54

I'm gonna make a comment here, because I'm thinking there is a middle way, a middle position here, between "technology is gonna fix technology" and "we need to do it with society." What I'm thinking is, let's touch on the word of the day: generative artificial intelligence, you know, the writing, ChatGPT, DALL-E, and so on. So we do something, and then we create something else. MIT, I believe, created something that says, well, let's analyze this and we'll see if it was done by artificial intelligence. So I'm thinking, why not create, and for all I know somebody is working on it right now, a system that uses the data just to say: I'm pointing out that the data here is doing a really bad job on diversity and inclusion. So almost not the solution, but a tool that we need, to improve the way that we operate and actually optimize technology. What do you think about that?

 

Meredith Broussard 22:09

No, I think that the future, the very near future, will see an explosion of tools for algorithmic auditing. Algorithmic auditing exists as a phenomenon; it's related to algorithmic accountability reporting. And the idea with algorithmic auditing is you look at the inputs and the outputs of the system, and you evaluate whether it is being biased, or you evaluate how it is being biased, and you apply a judgment about that bias. There are some really terrific researchers doing work around this. Deb Raji is one of them; Cathy O'Neil, the author of Weapons of Math Destruction, is another one. I've worked a little bit with Cathy's firm, ORCAA, on doing algorithmic auditing. And you can do algorithmic auditing from the inside or from the outside. As journalists, we often do it from the outside. But if you work inside a company and you have concerns about, okay, is my, say, hiring algorithm excluding people of color from the resumes that get considered, then you can go into the software and you can actually run these tests yourself. The world of algorithmic auditing is very close to the world of compliance. I think that in heavily regulated industries, like financial services, is where we're going to see the first requirements come through around compliance and algorithmic auditing. Now, I realize that saying "auditing" and "compliance" has made a bunch of people's brains turn off, and I apologize for that.
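
For a flavor of what "running these tests yourself" can look like, here is a minimal sketch of one common check: the four-fifths (80%) adverse-impact rule from US employment compliance. The data, group labels, and usage here are illustrative assumptions, not ORCAA's or anyone else's actual audit procedure:

```python
# Minimal sketch of a disparate-impact check on a model's outputs.
# Four-fifths rule: a group's selection rate below 80% of the most-favored
# group's rate is treated as evidence of adverse impact.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) -> selection rate per group."""
    counts = defaultdict(lambda: [0, 0])   # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items()}

def adverse_impact(decisions, threshold=0.8):
    """Flag groups whose rate falls below threshold * best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: {"rate": r, "ratio": r / best, "flagged": r / best < threshold}
            for g, r in rates.items()}

# Hypothetical resume-screening outputs: (demographic group, passed screen?)
outcomes = ([("group_a", True)] * 60 + [("group_a", False)] * 40
            + [("group_b", True)] * 30 + [("group_b", False)] * 70)

for group, stats in adverse_impact(outcomes).items():
    print(group, stats)
# group_b passes at 30% vs. group_a's 60% -> ratio 0.5, flagged under the rule.
```

The same ratio can be computed from the inside, on a live system's decision logs, or from the outside, on whatever decisions an auditor or journalist can observe.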

 

Marco Ciappelli 24:13

Oh, no, you turned Sean on, so I know he's going for it.

 

Sean Martin  24:18

Yeah, music to my ears.

 

Meredith Broussard 24:20

Fabulous, fabulous. Okay, well, we could definitely nerd out about this a lot. But suffice it to say that we know there are problems inside these systems. AI people have mostly pretended they're not there, because it's so uncomfortable to confront them, but we do need to confront them. And we are starting to have tools for confronting the kinds of pernicious bias inside computational systems.

 

Sean Martin  24:51

So I'll start my rant here with just the commonalities of auditing and compliance and regulations, and all of that connected to privacy and cybersecurity. For many years, those were hammers to get people to do what they should have been doing anyway. The big companies can manage; the smaller ones get mired in all the process and everything else they have to do just to prove that they're doing something a certain way. And the innovation kind of gets lost, or they focus on the innovation and deal with the fines later, if that's what they choose. What this makes me think of is another area where there's another driver for doing what's right. There's always the first driver, money, right? The company with that hand-soap dispenser missed out on money because they were only really serving a certain group of people, and the others weren't going to buy that product. Then there's the auditing and compliance hammer that comes with regulation. The third way is that there's a belief in what you're doing, for the better of humanity, the better of society. And to me, that's kind of wrapped up in the ESG movement, the environmental, social, and governance movement. So I don't know, from your experience as a journalist and talking to folks, have you seen much of that broader ESG movement diving into ethical AI and less biased algorithms?

 

Meredith Broussard 26:43

I think that we have had a very, very long period of tech companies policing themselves. We've had a very long period of tech companies saying, oh yeah, we'll take care of it, we'll do voluntary compliance with the public good. And what that has led to is our current situation, where we have mortgage-approval algorithms that deny people of color more often than they deny white people, and we have things like facial recognition used in policing. This week, the week of March 1, I believe, there was a new article in Wired about a case where somebody had, again, been arrested wrongly because of a facial recognition match. It's only people with darker skin who seem to be getting misidentified by facial recognition systems. And we know that NIST, the National Institute of Standards and Technology, has audited facial recognition systems and found that they are less accurate for people with darker skin as opposed to people with lighter skin, and they are more accurate for men than for women. They work worst of all for women with darker skin, and they work best of all for men with lighter skin. So we have cases of actual harm that people are experiencing at the hands of automated systems, or we have, you know, systems that are supposed to detect welfare fraud but are actually denying people access to public benefits. So we really need to take a step back and look at our systems, and look at what the inevitable problems are: what are the problems of discrimination that exist in the world? We need to ask ourselves, how could these problems possibly be reflected inside computational systems? And then we can start to do something about it, using some of the tools that I talk about in More Than a Glitch.
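
The NIST finding Meredith cites rests on disaggregated evaluation: computing error rates per demographic group instead of one headline accuracy number. Here is a toy sketch of that idea; the records are invented, and real benchmarks such as NIST's face recognition vendor tests use far larger datasets and stricter protocols:

```python
# Toy disaggregated evaluation: per-group error rates for a face matcher.
# Records are hypothetical; only the metric structure is the point.
from collections import defaultdict

# (group, genuinely same person?, system said match?)
trials = [
    ("lighter-skin men", False, False), ("lighter-skin men", True, True),
    ("lighter-skin men", True, True),
    ("darker-skin women", False, True),   # false match (wrong person accepted)
    ("darker-skin women", True, False),   # false non-match (right person rejected)
    ("darker-skin women", True, True),
]

tallies = defaultdict(lambda: {"fm": 0, "impostor": 0, "fnm": 0, "genuine": 0})
for group, same_person, said_match in trials:
    t = tallies[group]
    if same_person:
        t["genuine"] += 1
        t["fnm"] += int(not said_match)
    else:
        t["impostor"] += 1
        t["fm"] += int(said_match)

for group, t in sorted(tallies.items()):
    print(f"{group}: false match rate {t['fm'] / t['impostor']:.0%}, "
          f"false non-match rate {t['fnm'] / t['genuine']:.0%}")
# A single overall accuracy number would hide exactly the per-group gap
# that this kind of audit exposes.
```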

 

Marco Ciappelli 29:13

Yeah, and I want to start wrapping up this conversation, because on one side I'm like, let's possibly limit what we can do, or what we should do, with artificial intelligence. Ideally not, I mean; if we believe in utopia or dystopia, we have a different approach to things. But until we can fix society, and let me be a little dystopian here, that takes cultural change, and I want to be optimistic that people are working on it, most of them, and I definitely agree with you that there are good intentions in the people developing all these things. But until then, is limiting the use of this, in your opinion, the answer, even if it's a temporary solution?

 

Meredith Broussard 30:04

Well, I do not believe that we should stop using technology, right? I'm not one of these people who thinks that we have gone too far down a dangerous track; you will pry my smartphone out of my cold, dead hands. I think that it's about using the right tool for the task. Sometimes the right tool for the task is a computer. Sometimes it's something simple, like a book in the hands of a child sitting on a parent's lap. You know, one is not inherently better than the other. I think we need to build technological systems, and I think we need to make our tech systems better. But I also think that we need to pull back sometimes, and we need to stop using technologies that are demonstrably harming people. So, like, facial recognition in policing, for example: bad idea, because these systems are disproportionately weaponized against communities of color, and it's just exacerbating a problem that is already huge. And so it creates these dangerous feedback loops that are not getting us toward a better world.

 

Sean Martin  31:21

And it's that loop that I'm picturing in my mind. Because as you're describing a couple of scenarios, I can easily think like the audience and say, I don't have to worry about facial recognition and being arrested; I'm not in any place where I'm doing anything wrong, so that scenario doesn't apply to me. Maybe in that case somebody wasn't in the wrong place doing something wrong, and the facial recognition flagged them anyway and misidentified them. But my point is, I think as a society we're a bit numb, certainly to cybersecurity and privacy: I have nothing to lose privacy-wise, so who cares, right? And until we can resonate with a particular scenario on a broad scale, and have a voice on a broad scale to say, that's not right, don't do that... and I don't know if the government steps in before society does on those types of deals. But to your point, I think that loop needs to be snapped, and something has to happen before the loop circles in on itself into the center of an explosion. But I don't know if it's some bad event, or if some other activity or some other trend surfaces that makes that loop break. I don't know if you have any thoughts on it.

 

Meredith Broussard 32:54

I think that one of the ways we understand the world is through stories. And so one of the things I've tried to do in More Than a Glitch is collect stories that have resonated for me, stories that have helped me understand the scale of the problems with our algorithmic systems. And I've tried to present them not all piled up together, but organized into different categories. So I'm looking at policing and the justice system. I'm looking at medicine, at bias in medicine and the cutting edge of AI-based cancer detection. I'm looking at education. And what I hope people will take away from the book is a sense of what's going on inside these tech systems, how the systems work, which of course is explained in plain language, as I said before, but also what the implications are for ordinary people's ordinary lives. Because it's very easy to think, oh yeah, the algorithms are not coming for my job. But the algorithms are definitely coming for your job. None of us are safe.

 

Marco Ciappelli 34:13

Yeah, I think the moral of what you said, and the moral of what Sean said, is that nobody worries about it until it actually touches you, and then, again, we're very close to it. I think we should be smarter than that as a humanity, but if we look at history, I don't know if I'm gonna be so optimistic about it. I do think we are awakening at a faster pace than we have in the past. Bottom line here is that you're right: we limit the...

 

Sean Martin  34:45

...use of these technologies. That's the bottom line.

 

Marco Ciappelli 34:49

Yeah. Well, we need stories, though; I agree with you. I mean, I always welcome people like you, journalists who raise the questions and write books, and hopefully people will read the book. That's what we do: we have podcasts, we talk to smart people about these things. And as we always say, if even one person listening to this comes out of it with more questions and starts learning more about this topic, you know, we're doing our job. One person at a time, we can do something positive. So, thank you so much for this. I want to invite everybody to check out your book, More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, and also your other book, which has a pretty interesting title, Artificial Unintelligence: How Computers Misunderstand the World; I think they're very well connected. So, you know, an interesting read or two if you're into this. Meredith, thank you so much.

 

Meredith Broussard 35:54

Thank you so much, both of you, for a great conversation.

 

Marco Ciappelli 35:57

Thank you. And everybody else: thank you, Sean, and thank you to the people listening. There will be notes and links to the books, to Meredith, and everything else. Watch the video if you're listening to the audio, or listen to the audio if you're watching the video, and stay tuned for another Redefining Society Podcast on ITSPmagazine. Thank you very much.

 

sponsor message 36:24

Devo unlocks the full value of machine data for the world's most instrumented enterprises. The Devo data analytics platform addresses the explosion in volume of machine data and the crushing demands of algorithms and automation. Learn more at devo.com. Bugcrowd's award-winning platform combines actionable, contextual intelligence with the skill and experience of the world's most elite hackers to help leading organizations identify and fix vulnerabilities, protect customers, and make the digitally connected world a safer place. Learn more at bugcrowd.com. BlackCloak provides concierge cybersecurity protection to corporate executives and high-net-worth individuals to protect against hacking, reputational loss, financial loss, and the impact of a corporate data breach. Learn more at blackcloak.io.

 

Show outro 37:26

We hope you enjoyed this episode. If you learned something new, and this podcast made you think, then share itspmagazine.com with your friends, family, and colleagues. If you represent a company and wish to associate your brand with our conversations, sponsor one or more of our podcast channels. We hope you will come back for more stories and follow us on our journey. You can always find us at the intersection of technology, cybersecurity, and society.