AI expert Meredith Broussard: ‘Racism, sexism and ableism are systemic problems’

Meredith Broussard is a data journalist and academic whose research focuses on bias in artificial intelligence (AI). She has been at the forefront of raising awareness and sounding the alarm about unchecked AI. Her previous book, Artificial Unintelligence (2018), coined the term “technochauvinism” to describe the blind belief that technological solutions are superior answers to our problems. She appeared in the Netflix documentary Coded Bias (2020), which examines how algorithms encode and propagate discrimination. Her new book is More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech. Broussard is an associate professor at New York University’s Arthur L Carter Journalism Institute.

The message that bias can be embedded in our technological systems is not really new. Why do we need this book?
This book is about helping people understand the very real social harm that technology can do. We’ve had an explosion of wonderful journalism and scholarship about algorithmic bias and the harm people have experienced. I try to take that reporting and thinking to a higher level. I also want people to know that we now have methods to measure bias in algorithmic systems. They are not completely unknowable black boxes: algorithmic auditing exists and can be done.

Why is the problem “more than a glitch”? If algorithms can be racist and sexist because they’ve been trained using biased datasets that don’t represent all people, isn’t the answer more representative data?
A glitch suggests something temporary that can be easily fixed. I argue that racism, sexism and ableism are systemic problems, ingrained in our technological systems because they are ingrained in society. It would be great if more data were the solution. But more data won’t fix our technological systems if the underlying problem is society. Take mortgage approval algorithms, which have been found to be 40-80% more likely to reject borrowers of color than their white counterparts. The reason is that the algorithms are trained on data about who has received mortgages in the past, and there is a long history of discrimination in lending in the US. We cannot fix the algorithms by feeding in better data, because there is no better data.
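
To make that mechanism concrete, here is a minimal sketch in Python using entirely synthetic, hypothetical data (nothing here comes from any real lender or from Broussard’s work). A model trained on historically biased approval decisions reproduces the disparity even when group membership is excluded as an input, because a correlated proxy feature stands in for it.

```python
# Minimal sketch, synthetic data only: a model trained on historically
# biased lending decisions reproduces the bias, even though "group"
# is never given to the model as a feature.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)             # 0 = majority, 1 = minority
income = rng.normal(50, 15, n)            # the only legitimate feature
proxy = group + rng.normal(0, 0.3, n)     # e.g. a zip-code-like proxy

# Historical labels: past approvals depended on income AND on group,
# i.e. the discrimination is baked into the training data itself.
approved = income + 10 * (group == 0) + rng.normal(0, 5, n) > 55

X = np.column_stack([income, proxy])      # group itself is excluded
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# The minority group's predicted approval rate comes out lower: the
# proxy lets the model recover, and perpetuate, the historical pattern.
```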

You argue that we need to be more choosy about the technology we allow into our lives and society. Should we just reject any AI-based technology that encodes bias?
AI is now in all our technologies. But we can demand that our technologies work well – for everyone – and we can make informed choices about whether or not to use them.

I’m excited about the distinction in the European Union’s proposed AI law that divides uses into high and low risk based on context. A low-risk use of facial recognition might be unlocking your phone: the stakes are low, because you have a passcode if it doesn’t work. But police use of facial recognition would be a high-risk use that should be regulated or, even better, not used at all, because it leads to wrongful arrests and is not very effective. It’s not the end of the world if we don’t use computers for everything. You cannot assume that a technological system is good just because it exists.

There is enthusiasm for using AI to diagnose diseases. But racial biases can become ingrained there too, including through unrepresentative datasets: an AI for detecting skin cancer, for example, will probably work much better on lighter skin, because lighter skin usually dominates the training data. Should we try to introduce “acceptable thresholds” for bias in medical algorithms, as some have suggested?
I don’t think the world is ready for that conversation. We are still at the stage of needing to raise awareness of racism in medicine. We need to step back and fix some things about society before we start freezing it into algorithms. Once a racist decision is formalized in code, it becomes difficult to see and difficult to eradicate.

You have been diagnosed with breast cancer and have had successful treatment. After your diagnosis, you experimented with running your own mammograms through an open-source cancer-detection AI and found that it did, in fact, pick up your breast cancer. It worked! So great news?
It was pretty neat to see the AI draw a red box around the area of the scan where my tumor was. But I learned from the experiment that diagnostic AI is a much blunter tool than I imagined, and that there are complicated trade-offs. For example, the developers have to make a choice about error rates: more false positives or more false negatives? They prefer false positives, because it’s considered worse to miss a cancer; but that also means that if you get a false positive, you are sent down the diagnostic pipeline, which can mean weeks of panic and invasive testing. Many people envision a streamlined AI future where machines replace doctors. That doesn’t sound appealing to me.
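
For illustration, here is a minimal Python sketch of that trade-off, using made-up scores rather than any real diagnostic model: moving the decision threshold converts missed cancers into false alarms, and vice versa.

```python
# Minimal sketch, hypothetical scores: how the decision threshold
# trades missed cancers (false negatives) against needless follow-ups
# (false positives) in a diagnostic classifier.
import numpy as np

rng = np.random.default_rng(1)
scores_cancer = rng.normal(0.7, 0.15, 100)    # scans with a tumor
scores_healthy = rng.normal(0.4, 0.15, 900)   # scans without one

for threshold in (0.3, 0.5, 0.7):
    miss_rate = (scores_cancer < threshold).mean()      # false negatives
    false_alarm = (scores_healthy >= threshold).mean()  # false positives
    print(f"threshold {threshold}: miss rate {miss_rate:.2f}, "
          f"false-alarm rate {false_alarm:.2f}")
# Lowering the threshold misses fewer cancers but flags far more
# healthy scans, sending more people down the diagnostic pipeline.
```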

Any hope that we can improve our algorithms?
I’m optimistic about the potential of algorithmic auditing – the process of looking at an algorithm’s inputs, outputs and code to assess it for bias. I did some work on this. The goal is to focus on algorithms as they are used in specific contexts and address the concerns of all stakeholders, including members of an affected community.
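
As a toy illustration of one step such an audit might take (the data and the metric here are hypothetical, not Broussard’s own method), one can compare a system’s positive-decision rates across groups and flag a large gap for scrutiny:

```python
# Minimal sketch, hypothetical data: one simple output-side audit
# check - compare a system's positive-decision rates across groups.
import numpy as np

def decision_rate_gap(decisions: np.ndarray, groups: np.ndarray) -> dict:
    """Positive-decision rate per group, plus the max-min gap."""
    rates = {str(g): float(decisions[groups == g].mean())
             for g in np.unique(groups)}
    rates["gap"] = max(rates.values()) - min(rates.values())
    return rates

# Example: audit eight recorded decisions against subjects' groups.
decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(decision_rate_gap(decisions, groups))
# {'a': 0.75, 'b': 0.25, 'gap': 0.5} -> a gap this large would flag
# the system for closer scrutiny in its specific context of use.
```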

AI chatbots are all the rage. But the technology is also riddled with bias. Guardrails added to OpenAI’s ChatGPT were easy to get around. Where did we go wrong?
While more needs to be done, I appreciate the guardrails. Such guardrails haven’t existed in the past, so this is progress. But we also need to stop being surprised when AI messes up in very predictable ways. The issues we see with ChatGPT were anticipated and described by AI ethics researchers including Timnit Gebru [who was forced out of Google in late 2020]. We must recognize that this technology is not magic. It’s put together by humans, it has problems and it falls apart.

OpenAI co-founder Sam Altman recently promoted AI doctors as a way to solve the healthcare crisis. He seemed to envision a two-tier healthcare system: one for the wealthy, in which they enjoy consultations with human doctors, and one for the rest of us, in which we see an AI. Is this where things are heading, and does it concern you?
AI in medicine doesn’t work very well, so if some really rich person says: “Hey, you can have AI to do your healthcare and we’ll keep the doctors for ourselves,” that seems like a problem to me, and not something that leads us to a better world. Plus, these algorithms are coming for everyone anyway, so we might as well address the issues.
