Daily Star Books
Review: Nonfiction

Tech bias: not a glitch, but a structural problem

Review of ‘More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech’ (The MIT Press, 2023) by Meredith Broussard
Design: DS Books

The first time I read about the idea of technology being racist, I found the concept quite absurd. That is perhaps understandable, because I grew up in a world where technology seemed like a black box containing the answer to everything from racism to ableism, and even misogyny. From hearing aids to cutting-edge crime-fighting and policing technologies, technological advancement knew no bounds, and AI, surely, was the future. But of course, with time, cracks started to show in this magical world created by tech companies, largely led by white men. 

For example, news started pouring in about the discriminatory nature of facial recognition technologies and the sensors used in various touch recognition software. Personally, I remained an optimist even when the cracks showed—I reasoned that with further technological advancement, we could cement these cracks. Like me, a lot of people might dismiss these recurring anecdotes of discrimination as mere glitches. But in her book More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech, Professor Meredith Broussard argues against this very logic. The technological bias that keeps rearing its head when you look deeper is no glitch, she says, because "a glitch means it's a temporary blip, something unexpected but inconsequential. A glitch can be fixed. The biases embedded in technology are more than mere glitches; they're baked in from the beginning. They are structural biases, and they can't be addressed with a quick code update." In the book, she addresses these issues head-on and sheds light on how the biases present in the real world can manifest inside our computational systems as well. 

This makes sense because technology is not born out of a vacuum; it is born from humans. And humans have biases. No matter how hard one tries to rid oneself of biases, unlearning them is a continuous process that requires ongoing work. This is something Professor Broussard emphasises in her book as well. When it comes to ridding the world of racist technologies in particular, simply being non-racist is not enough; one has to constantly engage in thoughtful self-assessment of one's beliefs regarding race, and actively take measures to eradicate racist behaviours. 

In addition to authoring multiple books, Meredith Broussard, a data journalist, is an associate professor at New York University's Arthur L. Carter Journalism Institute. She also serves as the research director of the NYU Alliance for Public Interest Technology. Broussard's experience in the tech industry gives her a deep understanding of how bias can manifest in technology, and she has worked on a number of projects exposing the ways in which algorithms can be biased against certain groups of people. She also appeared in the 2020 Netflix documentary Coded Bias, which features researchers and advocates investigating how algorithms encode and perpetuate bias. 

With statistics backing her up, Broussard does a stellar job of portraying this bias through the stories of individuals who have faced such discrimination. The book opens with the story of Robert Julian-Borchak Williams, who was wrongfully identified by a police facial recognition system and taken into custody; it later emerged that the technology was poor at identifying individuals with darker skin tones. From there, she relates the story of high school student Isabel Castañeda, who fell victim to unfair grading in her International Baccalaureate (IB) exam because of a discriminatory grading algorithm. And to shed light on ability bias in tech, Professor Broussard recounts the story of Richard Dahan, an Apple employee who is part of the deaf community. 

Using these individual examples, she explores how technology is often trained to recognise only lighter skin tones, how mortgage-approval algorithms promote discriminatory lending practices, how grading systems discriminate based on socio-economic factors, and how gender bias in databases makes it difficult for minority groups to identify themselves—leading not only to identity crises but also to wrong diagnoses in cases where AI is used for medical diagnosis. 

Nowhere in the book does she suggest that these discriminations arise from malice of any sort; rather, they stem from ignorance and from the belief that technology is the answer to all social problems. This belief system is termed "technochauvinism", and throughout the book she defies the idea that computers are better than humans. She asserts, "Technochauvinism is usually accompanied by equally bogus notions like 'algorithms are unbiased' or 'computers make neutral decisions because their decisions are based on math.' Computers are excellent at doing math, yes, but time and time again, we've seen algorithmic systems fail at making social decisions." 

Unfortunately, when it comes to presenting solutions for tackling tech bias, the book falls short. The two main solutions she offers are reducing our dependence on technology and introducing algorithmic auditing. Under algorithmic auditing, she discusses creating public interest technology: holding individuals accountable for the pervasive technologies they create, and adopting stricter and more diverse laws and policies, is certainly the way to go. She points to the financial and insurance sector as an example of how auditing has kept everything in check. But this stands in contrast with her earlier arguments about human bias—we know that such bias persists even in the complex auditing systems of the financial and insurance world. 

In addition, it's worth noting that the book and the stories it presents are predominantly centred around the United States. While this isn't inherently an issue, the argument she makes concerning discrimination and technology could have been further enriched if she had incorporated narratives from diverse cultures as well.

The book cites several big names in the field, such as Ruha Benjamin, Safiya Umoja Noble, and Cathy O'Neil. It serves as a great stepping stone for anyone interested in understanding technological bias, while providing recommendations for further study for those who want to delve deeper. But when it comes to bringing a fresh perspective to the story, it doesn't quite hit the mark. 

Tasnim Odrika is a biochemist by day and a writer by night. Reach her at odrika02@gmail.com.
