As the digital landscape expands, tech platforms have done a poor job of addressing skin-color and gender biases in their algorithms.
Perhaps the movie The Matrix was right. It told the story of humans who relied so heavily on artificial intelligence that their AI programs turned against them. In the case of race and gender, it already has. The more troubling issue is that top tech companies are doing little to remedy a growing problem. This is all the more concerning as the world rapidly moves from a digital age to the Age of AI, especially after the global pandemic sent us all online.
Just eight years ago, law enforcement's use of AI facial recognition programs loomed as a serious question when the software could not identify who bombed the 2013 Boston Marathon. Instead, a victim identified Tamerlan Tsarnaev, a Soviet-born immigrant from the Caucasus Mountains who, along with his brother, Dzhokhar Tsarnaev, was linked to the terrorist attack. While tech companies have advanced AI technology considerably since then, some critical particulars have been left out.
For years, darker-skinned tech users have pointed out racial biases in artificial intelligence. A 2018 study by MIT Media Lab's Civic Media found that facial-analysis software showed an "error rate of 0.8 percent for light-skinned men, 34.7 percent for dark-skinned women."
The lead of that team, Joy Buolamwini, further showed the disparities in the Netflix documentary, Coded Bias. Buolamwini, who has since left MIT Media Lab and founded the Algorithmic Justice League, tackles the issue of how AI inaccurately identifies browner-hued people. Not only does she point out AI's inherent flaws, but she also details the lack of transparency in how the technology is used and regulated.
“So it’s not just the question of having accurate systems,” Buolamwini told Business Insider when discussing her challenge to Amazon to be more transparent about the data it has collected using AI. “How these systems are used is also important.”
Buolamwini tweeted about Robert Williams, a Detroit man who was arrested after being wrongly identified by law enforcement's facial recognition technology.
“As a black man accosted by police the outcome could have been fatal & still the consequences of face misidentification are indelible [and] irreversible,” said Buolamwini. What Buolamwini's work shows is that AI has advanced only for some people.
In 2017, an FBI official told Congress that the agency used Idemia, a scan recognition software with a history of not seeing "all faces clearly." In particular, it did not consistently scan darker-skinned faces, and was more likely to misidentify the facial images of Black women. How did the programs read Black faces? They didn't. The programs blanked out the faces.
Added to making Black and brown faces invisible is the misgendering of digital faces. "We found that facial analysis services performed consistently worse on transgender individuals, and were universally unable to classify non-binary genders," said lead author Morgan Klaus Scheuerman in a University of Colorado Boulder study on facial recognition. "While there are many different types of people out there, these systems have an extremely limited view of what gender looks like."
Stefan Maraj, who writes for Blacks in Technology, said that misidentifications in AI caused Amazon's algorithms to report that more than half of the world was pregnant, due to consumers' increased purchases of sanitizers and hand wash. Maraj says that these miscalculations have "become frequent features of the industry press." Even something as small as creating algorithms around ice cream can be difficult for a tech program that cannot understand the concept of weather.
Both Buolamwini and Maraj call for more transparency in how the technology is used and regulated. "AI bias is certainly concerning, but only if we give AIs too much autonomy," said Maraj, who proposes that "as we automate our systems, these mistaken interventions become embedded in our algorithms, which is why it's so important that we design our systems to be auditable, explainable and transparent."
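The kind of audit Maraj describes can be sketched in a few lines of code: tally a model's mistakes separately for each demographic group and compare the resulting error rates. The function below is a hypothetical illustration, and the sample counts are invented only to echo the 0.8 percent versus 34.7 percent disparity the MIT study reported; they are not real audit data.

```python
# Minimal sketch of a demographic error-rate audit for a face-analysis model.
# The group labels and counts below are hypothetical, chosen only to mirror
# the 0.8% / 34.7% disparity reported in the 2018 MIT Media Lab study.
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns each group's share of misclassified examples."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical audit set: 1,000 faces per group.
audit = (
    [("lighter-skinned men", "male", "male")] * 992
    + [("lighter-skinned men", "female", "male")] * 8      # 8 errors -> 0.8%
    + [("darker-skinned women", "female", "female")] * 653
    + [("darker-skinned women", "male", "female")] * 347   # 347 errors -> 34.7%
)
rates = error_rates_by_group(audit)
```

An auditable system, in this view, is simply one where the data needed to run a check like this is available to regulators and the public at all.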
On the other hand, companies like Amazon and Facebook have not shared the data they collect on the public. The 2018 data abuse scandal at Facebook and the multiple lawsuits alleging violations of privacy plague one of the largest tech companies. Last year, Facebook's new tracking tool, which siphons information to private companies, drew criticism. Privacy International, a digital watchdog, wrote in a statement printed by Wired, "Without meaningful information about what data is collected and shared, and what are the ways for the user to opt-out from such collection, Off-Facebook activity is just another incomplete glimpse into Facebook's opaque practices when it comes to tracking users and consolidating their profiles."
We’re raising money for Ark Republic and Black Farmers Index. We need your help to keep the wheels churning and the stories flowing. Please donate to organizations committed to keeping you informed with rich, robust stories and great connections to empowered people.