Facial Recognition Bias

Facial recognition technology is racially biased and should be discontinued; until that happens, it needs to be regulated.
IBM, Amazon, and Microsoft have announced partial and temporary halts to their facial recognition programs (in Amazon's and Microsoft's cases, for police use only, with no mention of wider uses such as advertising). Will it stick? Who knows, but it's a start.
Two days ago, IBM announced in a letter to Congress that it will no longer offer "general purpose" facial recognition or analysis software, and detailed policy proposals to "advance racial equality in our nation." Yesterday, partially following suit, Amazon announced (via a notably brief post) a one-year ban on police use of Rekognition, its own facial recognition software. A day later, Microsoft emphasized that it doesn't and won't sell the tech to police departments "in the United States"... until there's a national law regulating it. Fun fact: Microsoft criticized the technology's racial biases several years ago but still invests in it internationally.
These are just three of many companies that've been developing, promoting, and selling the technology to law enforcement agencies worldwide for years.
This technology has long been shown to be racially biased, disproportionately misidentifying people with darker skin, especially women. And it's not used just by law enforcement: it's increasingly used in automated decisions like hiring and lending. This is one of many issues with A.I.: algorithmic bias. People of color, especially Black people, are disproportionately affected. One way to make this concrete is to compare error rates across demographic groups rather than looking at overall accuracy, as in the sketch below.
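Here's a minimal Python sketch of that kind of audit. The function name, the record format, and the toy data are all invented for illustration; real bias audits (like the ACLU and MIT studies discussed next) use much richer data and metrics:

    from collections import defaultdict

    def error_rates_by_group(records):
        # records: iterable of (group, correct) pairs, where correct is True
        # when the system identified the person accurately. Hypothetical format.
        totals = defaultdict(int)
        errors = defaultdict(int)
        for group, correct in records:
            totals[group] += 1
            if not correct:
                errors[group] += 1
        # Misidentification rate per group; a fair system would show
        # roughly equal rates across groups.
        return {group: errors[group] / totals[group] for group in totals}

    # Toy data, invented for illustration only -- not real audit results.
    sample = [
        ("group A", True), ("group A", True), ("group A", False),
        ("group B", False), ("group B", False), ("group B", True),
    ]
    print(error_rates_by_group(sample))

The point of structuring an audit this way is that a system can look accurate on average while concentrating nearly all of its failures on one group.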
In 2018, the ACLU found that Amazon's technology falsely matched 28 members of Congress with mugshot photos, and Black members were disproportionately misidentified. MIT found that "gender was misidentified in less than 1% of lighter-skinned males; in up to 7% of lighter-skinned females; up to 12% of darker-skinned males; and up to 35% in darker-skinned females." Internationally, the Chinese government uses facial and gait recognition to track and detain the Uyghur Muslim population, and to surveil the rest of the population as well. These are just a few examples of how dangerous this technology can be.
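Those MIT numbers are worth working through. Treating the quoted figures as upper bounds, a quick back-of-the-envelope comparison (a sketch using only the rates quoted above, nothing else from the study) shows the scale of the disparity:

    # Upper-bound error rates as quoted from the MIT study above.
    rates = {
        "lighter-skinned males": 0.01,
        "lighter-skinned females": 0.07,
        "darker-skinned males": 0.12,
        "darker-skinned females": 0.35,
    }

    baseline = rates["lighter-skinned males"]
    for group, rate in rates.items():
        # How many times worse each group fares than the best-served group.
        print(f"{group}: up to {rate:.0%} misidentified, "
              f"{rate / baseline:.0f}x the lighter-skinned male rate")

A system that can be wrong up to 35 times more often for one group than another isn't just inaccurate; it concentrates its failures on the people already most exposed to over-policing.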
This technology needs to be discontinued or, at the very least, regulated.
Sources: The Intercept, NPR, the Washington Post.