The Boston City Council made a praiseworthy decision to protect minorities from discrimination when it banned police use of facial recognition technology (FRT). On the surface, the technology sounds promising, and some have even claimed it could streamline criminal investigations and make them more equitable. In reality, that is not what happens. Because computers are incapable of independent thought, they cannot harbor racial bias on their own, which would seem to give them the advantage of complete objectivity. The problem is that the software is written by people, and people have biases. FRT is far less accurate at identifying the faces of racial minorities, and deploying such faulty technology would be detrimental to the best interests of the state and its citizens.
The inaccuracy of FRT algorithms is well documented. The National Institute of Standards and Technology (NIST) studied 189 algorithms from 99 separate developers and found that they misidentified African-American and Asian faces up to 100 times more often than white faces, with African-American women misidentified most often of all. According to Patrick Grother, an NIST researcher and the study’s primary author, “While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied.”
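To make the phrase “demographic differentials” concrete: evaluations like NIST’s compare false match rates (the fraction of different-person comparisons an algorithm wrongly accepts) across demographic groups. The sketch below shows how such a ratio is computed; the group labels and counts are hypothetical illustrations, not NIST’s actual data.

```python
# Minimal sketch of a demographic-differential calculation.
# All numbers below are hypothetical, for illustration only.

def false_match_rate(false_matches: int, impostor_comparisons: int) -> float:
    """Fraction of impostor (different-person) comparisons wrongly accepted."""
    return false_matches / impostor_comparisons

# Hypothetical per-group tallies from a face-matching benchmark.
results = {
    "group_a": {"false_matches": 5,   "impostor_comparisons": 1_000_000},
    "group_b": {"false_matches": 400, "impostor_comparisons": 1_000_000},
}

fmr = {
    group: false_match_rate(r["false_matches"], r["impostor_comparisons"])
    for group, r in results.items()
}

# The "differential" is the ratio between the groups' error rates.
differential = fmr["group_b"] / fmr["group_a"]
print(f"FMR group_a: {fmr['group_a']:.6f}")
print(f"FMR group_b: {fmr['group_b']:.6f}")
print(f"Differential: {differential:.0f}x")
```

Even when both rates look tiny in absolute terms, the ratio between them is what determines how much more often one group is wrongly flagged, and that ratio is what studies like NIST’s report.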
Such inaccuracies are particularly dangerous when this technology is used by law enforcement. Just ask Robert Julian-Borchak Williams, a black Detroit man who was wrongfully arrested in January 2020 in front of his wife and children (ages 2 and 5) because an FRT program misidentified him as a thief. When an officer showed him a security-camera photo of the actual thief (who was also black), Williams told the officers, “I hope you don’t think all black people look alike.” He was held in jail for 30 hours and released only after posting bail the next day. The ACLU of Michigan filed a formal complaint against the Detroit Police Department and asked the police to stop using FRT.
Boston’s City Council wanted to ensure that such incidents would not happen in our city. Many also argue that even when it works as intended, the technology poses a grave threat to citizens’ privacy: it can be used to track people at a distance without their knowledge or consent. Mass surveillance of this kind would severely infringe on every citizen’s freedom of movement and speech. United States Immigration and Customs Enforcement officials have already used the technology for years to analyze the driver’s licenses of millions of people without their knowledge.
It is also important to note that some governments are all too willing to misuse this technology for their own nefarious purposes. China has used FRT to surveil and control the Uyghurs, an ethnic minority group, 12 million of whom live in Xinjiang. Uyghurs are forced to comply with strict surveillance that limits their freedom, to provide DNA samples to the government, and to download a tracking app onto their cellphones. Many worry that minority groups in the United States could likewise be targeted disproportionately with this technology. Senator Ed Markey, who co-introduced the Facial Recognition and Biometric Technology Moratorium Act, said, “Facial recognition technology doesn’t just pose a grave threat to our privacy; it physically endangers Black Americans and other minority populations in our country.”
Technology can be misused, and a lack of transparency is particularly dangerous for communities of color. There is already intense debate over law enforcement’s disproportionate targeting of minority communities. Under these circumstances, banning the use of FRT is the only logical choice. Until the algorithms are accurate and policing is reformed, these technologies do not belong in our society.