The Biases Within

By Tamy Phung

At the end of Coded Bias, the documentary notes that "There is still no U.S. federal regulation of algorithms." People see the way surveillance states such as China operate and feel relieved that they don't live in such a country. However, as the documentary points out, we are all being "scored" by algorithms in some form or another, even in the United States; the real difference is that China is transparent about it. In reality, many people do not realize just how much control emerging technologies have over their daily lives, largely because there are simply no regulations governing them.

Coded Bias investigates the biases of machine learning algorithms and how they discriminate against certain groups of people. Such algorithms are biased not because their designers are racist or the algorithms themselves are flawed, but because the data on which the algorithms are trained is biased. A quote in the documentary from Meredith Broussard, the author of Artificial Unintelligence, encapsulates the reason for algorithmic bias: "The machine is simply replicating the world as it exists, and they're not making decisions that are ethical. They're only making decisions that are mathematical." Therefore, as long as there are biases in the world, there will be biases in algorithms.

The fact is, there will always be biases, intended or not, and by allowing the use of machine learning algorithms without proper safeguards and standards, we give those who control these technologies the power to control those who do not. This circles back to the institutionalized racism that we continue to fight against today. In Coded Bias, Virginia Eubanks, the author of Automating Inequality, quotes the line, "the future is already here, it's just not evenly distributed." She takes this to mean that the most invasive, surveillance-heavy technologies are tested in poor communities, where there is little respect for people's rights, and only once a technology proves successful is it rolled out to other communities. For instance, the landlord of the Atlantic Plaza Towers in Brooklyn, New York, adopted a facial recognition system with heavy surveillance, but only in this particular building in Brownsville, a predominantly Black and brown neighborhood. Moreover, facial recognition algorithms are not alone in this: algorithms that determine college admission or credit eligibility can be just as biased, if not more so.

It is these biases, inaccuracies, and discriminatory repercussions that make such algorithms unfit for public use, and hence in need of ethical reevaluation. In the documentary, the facial recognition system deployed on the streets of London was highly inaccurate, yet the police implemented it without proper regulation or even the public's consent. According to Coded Bias, "Over 117 million people in the U.S. have their face in a facial recognition network that can be searched by police, unwarranted, using algorithms that haven't been audited for accuracy." What's more frightening is that even the companies behind these algorithms often cannot explain how they work, which makes regulating them that much more difficult. Although facial recognition and other algorithms used for security can benefit the public, the technology is hardly advanced, regulated, or accurate enough to be used at such a large scale.
