# The Impact of Algorithms on Society: A Critical Examination
## Chapter 1: The Case of Eric Loomis
In 2013, Eric Loomis was stopped by police while driving a car that had been used in a shooting he was not involved in. He admitted to attempting to flee an officer and to operating the vehicle without the owner's consent. Although those offenses would not ordinarily warrant a lengthy prison term, he received an 11-year sentence: six years of incarceration followed by five years of extended supervision.
This harsh penalty was largely influenced by the COMPAS algorithm, which categorized him as a high risk for reoffending. Consequently, Loomis was denied probation and faced a more severe sentence than what his offenses would ordinarily entail.
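To make the mechanism concrete, here is a minimal sketch of how a COMPAS-style decile score gets collapsed into a risk label. COMPAS reports scores from 1 to 10; the low/medium/high bands below follow commonly reported cut-offs, and everything else is hypothetical, since the real model's inputs and weights are proprietary.

```python
def risk_category(decile_score):
    """Map a COMPAS-style decile score (1-10) to a coarse risk label.

    The cut-offs follow commonly reported bands; the actual scoring
    model behind the number is a proprietary black box.
    """
    if not 1 <= decile_score <= 10:
        raise ValueError("decile score must be between 1 and 10")
    if decile_score <= 4:
        return "low"
    if decile_score <= 7:
        return "medium"
    return "high"

# A judge who treats the label as authoritative turns an opaque
# number into years of incarceration:
print(risk_category(9))  # 'high' -- the kind of label that sank Loomis
```

The point of the sketch is how much information is discarded: a defendant sees only the label, not the features or weights that produced it.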
What does it mean for society to entrust an individual’s destiny to an algorithm? The reliance on machine-generated recommendations, particularly when they appear unjust or inhumane, raises significant ethical concerns. The general populace remains largely uninformed about how algorithms like COMPAS function, as their creators often refuse to share specific details and are not legally obligated to do so.
As algorithms continue to infiltrate various aspects of our daily lives—from finding places to eat to receiving tailored advertisements—questions about their influence and reliability grow. Social media platforms rely on sophisticated algorithms to curate our feeds, while services like Netflix and Amazon provide recommendations based on user behavior. Even dating applications such as Tinder utilize algorithms to propose potential matches.
The implications of algorithms extend to critical domains such as healthcare and the justice system, where algorithmic assessments can lead to severe outcomes, including lengthier prison sentences. Data utilized by these algorithms is frequently gathered from users and sold to third-party data brokers, heightening the risk of data breaches and targeted scams.
While algorithms are not inherently harmful, there is a pressing need for greater transparency and regulatory oversight. Engineers develop and modify algorithms within a “black box” framework, often with limited insight into their implications. The unpredictability and confidentiality surrounding these algorithms present considerable risks, as companies frequently hide behind the facade of impartiality when challenged.
## Section 1.1: Are Algorithms Truly Objective?
Society often regards algorithms as bastions of neutrality, assuming they operate free from human biases. However, the truth tells a different story. Algorithms are not purely objective; they reflect the biases of their creators. For instance, a 2019 algorithm employed in U.S. hospitals exhibited discriminatory tendencies against Black patients, suggesting they required less medical care based on historical spending data, despite race not being a direct factor.
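This kind of proxy bias can be sketched in a few lines. The toy model below is purely illustrative (the numbers and the `access_factor` variable are invented, not drawn from the 2019 study): it shows how predicting healthcare *spending* as a stand-in for medical *need* penalizes a patient who spends less for the same level of illness, even though race never appears as an input.

```python
def spending_based_score(illness_burden, access_factor):
    """Predicted future spending: underlying need scaled by how
    easily the patient can actually access (and be billed for) care."""
    return illness_burden * access_factor

# Two patients with identical medical need -- the access_factor values
# are hypothetical, standing in for historical barriers to care.
patient_a = {"illness_burden": 8.0, "access_factor": 1.0}  # full access
patient_b = {"illness_burden": 8.0, "access_factor": 0.6}  # under-served

score_a = spending_based_score(**patient_a)  # 8.0
score_b = spending_based_score(**patient_b)  # 4.8

# A care program that enrolls only patients above a spending threshold
# excludes patient B despite equal need: the proxy quietly reproduces
# the historical disparity.
threshold = 6.0
print(score_a > threshold)  # True: enrolled
print(score_b > threshold)  # False: overlooked
```

The lesson is that removing a sensitive attribute from the inputs does not remove bias when another feature correlates with it.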
The biases present in algorithms stem from human design and can have profound effects. Tech firms often create algorithms with the primary aim of maximizing advertising revenue by prolonging user engagement. Platforms like YouTube prioritize metrics such as click-through rates and viewing duration, leading to the proliferation of clickbait and even extremist content. Furthermore, Facebook's algorithm perpetuates existing beliefs, fostering echo chambers.
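The engagement incentive described above can also be made concrete. This is a deliberately simplified sketch, not any platform's actual ranking function: the titles and numbers are invented, and the objective is pure expected watch time per impression, with no term for accuracy or user well-being.

```python
videos = [
    {"title": "Measured news recap",   "ctr": 0.02, "avg_watch_min": 4.0},
    {"title": "SHOCKING conspiracy!!", "ctr": 0.12, "avg_watch_min": 9.0},
    {"title": "In-depth explainer",    "ctr": 0.03, "avg_watch_min": 7.0},
]

def engagement_score(video):
    # Expected minutes of viewing per impression -- an objective that
    # rewards sensational thumbnails and outrage-driven content.
    return video["ctr"] * video["avg_watch_min"]

ranked = sorted(videos, key=engagement_score, reverse=True)
print(ranked[0]["title"])  # the clickbait item ranks first
```

Nothing in the objective distinguishes a careful explainer from a conspiracy video; if outrage holds attention longer, outrage wins the ranking.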
While algorithms are not inherently malevolent and can yield positive outcomes when applied correctly—such as predicting dementia using early-life data or assisting in medical diagnostics—human oversight remains essential. In the case of Eric Loomis, the judge's failure to critically assess the algorithm’s recommendation resulted in an unjust verdict. Though tech companies are starting to confront these challenges with algorithm modifications, ethical considerations and human intervention are vital.
## Section 1.2: The Path Forward
To enhance the role of algorithms in society, it is crucial to:
- Prioritize human welfare over profit.
- Challenge the assumption that algorithms are infallible and rational.
- Demand transparency and accountability in the design of algorithms.
Algorithmic systems should be designed to serve human interests rather than exploit our vulnerabilities. It is imperative that we engage in critical evaluation and reform to ensure algorithms positively impact society.
As Tristan Harris, co-founder of the Center for Humane Technology, argues, we tend to watch for the distant moment when technology will surpass human strengths, while overlooking that it has already learned to exploit human weaknesses. That turning point is happening now, and its costs are visible in diminished attention spans, strained relationships, and fractured communities that chip away at our humanity.
The first video titled "Algorithms are Destroying Society" delves into the ramifications of algorithmic decision-making and its effect on individuals and communities.
The second video, "This Is Why Algorithms Are Destroying Society," further explores how these systems impact our daily lives and the necessity for reform.