In the digital age, the promise of big data and automation has transformed many facets of society, offering solutions to age-old problems in law enforcement, welfare, and public services. However, two groundbreaking books, Sarah Brayne’s Predict and Surveil and Virginia Eubanks’ Automating Inequality, sound the alarm about the dangers of these artificial intelligence technologies. Both authors provide insightful critiques of how algorithmic systems, lauded for their efficiency and objectivity, ultimately deepen social inequalities, target marginalized groups, and fail to deliver on their promises of fairness. This Doctors of Courage review delves into both works, highlighting their shared critiques of algorithmic methodologies and the inherent flaws in the design and application of such unchecked government artificial intelligence systems.
At the heart of both books is the claim that artificial intelligence algorithms, while marketed as neutral and objective, are far from impartial. Instead, they reflect and amplify pre-existing social biases. In Predict and Surveil, Brayne offers an unprecedented inside look at how predictive policing works within the Los Angeles Police Department (LAPD). Using ethnographic methods, including ride-alongs and interviews, she reveals how predictive algorithms are used to forecast crime, identify potential suspects, and guide patrol deployment. Her central theme is that these algorithms intensify surveillance, transforming law enforcement from reactive policing into proactive, and at times preemptive, action based on statistical patterns.
In Automating Inequality, Eubanks focuses on how automated systems have infiltrated public welfare, housing, and child protection services. Through three case studies, covering Indiana’s welfare system, Los Angeles’ homeless services, and Allegheny County’s child welfare algorithm, she shows how these systems are not neutral problem-solvers but tools of social control aimed at managing, disciplining, and punishing the poor. Automated decision-making, she argues, perpetuates the same moral judgments and systemic discrimination that have historically governed the treatment of marginalized communities.
Both Brayne and Eubanks expose a common flaw in the application of AI algorithms: their opacity. In Predict and Surveil, Brayne is deeply critical of how law enforcement agencies have come to rely on private tech companies such as Palantir, which build and maintain the platforms that underpin predictive policing. This reliance on proprietary algorithms means that the processes by which decisions are made are hidden from public view, creating a “black box” in which the criteria for suspicion or intervention are inscrutable, even to the officers who use them. Brayne’s ethnographic research reveals the danger of this opacity: when officers cannot understand how an algorithm reaches its conclusions, they cannot question or challenge its validity. The algorithm becomes an unquestioned authority, embedding biases into everyday police work.
Eubanks offers a similarly scathing critique of algorithms in welfare systems. In her case study of Indiana’s automated welfare system, the software denied more than a million applications for benefits within three years, many of them unjustified. The system flagged minor paperwork errors or bureaucratic missteps as “failure to cooperate,” with life-threatening consequences for individuals who depended on those benefits. Eubanks rightly points out that this is not just a technical flaw but a profound ethical failure. The opacity of these algorithms prevents individuals from contesting decisions or understanding why they were made, stripping people of their right to due process.
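To see how mechanical this failure mode is, consider a minimal sketch, written for this review rather than drawn from the book, of an eligibility rule that collapses every missing document into a single denial code:

```python
# Hypothetical sketch (not Indiana's actual system) of the brittle rule
# Eubanks describes: ANY missing item is collapsed into a single code,
# "failure to cooperate," which triggers automatic denial with no
# human follow-up.
def evaluate(application):
    missing = [doc for doc, received in application["documents"].items()
               if not received]
    if missing:
        # The rule cannot distinguish a lost fax from a refusal to cooperate.
        return "DENIED", "failure to cooperate", missing
    return "APPROVED", None, []

print(evaluate({"documents": {"income_verification": True,
                              "signature_page": False}}))
# -> ('DENIED', 'failure to cooperate', ['signature_page'])
```

A rule this rigid converts clerical noise into formal findings of noncooperation, which is precisely the ethical failure Eubanks identifies.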
In Predict and Surveil, Brayne demonstrates how the use of historical crime data leads to a feedback loop where certain communities, often poor and predominantly non-white, are subject to heightened surveillance and policing. Since these areas are more heavily policed, more crimes are detected and fed back into the system, further justifying increased police presence in those neighborhoods. This cycle creates a self-fulfilling prophecy where the AI algorithm confirms the biases encoded in its data, thus reproducing racial and socioeconomic disparities in policing.
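Neither book presents code, but the loop Brayne describes can be made concrete with a toy simulation. In the sketch below, all numbers are hypothetical: two neighborhoods have identical true crime rates, yet a small bias in the historical record keeps one flagged as the “hotspot” indefinitely:

```python
# Toy simulation of the feedback loop described above. All numbers are
# hypothetical. Neighborhoods A and B have IDENTICAL true crime rates;
# A merely starts with slightly more recorded crime.
TRUE_RATE = 100            # actual offenses per period in EACH neighborhood
DETECT_PER_PATROL = 0.004  # chance an offense is detected, per patrol unit
recorded = {"A": 55.0, "B": 45.0}   # biased historical record

for period in range(1, 11):
    hotspot = max(recorded, key=recorded.get)   # flag the "high-crime" area
    patrols = {n: (60 if n == hotspot else 40) for n in recorded}
    for n in recorded:
        # More patrols -> more detections -> more recorded crime.
        recorded[n] += TRUE_RATE * DETECT_PER_PATROL * patrols[n]
    share = recorded[hotspot] / sum(recorded.values())
    print(f"period {period}: hotspot={hotspot}, share={share:.0%}")
```

Neighborhood A is flagged every single period, and its share of recorded crime grows even though the underlying behavior in the two neighborhoods never differs: the data confirms the bias that produced it.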
Eubanks extends this critique to the realm of welfare services. She argues that algorithms, rather than alleviating poverty, institutionalize it, entrenching a digital caste system. By targeting the poorest and most vulnerable populations with hyper-vigilant surveillance, automated systems become tools of punishment rather than support. Eubanks’ case study of the Los Angeles homeless system shows how an algorithm designed to prioritize housing resources fails the very people it is supposed to help. The system uses data points that do not necessarily reflect individual need but instead mirror societal assumptions about who is “deserving” of assistance. Once again, the algorithm reinforces existing prejudices, resulting in a “digital poorhouse” where poverty is managed but never eradicated.
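A hedged sketch can illustrate how such proxies distort a priority score. The features and weights below are invented for illustration and are not the actual Los Angeles tool; the point is only that a score built from institutional-contact data ranks people by how visible they are to agencies, not by how much they need help:

```python
# Invented weights and features, purely illustrative (this is NOT the
# actual Los Angeles prioritization tool). A score built from
# institutional-contact data ranks people by visibility to agencies,
# not by need.
def priority_score(record):
    return (3 * record["er_visits"]          # heavy service use scores high
            + 2 * record["shelter_stays"]
            - 4 * record["arrests"])         # jail contact scores low, even
                                             # though it often tracks need

applicants = [
    {"name": "doubled-up family",    "er_visits": 0, "shelter_stays": 1, "arrests": 0},
    {"name": "chronic shelter user", "er_visits": 6, "shelter_stays": 9, "arrests": 0},
    {"name": "street homeless",      "er_visits": 5, "shelter_stays": 2, "arrests": 7},
]
for a in sorted(applicants, key=priority_score, reverse=True):
    print(a["name"], priority_score(a))
```

In this toy ranking, the person with the most street exposure scores lowest because arrests count against them, while the family invisible to agency databases barely registers at all.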
Perhaps the most damning critique in both books is that predictive algorithms, in their current form, erode the very notion of justice. Both Brayne and Eubanks argue that by replacing human discretion with algorithmic decision-making, we risk losing the ethical and moral deliberation that should underpin decisions in law enforcement and welfare services. One of the most powerful arguments made by both authors is that artificial intelligence algorithms are not neutral; they are inherently political. Both Predict and Surveil and Automating Inequality reveal how the data that feeds these systems reflects pre-existing social inequalities, which the algorithms then perpetuate and exacerbate.
In Predict and Surveil, Brayne points out that algorithms are increasingly used to predict not only where crimes might occur but who might commit them, effectively criminalizing individuals based on statistical probabilities rather than actual actions. This shift from presumption of innocence to prediction of guilt undermines fundamental legal principles and civil liberties. The book exposes chilling examples of how this can go wrong: individuals flagged as suspects or placed under surveillance without any evidence of wrongdoing, simply because they fit a statistical profile.
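Brayne documents point-based “chronic offender” scoring inside the LAPD, in which routine police contacts themselves add points. The sketch below uses invented point values, but it captures the self-reinforcing structure she criticizes: being stopped raises the score that justifies stopping you again:

```python
# Sketch of a point-based "chronic offender" score of the kind Brayne
# documents inside the LAPD; the point values here are invented.
# Note the self-reinforcing term: every police CONTACT adds a point,
# so surveillance itself raises the score that justifies more surveillance.
def offender_score(person):
    score = 0
    score += 5 * person.get("violent_arrests", 0)
    score += 5 if person.get("on_parole") else 0
    score += 1 * person.get("field_interviews", 0)   # mere stops count
    return score

subject = {"violent_arrests": 0, "on_parole": False, "field_interviews": 12}
print(offender_score(subject))   # 12 points without any offense at all
```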
Eubanks’ critique in Automating Inequality is similarly potent. She argues that by automating welfare systems, we reduce individuals to data points, stripping away their humanity and reducing their lives to a series of algorithmic decisions. Welfare systems that once relied on social workers to evaluate need and distribute resources now depend on impersonal algorithms that make life-and-death decisions without context or compassion. In one harrowing example, Eubanks recounts the story of a family whose Medicaid benefits were cut off over a minor paperwork error, even though their child depended on Medicaid for life-saving treatment.
What emerges from both books is the harsh reality that artificial intelligence algorithms are not the neutral, efficient tools they are made out to be. Instead, they create a veneer of objectivity while hiding the biases and inequalities built into their systems. Both Brayne and Eubanks show that the efficiency promised by algorithmic decision-making often comes at the cost of justice and fairness. Far from eliminating human error, algorithms introduce new forms of error: ones that are harder to detect, more difficult to contest, and disproportionately borne by the most vulnerable members of society.
Both Predict and Surveil and Automating Inequality call for a rethinking of how we use artificial intelligence in public services and law enforcement. Brayne suggests reforms that would make predictive policing more transparent and accountable, such as using big data to police the police rather than the public. Eubanks goes further, arguing that we must dismantle the “digital poorhouse” entirely and return to systems that prioritize human dignity and social justice over algorithmic efficiency.
In sum, Predict and Surveil and Automating Inequality both offer powerful critiques of the use of big data and algorithms in public systems. By exposing the methodological flaws, inherent biases, and ethical failings of these predictive technologies, Brayne and Eubanks make it clear that algorithms, far from solving social problems, are tools that exacerbate inequality, criminalize poverty, and erode civil liberties.
The harsh reality is that algorithms, when deployed without adequate oversight or accountability, become agents of injustice. If we are to create a fairer society, we must not only reform these systems but also critically re-examine the role of technology in governance. The seductive promise of algorithmic objectivity must be resisted; we must prioritize human judgment, ethical decision-making, and social justice above all else.
The Author received an honorable discharge from the U.S. Navy, where he used regional anesthesia and pain management to treat soldiers injured in combat at Walter Reed Hospital. The Author is passionate about medical research and biotechnological innovation in the fields of 3D printing, tissue engineering, and regenerative medicine.
One key facet of the problem is left out of the critique here, and it is the reason government will not acknowledge that this technology is making everything it touches worse for society as a whole, and will continue forward with automation at all costs. This new system is designed to drive the last nails into the accountability coffin by removing responsibility for mistakes and failures, entirely and permanently, from the people who run it. No longer will they be forced to bear the burdens of their decision-making. Our laws make us the most litigious society on earth, and by automating decision-making through a process that official policy considers entirely “evidence-based,” thoroughly impartial, and free of the possibility of “human error,” the masters of mankind secure and guarantee their absolute freedom from the risk of liability.