AI Is Empire’s Attempt to Automate Obedience

In a world where capital demands obedience and empire requires total visibility, artificial intelligence has become the most sophisticated weapon in the arsenal of the state. Sold to the public as innovation, AI is the scaffolding of a new digital authoritarian age, and its only price is all of your data: it tracks your face and your fingerprints, flags your words, and supercharges the surveillance state not with jackboots but with code. AI technologies have enabled a scale and precision of surveillance that was previously unimaginable. From facial recognition software to predictive algorithms that police content, AI is an obedient enforcer and the invisible backbone of modern state power in the United States, quietly transforming the machinery of social control. AI doesn’t just monitor our behavior; it shapes it. And the repression it produces isn’t an exception, but something coded into the design.

In the US, predictive policing algorithms have played a central role in law enforcement, border control, and counterterrorism, notably in cities like Los Angeles and Chicago, where historical arrest data is analyzed to forecast where crime will occur. But because this data reflects decades of racist policing practices, these algorithms end up reinforcing existing racial disparities with little to no oversight. A 2016 ProPublica study of COMPAS, a popular risk assessment tool used in criminal courts, showed it falsely flagged Black defendants as future criminals at nearly twice the rate of white defendants.
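To make the feedback loop concrete, consider the following sketch: a toy simulation with made-up numbers, not any vendor’s actual system. It shows how allocating patrols in proportion to past arrest counts freezes a historical disparity in place even when the underlying crime rates are identical:

```python
# Toy simulation (hypothetical numbers) of the predictive-policing feedback
# loop: patrols follow past arrests, more patrols produce more arrests, and
# the next round of "prediction" treats those arrests as more crime.
import random

random.seed(0)

TRUE_CRIME_RATE = [0.05, 0.05]   # two districts with identical underlying crime
arrests = [500, 250]             # district 0 starts with twice the arrest history
TOTAL_PATROLS = 200

for year in range(10):
    # "Predictive" allocation: patrols proportional to the arrest record.
    total = sum(arrests)
    patrols = [round(TOTAL_PATROLS * a / total) for a in arrests]

    # Arrests scale with patrol presence, not with the (equal) crime rates.
    for d in range(2):
        observed = sum(
            random.random() < TRUE_CRIME_RATE[d]
            for _ in range(patrols[d] * 50)   # each patrol observes ~50 encounters
        )
        arrests[d] += observed

print("final arrest totals:", arrests)
print("district 0’s share of arrests: "
      f"{arrests[0] / sum(arrests):.0%} (true crime rates were identical)")
```

The disparity never corrects, because the model’s only evidence is the record that its own deployments keep producing.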

Along with AI-driven predictive policing, facial recognition technology is deployed with little regulation. The FBI has access to over 400 million facial images, including driver’s licenses, passport photos, and mugshots, drawn from both government and commercial databases. According to a 2019 Georgetown Law report, this FBI dragnet disproportionately affects communities of color and is often inaccurate, especially when identifying Black and brown faces. Despite growing concern, these technologies continue to expand with bipartisan support. During the 2020 George Floyd protests, federal agencies used facial recognition tools and AI-enhanced social media monitoring to track and arrest protesters. Data-scraping companies like Dataminr, which use machine learning to monitor online activity, have provided real-time feeds to police departments surveilling Gaza protests, targeting public speech in an attempt not merely to identify activists but to automate obedience.
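The scale itself is part of the problem. A quick back-of-the-envelope calculation, using illustrative error rates rather than any published FBI figure, shows why searching hundreds of millions of images guarantees false matches:

```python
# Back-of-the-envelope sketch of dragnet-scale face search. The rates below
# are illustrative assumptions, not the FBI's specifications.
GALLERY_SIZE = 400_000_000      # images searchable, per the report cited above
FALSE_MATCH_RATE = 0.001        # assumed per-comparison false-match probability
TRUE_MATCH_RECALL = 0.90        # assumed chance the real person is found, if enrolled

expected_false_hits = GALLERY_SIZE * FALSE_MATCH_RATE
print(f"expected false matches per search: {expected_false_hits:,.0f}")
print(f"true matches per search: at most {TRUE_MATCH_RECALL}")
# expected false matches per search: 400,000
# true matches per search: at most 0.9
```

Real systems return only a ranked shortlist of candidates, but the arithmetic is the same: the one true match, if it exists at all, must be picked out from an enormous crowd of statistical look-alikes, and the analyst doing the picking brings the same biases the tool was supposed to remove.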

The public squares once celebrated by so-called Western democracies, spaces where dissent could flourish and communities could form outside the control of the state or market, are being systematically dismantled by AI technologies. Automated systems now monitor protests, public transit, and city streets. In London, real-time facial recognition systems have been trialed to scan passersby without their consent, in direct collaboration with American tech firms. In the US, Amazon’s Ring doorbells and police partnerships have created a crowdsourced surveillance network that turns neighborhoods into hyper-monitored zones. According to the Electronic Frontier Foundation, over 2,000 police departments had partnered with Ring as of 2022, giving them warrantless access to user footage. Despite claims of “neutral data,” the use of AI by law enforcement has deepened existing systems of punitive control. The ACLU and MIT have revealed that these tools can have error rates as high as 35 percent, and these well-documented inaccuracies feed a carceral system in which defendants find it nearly impossible to challenge algorithmic evidence in court.

Despite how often AI is marketed as “neutral,” the data shows these technologies are direct reflections of the power structures that design and deploy them. The technocratic class promotes AI as not only unbiased but inevitable, yet this framing is a political decision, not a natural evolution. Take, for example, social media moderation run by AI on platforms like Instagram, Facebook, and TikTok: automated content filters flag, suppress, and remove posts that challenge state violence or highlight marginalized struggles, all under the vague banner of “community standards.” Those who took part in the widespread student protests for Palestine saw their content disappear or be throttled until it was algorithmically buried, because automated moderation systems could not distinguish between incitement and resistance. Compounding the problem, these platforms rarely disclose how their algorithms work, producing a chilling effect in which users begin to censor themselves for fear of being made invisible.
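The failure mode is easy to demonstrate. The toy filter below is a deliberately crude keyword sketch; production systems use machine-learned classifiers rather than word lists, but they inherit the same blind spot from their training labels. It removes a call to violence, a piece of frontline reporting, and a plea to stop the violence alike, because the trigger terms appear in all three:

```python
# Crude keyword-based auto-moderation sketch (illustrative terms only).
# The point: the words that mark incitement also mark documentation of
# violence and counter-speech against it, so all three get removed.
TRIGGER_TERMS = {"violence", "attack", "martyr"}

def auto_moderate(post: str) -> str:
    words = set(post.lower().replace(".", "").replace(",", "").split())
    return "REMOVED" if words & TRIGGER_TERMS else "VISIBLE"

posts = [
    "We call for an attack on the march tomorrow.",              # incitement
    "Soldiers attack unarmed protesters. Document everything.",  # reporting
    "Stop the violence against students now.",                   # counter-speech
]
for post in posts:
    print(auto_moderate(post), "-", post)
# REMOVED - We call for an attack on the march tomorrow.
# REMOVED - Soldiers attack unarmed protesters. Document everything.
# REMOVED - Stop the violence against students now.
```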

Resisting AI will require fighting for radical transparency, public ownership of technology, and the abolition of surveillance capitalism, which would demand, at the very least, the regulation of tech monopolies. It also means the reclamation of our social world. Platforms designed to commodify every aspect of our daily lives should be replaced with systems that foster genuine dialogue, empathy, and solidarity. The goal cannot be to make AI “more humane,” but to drastically reduce our dependence on such technologies and reorient society toward human-centered forms of connection. The failures and troubling aspects of AI don’t signify a crisis of technology, but of political imagination. Reclaiming the future means refusing to delegate our collective responsibilities to machines designed to serve power.
