Ready or Not, AI Government is Already Here
- by CounterPunch
- May 15, 2026
Governments around the world are increasingly adopting autonomous systems to improve efficiency and competitiveness.
Albania’s Diella, for example, is a virtual “minister in charge of tackling corruption” in Albanian Prime Minister Edi Rama’s new cabinet, according to Al Jazeera. Her inaugural address to parliament in 2025 drew international attention. Running on OpenAI models and Microsoft’s cloud infrastructure, she is being presented as a sign of “progress.” While domestic support is mixed, she has given AI governance a public face that encourages normalization. “Right now, Diella is just a chatbot, not an autonomous system. Artificial intelligence could support government decisions if properly trained and monitored, but the real issue is transparency: We don’t know what data it relies on or who is responsible for maintaining it,” Besmir Semanaj, who has 17 years of experience in information technology, told Deutsche Welle.
Since the 1990s, law enforcement agencies across the U.S. and around the world have meanwhile expanded their use of discriminative and predictive AI. By monitoring personal data such as travel, finances, and communications, these systems generate individual and regional risk scores that are used to direct police resources. In 2025, the British government admitted to developing a “homicide prediction project,” using data to flag people considered capable of murder, while companies like Palantir and Babel Street sell systems with similar capacities.
Increasing automation is expanding practical autonomy among AI systems. Police robots, from Singapore’s patrol bots to Miami’s autonomous security vehicles, are equipped with facial and vehicle recognition technology and can monitor public areas and alert police in real time.
Automated AI is also prominent in the legal system, directly impacting human liberty. In the U.S., bail and sentencing decisions rely in part on algorithmic risk tools, like Arnold Ventures’ Public Safety Assessment, which uses nine objective factors to predict whether defendants are likely to miss court dates or commit new crimes. AI tools such as COMPAS, PRIME, and HARMLESS perform similar functions.
The Michigan Joint Task Force on Jail and Pretrial Incarceration’s review of statewide arrest and court data, along with other documents, however, raised concerns “about the accuracy of Arnold Ventures’ assertion and demonstrates the potential harms of using past criminal history as a risk assessment input.”
AI judicial reasoning is also used in divorce settlements. Australia’s Split Up software, developed in the 1990s, later inspired tools like Amica, a government-backed platform that uses financial inputs and case precedents to suggest a split of assets.
Brazil’s Victor Program helps the Supreme Federal Court rapidly classify cases. It analyzes “compliance with the constitutional requirements of admissibility, and [accelerates] analysis of cases that reach the Supreme Court by using document analysis and natural-language processing tools,” according to the Oxford Institute of Technology and Justice. China goes further, with its “smart courts” integrating AI extensively into document drafting, evidence sorting, and case review. Automated analyses of case files are given to judges alongside similar past rulings and recommended outcomes to standardize decisions, reducing the role of human discretion. Meanwhile, countries such as Canada and the UK have implemented rules allowing AI in judicial administration, but not formal judicial decision-making.
Automation in government is often easier to deploy in cities and smaller states, and Estonia stands out as one of the most automated countries in the world. Estonia has also begun extending automation into the judiciary, including AI-assisted judges for small claims disputes. The e-Estonia platform delivers state benefits, such as parental support, often without citizens having to apply for them. As Estonian Prime Minister Kristen Michal described it, these AI systems “are predictive, personalized, and proactive.”
Understanding the Risks
AI-driven governance is closely tied to several initiatives, such as smart cities, 15-minute cities, and various forms of social credit systems, in which public infrastructure, services, surveillance, and administration are integrated through automated management. In 2025, Palantir CEO Alex Karp and Nicholas W. Zamiska, the company’s head of corporate affairs and legal counsel to the office of the CEO, endorsed closer integration between Silicon Valley and the state in their book, The Technological Republic.
While the administrative state may continue shrinking its workforce, the automated and potentially autonomous interface replacing it will make the government structure far larger and more intrusive. Handing off public authority to private firms providing the underlying technology, alongside decisions being made by opaque algorithmic processes instead of identifiable officials, has also made populations uneasy. A 2025 Cornell Brooks Public Policy article reveals mixed support in the U.S. for the use of AI in government overall, and lower acceptance when used in high-stakes decisions.
The same tools being developed to manage society can also be turned against it by other actors. In 2025, Anthropic stated that a likely Chinese state-sponsored actor used its Claude agentic AI to attempt infiltration into 30 targets worldwide, including tech companies, government agencies, chemical manufacturing companies, and financial institutions, succeeding in several cases. The company described it as the “first documented case of a large-scale cyberattack executed without substantial human intervention.”
Administrative failures caused by automation have also created serious problems for years. In the Netherlands, a self-learning system used by the Dutch Tax and Customs Administration wrongfully penalized thousands of families, many from marginalized communities, driving some into financial ruin and even loss of child custody.
In 2016, Arkansas automated Medicaid care assessments through a third-party contractor, abruptly cutting support for vulnerable recipients and triggering federal court challenges. The Department of Homeland Security has also repeatedly misidentified individuals through automated screening systems, preventing some from traveling. In Colorado in 2020, an automatic license plate reader falsely flagged a car as stolen, leading police to hold an innocent mother and her children at gunpoint.
Whatever rules are built into automated systems can also standardize decisions in ways that strip context. Research from a Technical University of Munich project on algorithmic governance notes that “heuristic judgments,” or “rules of thumb,” reduce complex decisions to simpler standard calculations. As reliance on “algorithmic truth” grows, human judgment and deeper reasoning risk being sidelined by streamlined decisions that merely appear fairer.
Automation similarly expands the potential for more powerful censorship models and political manipulation. Embracing automated and autonomous governance also means surrendering part of the human role in self-government. Collective governance, grounded in public debate and access to accountable officials, will give way to structures that are harder to question or fully understand.
Regulation for New Governance