I. Introduction
Twenty years ago, the phrase “private military and security company” (PMSC) summoned a particular image: a sand-choked boulevard in Baghdad, the midday sun sharp against concrete blast walls, the hum of idling armored SUVs thick in the air, and a convoy of men in tactical black. These men were not soldiers. They wore no flags and answered to no brigadier. Their faces were obscured by wraparound sunglasses, weapons slung low, radios crackling. They moved with authority—yet not the kind conferred by oath or insignia. Rather, one conferred by contract.
The name on everyone’s lips—the most notorious emblem of that world—was Blackwater. Founded by former Navy SEAL Erik Prince and flush with government contracts, the company quickly became the go-to provider for a wide range of outsourced military functions: ferrying diplomats through insurgent strongholds; securing the perimeters of embassies with ex-special forces operatives; training Iraqi police units in small arms tactics and counterinsurgency.
Then came Nisour Square. In September 2007, Blackwater guards opened fire in what witnesses described as an unprovoked assault, killing seventeen Iraqi civilians, including women and children, and injuring many others. The incident sent diplomatic shockwaves. Civil lawsuits and criminal prosecutions followed. Congressional hearings were convened. Years later, convictions were handed down, then partially erased by presidential pardons from President Trump at the end of his first term. But something deeper had already been exposed: the post-9/11 era had quietly birthed a privatized architecture of violence, operating in the penumbra of law. As Harvard Professor Martha Minow described, it marked “a new degree of privatization” and a “dangerous challenge to the aspirations of order in the world.”
That challenge is even more acute today, as the PMSC sector has undergone a profound technological metamorphosis in the past twenty years. Firms once known for boots-on-the-ground operations now offer end-to-end security ecosystems—integrating AI-powered surveillance, facial recognition, behavioral analytics, predictive policing algorithms, and biometric identification tools into their service portfolios. One need only look to Anduril Industries—a U.S.-based defense technology firm founded by Silicon Valley engineers and former military operatives—to grasp the scale of this shift. The company develops autonomous surveillance towers, sensor-laden drones, and autonomous underwater systems, and is now even helping integrate augmented-reality headsets for frontline troops. At the heart of its operations lies the Lattice platform—a powerful dual-use, AI-enabled operating system that fuses sensor inputs across domains, enabling automated threat detection, intelligence analysis, and response. Anduril has also expanded into cybersecurity, partnering with Riverside Research—under a DARPA initiative—to harden critical systems against digital threats, further reflecting the sector’s growing convergence of kinetic and cyber defense. Together, these shifts mark categorically new challenges for international human rights law (IHRL) and international humanitarian law (IHL) protections.
II. The International Code of Conduct as a Model of Institutional Imagination
The International Code of Conduct for Private Security Service Providers (the Code) emerged in November 2010 as a landmark attempt to impose normative order on a chaotic sector. Its seventy provisions addressed the conduct of personnel, the use of force and firearms, detention practices, incident reporting, and internal grievance procedures—grounding many of its requirements in the language of both human rights law and humanitarian law. But the Code was more than a checklist of operational safeguards. It marked an attempt to reassert law’s relevance in a space where contractual relationships had long displaced public obligations. Cedric Ryngaert once described the Code as an experiment in “the re-entry of the state” into a domain of “stateless law,” one achieved by way of public procurement policies that aim to reward human rights-respecting business initiatives.
Yet the very features that made the Code possible—its voluntarism, its multi-stakeholder architecture, its reliance on reputational enforcement—also circumscribed its authority. Take, for example, the International Code of Conduct Association (ICoCA), the oversight body established to give effect to the Code’s commitments. Tasked with certifying companies, issuing guidance, monitoring compliance, and receiving grievances, ICoCA nonetheless operates under a constrained mandate: as a soft-law mechanism, it lacks investigatory subpoena powers, offers no legally binding dispute resolution mechanism, and remains dependent on states to integrate its standards into domestic regulatory frameworks. Its core sanction is expulsion, the effectiveness of which likewise depends on states incorporating ICoCA membership into their national procurement tenders. Given these limitations, critics have rightly questioned whether the market structure and incentives exist for ICoCA—on its own—to produce more than symbolic accountability.
Yet to focus on the Code’s and ICoCA’s limitations is to overlook their deeper significance. Indeed, as I have written elsewhere, “[t]he lessons learned from regulating PMSCs through international standards, oversight mechanisms, and multistakeholder engagement can be adapted and applied” to address other evolving commercial and technological security concerns. In a geopolitical moment increasingly defined by the retrenchment of rights-based multilateralism, ICoCA remains one of the few surviving examples of pluralistic norm entrepreneurship—a testament to what can be achieved when states, corporations, and civil society aspire to act in concert.
In other words, for all their varied limitations, the Code and ICoCA endure as a model of institutional imagination. Or do they? As Vincent Bernard, a former Senior Policy Advisor at ICoCA, wrote, now is the time “to revisit, interpret, and perhaps adapt the existing instruments of regulation and governance of private security.” This blog post focuses on one such aspect: whether the Code endures in the face of technological evolution. Indeed, ICoCA’s Strategic Plan for 2024–2030 calls, in Strategic Goal 4, for integrating into the Code human rights standards relating to new technologies. This short post identifies three crucial steps that must be taken to achieve this ambitious goal.
III. Step 1: Reconceptualizing Security Services
The current definition of “security services” under the Code is increasingly misaligned with the technological realities of the sector it purports to regulate. While the Code commendably encompasses “operational and logistical support for armed or security forces” including “intelligence, surveillance, and reconnaissance activities,” it remains largely tethered to a kinetic paradigm of risk—one centered on the physical presence of armed personnel. This framing no longer captures the breadth of commercial actors whose products and services shape security outcomes in the digital age. The datafication of armed conflict and humanitarian response has resulted in infrastructural providers—cloud platforms, satellite operators, data centers, and encryption firms—being invited to construct the technological scaffolding necessary for modern military surveillance and targeting regimes. Cybersecurity firms, too, now play both offensive and defensive roles in intelligence-gathering and information operations, sometimes with direct implications for the conduct of hostilities. The result is an expanding perimeter of corporate and commercial actors whose participation in armed conflict is indirect, but no less consequential.
Consider Anduril. Even if some of its business lines may be considered “operational and logistical support”—thus falling within the Code’s existing ambit—a substantial grey zone remains, particularly around dual-use technologies, databases, and services that transition seamlessly between commercial, law enforcement, and military functions. Other infrastructural providers are even less likely to be considered as offering direct “operational and logistical support.” The Code must therefore move beyond its legacy understanding of “security services” and embrace a functional definition rooted in effects rather than form (the definitional model offered by the U.S. Department of State’s 2020 Guidance on the implementation of the UN Guiding Principles for transactions relating to products or services with surveillance capabilities offers a potential starting point).
IV. Step 2: Introducing Obligations at the Design Phase
Article 25 of the Code requires Member and Affiliate Companies “to take reasonable steps to ensure that the goods and services they provide are not used to violate” IHRL or IHL, and that “such goods and services are not derived from such violations.” (emphasis added). But this use-based framing presumes a linear chain of causation between a deployed technology and a subsequent legal breach. That presumption fails to account for the layered and cumulative nature of digital systems, where critical decisions are made not at the moment of use, but at the point of design. As I have argued elsewhere, surveillance, cyber, and AI tools “inevitably involve thousands of design choices, both minor and significant, that hardcode policy rationales, legal interpretations, and value judgments into their hardware, software, and user interfaces.” These embedded decisions shape how a tool will operate under battlefield conditions—and, more troublingly, whether it can be audited or constrained when it veers off course.
If, as Rebecca Crootof and BJ Ard have suggested, technology “regulates through its ‘architecture’,” then the Code must shift upstream. It should impose obligations not merely on how technologies are used, but on how they are conceived, developed, and trained throughout the lifecycle of a product or service. The existing text already offers a subtle foothold for such an expansion. By prohibiting goods and services “derived from” violations, Article 25 of the Code leaves open the possibility of regulating tools whose algorithms evolve through data gathered in unlawful ways. In other words, the Code might already justify algorithmic and model disgorgement as a remedy. In a world where machine learning systems continuously refine themselves mid-conflict, waiting until the moment of use may be too late. The Code must be reinterpreted to account for the reality that harms can be hardwired before a product ever ships.
V. Step 3: Reinvigorating Digital Rights Protection in Times of Armed Conflict
Article 25 of the Code presupposes a level of doctrinal clarity as to what constitutes a violation of IHRL and IHL. Yet in the context of digital operations—automated decision-making, data extraction, algorithmic targeting, biometric surveillance, and information warfare—such clarity remains elusive. As I have long argued, the treaties of IHL are mostly silent on issues of informational privacy, data protection, and cybersecurity. Key IHL concepts such as “attacks” or “military operations” struggle to accommodate the sprawling architecture of digitized conflict.
The First Additional Protocol to the Geneva Conventions requires, under Article 36, that legal reviews be carried out in the study, development, and acquisition of new weapons, means, or methods of warfare. Article 56 of the Code similarly demands authorizations prior to the possession and use of any new “weapons and ammunition.” Yet, as others have written, Article 36 reviews struggle to keep pace with dual-use software and autonomous decision-support systems. In particular, it remains unclear whether such tools are even weapons, means, or methods subject to review, and, assuming they are, how precisely they should be reviewed for digital rights protection.
IHRL, meanwhile, is equally limited. Many of its core protections—against arbitrary interference with privacy, for example—are subject to national security limitations or emergency derogations. Of note, Article 23 of the Code forbids invoking such exceptions to justify violations of the UN Charter or to commit domestic or international crimes. But the Article stops short of introducing a broader prohibition. In other words, national security exceptionalism continues to serve as a legitimate justification for corporate activity that ultimately harms digital rights. In a landscape where data has become both a target and a weapon, we urgently need to reconceptualize what digital rights protection even entails and what the outer limits of existing IHRL are—not only for states and their militaries, but also for the private actors they enlist. The Code could equally benefit from such doctrinal elucidation.
VI. Conclusion
The International Code of Conduct begins from the premise that the privatization of security is a reality to be managed, not resisted. It assumes that military and security outsourcing is inevitable and seeks to constrain harm through industry standards, procurement policies, and reputation-oriented oversight mechanisms.
But that premise itself deserves interrogation. Not all the functions of the state are delegable—nor should they be. The March 2025 Revised Fourth Draft Instrument of the UN open-ended intergovernmental working group on PMSCs attempts to draw that line. It identifies as “prohibited activities” the contracting out of core sovereign powers, including the engagement in combat operations, detention, and interrogation. Yet, earlier articulations of these prohibited activities—then called “inherently state functions”—further encompassed intelligence collection, the wielding of police powers, and the transfer of military knowledge as non-delegable acts.
Today, those very activities are increasingly mediated by code—designed, maintained, and sometimes even deployed by PMSCs whose incentives and accountabilities differ radically from those of the state. As commercial actors move deeper into the heart of military decision-making, battlefield awareness, and policing and security, the time has come to ask not only how such activities are to be regulated within a possible Code of Conduct, but whether they should be outsourced at all.
This contribution is part of a forthcoming Symposium on The Business of Security: New Frontiers and Old Challenges in PMSC Regulation.
The views and opinions presented in this article belong solely to the author and do not necessarily represent the stance of the International Code of Conduct Association (ICoCA).