Regulating the Use of Autonomous Weapon Systems
Mayuri Mukherjee, Consultant, VIF

Instead of worrying about technical definitions, focus on how humans can retain control over their machines

Terminators running amok; the wanton killing of human beings; the end of civilisation as we know it; these are the sort of images that come to mind when the term ‘killer robots’ is used. But the term is really just a shorthand for autonomous weapon systems (AWS) or, more broadly, weapons that have some sort of artificial intelligence (AI) capabilities. Think of the US’s armed Predator drones, Israel’s Guardium, the unmanned ground vehicle that patrolled the Gaza border, or even Russia’s Poseidon, a nuclear-armed unmanned underwater vehicle that is currently under development. Depending on how you understand the concept of autonomy (even a bow and arrow has some autonomy once the arrow leaves the string), AWS can refer to a wide spectrum of defence systems, some of which have been used on battlefields for decades, such as drones, while others are still a figment of the imagination, such as killer robots.

Given this basic conceptual problem, it is not surprising that states at the UN have had a tough time coming up with a concrete plan to regulate, let alone ban, such systems. Most recently, in November 2019, states failed yet again to agree on whether to begin a formal deliberative process that would eventually produce a legal instrument regulating the development and deployment of autonomous weapons. This was the fifth consecutive year that states party to the UN Convention on Certain Conventional Weapons (CCW) discussed the issue, and they are scheduled to continue doing so through 2020 and 2021.

Notably, at the November meeting, the states endorsed a set of guiding principles for autonomous weapons, agreeing that such weapons must always be subject to human responsibility and operate within the framework of international humanitarian law. But they made little progress towards a binding, treaty-like arrangement. Worse still, there is reason to believe that the international community will remain stuck at this point as long as it continues with the current structure and framework of the debate.

The current discourse on autonomous weapons began in 2012 when the US-based advocacy group Human Rights Watch and the International Human Rights Clinic at Harvard Law School published a report calling for a blanket ban on all fully autonomous weapon systems. The report received a lot of publicity, its message was amplified by the Campaign to Stop Killer Robots, and within two years, the UN began informal discussions under the CCW.

The initial focus was on a blanket ban on autonomous weapons but thankfully, that has been set aside. The technologies underlying these weapons are simply too diffuse and too easily available for a ban to succeed, as Chris Jenks at the SMU Dedman School of Law points out. The focus now is on regulation. However, even though all parties now agree on the basic principles of using autonomous weapons, there is still a long way to go before they can get on the same page.

For one, many of the states leading the development of these technologies and weapon systems, such as the US, Russia and China, and those with large defence-tech industries, such as Israel, either oppose regulation or have expressed concern that regulation may stifle innovation. Moreover, states have still not been able to agree on a single definition of autonomous weapons. Nor is there enough clarity on what the concept of ‘meaningful human control’ over the entire life cycle of an autonomous weapon system should look like, even though it is this concept that lies at the core of the aforementioned guiding principles.

As a result of these myriad roadblocks, some political, some ideological, and some conceptual, the CCW process around the governance of autonomous weapons risks running itself aground. To change course, it needs to go back a few steps and ask the most fundamental question: why do autonomous weapons need to be regulated? The answer is simple: autonomous weapons must be regulated so that their use does not violate the established principles of international humanitarian law (IHL). This, in turn, necessarily requires that humans retain control over their machines. If humans retain control, the main issue under the IHL rubric, accountability for the actions of an autonomous weapon, is solved.

Accountability is the other side of the autonomy coin. As Rebecca Crootof at the University of Richmond explains, the more autonomous the weapon system, the harder it is to pin responsibility for its ‘actions’ on a human being. The situation is further complicated when, as George Washington University’s Laura Dickinson shows, multiple actors ‘use’ the weapon, working together to gather intelligence, decide on a target, and deploy the weapon, resulting in a diffused decision-making process. However, if there is proper human control over the weapon, then in case of failure, the human operator can be held accountable.

This is the point on which experts suggest the CCW should focus. Earlier this month, the International Panel on the Regulation of Autonomous Weapons (iPRAW), an independent group of scientists working with the UN on autonomous weapons, published a report recommending that the principle of human control be operationalised into a more concrete norm. The report listed three steps: “(1) abandon the distracting debates about a technical definition of LAWS, (2) focus on the human element (e.g. human control), and (3) analyse the impact of the operational context on the necessary level of human involvement.” These form an excellent ideational framework for several reasons. The concept is tech-agnostic and, therefore, in a sense, future-proof. And the focus on human control means that the framework deals not with the form of the weapon but its function. Member states at the UN CCW would do well to seriously consider these recommendations.

  1. Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems. (2019). Report of the 2019 session. UN Convention on Certain Conventional Weapons.
  2. Crootof, R. (2016). “War Torts: Accountability for Autonomous Weapons.” University of Pennsylvania Law Review, 164, 1347–1402.
  3. Dickinson, L. (2018). “Drones, Automated Weapons, and Private Military Contractors.” In M. Land & J. Aronson (Eds.), New Technologies for Human Rights Law and Practice (pp. 93–124). Cambridge: Cambridge University Press. doi:10.1017/9781316838952.005
  4. Human Rights Watch and Harvard Law School International Human Rights Clinic (IHRC). (2012). “Losing Humanity: The Case Against Killer Robots.” Human Rights Watch, November 2012.
  5. iPRAW. (2020). A Path towards the Regulation of LAWS.
  6. Jenks, C. (2016). “False Rubicons, Moral Panic, & Conceptual Cul-De-Sacs: Critiquing & Reframing the Call to Ban Lethal Autonomous Weapons.” Pepperdine Law Review, 44(1), 1–70.

(The paper is the author’s individual scholastic articulation. The author certifies that the article/paper is original in content, unpublished and it has not been submitted for publication/web upload elsewhere, and that the facts and figures quoted are duly referenced, as needed, and are believed to be correct.)
