How Can We Talk About Autonomous Weapons?

Experts convened by the IEEE Standards Association seek your help

This article is part of our Autonomous Weapons Challenges series. The IEEE Standards Association is looking for your feedback on this topic, and invites you to answer these questions.

Lethal autonomous weapons systems can sound terrifying, but autonomy in weapons systems is far more nuanced and complicated than a simple debate between “good or bad” and “ethical or unethical.” In order to address the legal and ethical issues that an autonomous weapons system (AWS) can raise, it’s important to look at the many technical challenges that arise along the full spectrum of autonomy. A group of experts convened by the IEEE Standards Association is working on this, but they need your help.

Weapons systems can be built with a range of autonomous capabilities. They might be self-driving tanks, surveillance drones with AI-enabled image recognition, unmanned underwater vehicles that operate in swarms, loitering munitions with advanced target recognition—the list goes on. Some autonomous capabilities are less controversial, while others trigger intense debate over the legality and ethics of the capability. Some capabilities have existed for decades, while others are still hypothetical and may never be developed.

All of this can make autonomous weapons systems difficult to talk about, and doing so has proven to be incredibly challenging over the years. Answering even the most seemingly straightforward questions, such as whether an AWS is lethal or not, can get surprisingly complicated.

To date, international discussions have largely focused on the legal, ethical, and moral issues that arise with the prospect of lethal AWS, with limited consideration of the technical challenges. At the United Nations, these discussions have taken place within the Convention on Certain Conventional Weapons (CCW). After nearly a decade, though, the U.N. has yet to come up with a new treaty or regulations to cover AWS. In early discussions at the CCW and other international forums, participants often talked past each other: One person might consider a “fully autonomous weapons system” to include capabilities that are only slightly more advanced than today’s drones, while another might use the term as a synonym for the Terminator.

Discussions advanced to the point that in 2019, member states at the CCW agreed on a set of 11 guiding principles regarding lethal AWS. But these principles are nonbinding, and it’s unclear how the technical community can implement them. At the most recent meeting of the CCW in July, delegates repeatedly pushed for more nuanced discussions and understanding of the various technical issues that arise throughout the life cycle of an AWS.

To help bring clarity to these and other discussions, the IEEE Standards Association convened an expert group in 2020 to consider the ethical and technical challenges of translating AWS principles into practice and what that might mean for future development and governance.

Last year, the expert group, which I lead, published its findings in a report entitled “Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.” In the document, we identified over 60 challenges of autonomous weapons systems, organized into 10 categories:

  1. Establishing common language
  2. Enabling effective human control
  3. Determining legal obligations
  4. Ensuring robustness
  5. Testing and evaluating
  6. Assessing risk
  7. Addressing operational constraints
  8. Collecting and curating data
  9. Aligning procurement practices
  10. Addressing nonmilitary use

It’s not surprising that “establishing common language” is the first category. As mentioned, when the debates around AWS first began, the focus was on lethal autonomous weapons systems, and that’s often still where people focus. Yet determining whether or not an AWS is lethal turns out to be harder than one might expect.

Consider a drone that does autonomous surveillance and carries a remote-controlled weapon. It uses artificial intelligence to navigate to and identify targets, while a human makes the final decision about whether or not to launch an attack. Just the fact that the weapon and autonomous capabilities are within the same system suggests this could be considered a lethal AWS.

Additionally, a human may not be capable of monitoring all of the data the drone is collecting in real time in order to identify and verify the target, or the human may over-trust the system (a common problem when humans work with machines). Even if the human makes the decision to launch an attack against the target that the AWS has identified, it’s not clear how much “meaningful control” the human truly has. (“Meaningful human control” is another phrase that has been hotly debated.)
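To make the point concrete, here is a minimal, purely illustrative Python sketch of such a human-in-the-loop confirmation gate. All of the names (Candidate, request_human_confirmation, engagement_gate) are invented for this example and do not describe any real system; the sketch simply shows where the human decision sits in the flow.

```python
# Illustrative sketch only: a minimal "human-in-the-loop" gate with
# hypothetical names. It shows why placing a human confirmation step
# in the loop does not, by itself, guarantee meaningful human control.

from dataclasses import dataclass


@dataclass
class Candidate:
    """A target candidate proposed by the system's recognition model."""
    track_id: str
    classifier_confidence: float  # the model's own score, not ground truth


def request_human_confirmation(candidate: Candidate) -> bool:
    """Placeholder for the operator interface.

    In practice, the operator may see only a summary of the data the
    system used, may be monitoring many tracks at once, and may tend to
    accept the system's recommendation (automation bias). The function
    signature alone says nothing about how informed the decision is.
    """
    answer = input(f"Engage track {candidate.track_id} "
                   f"(model confidence {candidate.classifier_confidence:.2f})? [y/N] ")
    return answer.strip().lower() == "y"


def engagement_gate(candidates: list[Candidate]) -> list[Candidate]:
    """Return only the candidates a human operator has explicitly approved."""
    approved = []
    for candidate in candidates:
        if request_human_confirmation(candidate):
            approved.append(candidate)
    return approved
```

The control flow looks identical whether the operator deliberates carefully over each track or reflexively approves every recommendation. The factors that determine whether control is meaningful, such as what information the operator actually sees, how much time they have, and how the interface frames the system's recommendation, are precisely the things this structure does not capture.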

This problem of definitions isn’t just an issue that comes up when policymakers at the U.N. discuss AWS. AI developers also have different definitions for commonly used concepts, including “bias,” “transparency,” “trust,” “autonomy,” and “artificial intelligence.” In many instances, the ultimate question may not be, Can we establish technical definitions for these terms? but rather, How do we address the fact that there may never be consistent definitions and agreement on these terms? Because, of course, one of the most important questions for all of the AWS challenges is not whether we can technically address them, but whether, even if a technical solution exists, we should build and deploy the system.

Identifying the challenges was just the first stage of the IEEE-SA expert group’s work. We also concluded that there are three critical perspectives from which a new group of experts will consider these challenges in more depth:

  • Assurance and safety, which looks at the technical challenges of ensuring the system behaves the way it’s expected to.
  • Human–machine teaming, which considers how the human and the machine will interact to enable reasonable and realistic human control, responsibility, and accountability.
  • Law, policy, and ethics, which considers the legal, political, and ethical implications of the issues raised throughout the Challenges document.

What Do You Think?

This is where we want your feedback! Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We’ve put together a series of questions in the Challenges document that we hope you’ll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions, while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022.

The independent group of experts who authored the report for the IEEE Standards Association includes Emmanuel Bloch, Ariel Conn, Denise Garcia, Amandeep Gill, Ashley Llorens, Mart Noorma, and Heather Roff.

The Conversation
Tom Craver, 10 Nov 2022

Humans that choose to use a weapon are responsible for the outcome of an attack, period.

If AWS are useful in war, they will NOT be banned. We don't ban truly useful weapons, because we expect the other side will cheat and produce them anyway to gain an advantage - because we would.

Instead, focus on encouraging less-lethal AWS when killing is unnecessary: drug darts for targets that might be civilians, limb wounding for clear combatants, precision force to disable enemy vehicles, etc.

R K Rannow, 08 Nov 2022

All is fair in love and war, and then there is the unpredictability of the behaviour of autonomous platforms. Perhaps the dynamic operational context of war makes it extremely difficult to predict, with any meaningful confidence, the possible outcomes of AWS in real-world deployments, even if the 10 listed items are addressed.

Keith Kumm, 08 Nov 2022

Q: Has the group considered "limits on firing an AWS weapon"? This is the flip side of "criteria for enable and fire." A good system needs limits, particularly in autonomy functions. Limits may be appropriate or inappropriate, weak or strong, depending on standards among other factors. Standards might form a foundation, atop which other factors can be applied. One may hesitate to call it a "judgement" faculty of an AWS. But judgement is a core faculty atop a human chain of command, sometimes even lower down. The top of the chain for AWS in an uncertain environment may require something like "judgement."