Statement on Box III, Paragraph 6

2025 Group of Governmental Experts on Lethal Autonomous Weapon Systems
Second Session (Geneva, 1–5 September 2025)

Asia-Pacific Institute for Law and Security

Thank you, Chair!

We have listened with interest to the many detailed suggestions concerning Box III, paragraph 6. These already give you a great deal of material to work with. But with your indulgence, we would add a few comments.

Subparagraph A contains a number of concepts that many delegations have recognised as important. Some of these concepts have also been reflected in national and international documents relating to the use of military artificial intelligence. A number of delegations have, however, pointed out that the terms predictable, reliable, traceable and explainable do not form part of the existing body of IHL, and do not necessarily have widely accepted meanings.

We would like to point out that the substance of many of these concepts has already been incorporated into the rolling text.

With respect to predictability, the next box, Box IV, refers in paragraphs 3 and 4 to the anticipated effects of LAWS, implying the ability to anticipate such effects. Also, the language currently in Box III paragraph 4, which we have suggested including as a subparagraph here in paragraph 6, refers to the ability to anticipate and control the effects of LAWS.

As for reliability, Box IV paragraph 5 refers to testing and evaluation that would enable a human user to have a reliable expectation of the performance of a LAWS. Box IV paragraphs 6 and 7 address biases, which are also connected to the reliability of a system.

When it comes to explainability, Box IV paragraph 3 already sets the expectation that the capabilities, limitations and effects of LAWS are understood.

Chair,

Traceability is a more difficult concept as it has multiple meanings. It sometimes refers to the need for transparency and auditability in the design methodologies, data sources and documentation. But sometimes it refers to the need to link the effects of a system to a human user. We suggest that these are issues that should be addressed, as appropriate, in Boxes IV and V.

We stand ready to assist the Chair in finding language that would more comprehensively address the concepts of predictability, reliability, traceability and explainability across the text. But we would caution against incorporating these terms or labels without any further explanation or definition in subparagraph A. 

Chair,

In subparagraphs C and D, we do not think it is obvious what “the scale of operations” means. The text does not make it clear how “the scale of operations” is distinct from the range of targets, the duration, and the geographical scope of the operations.

If the “scale of operations” refers, for example, to the number of targets, or the number of weapons forming part of the weapon systems, this should be made clear in the text.

Also, we are concerned that the idea that mission parameters “cannot be modified by the system” tends to anthropomorphise the system. That problem can easily be avoided by simply saying that the mission parameters “cannot change” without context-appropriate human judgement and control.

I thank you, Chair!