First 2025 Session of the Group of Governmental Experts on Lethal Autonomous Weapon Systems
Thank you, Chair!
First of all, many thanks to you, Chair, to your team, and to the Friends of the Chair for all the work on this revised text in Box III.
Paragraph 1 is a concise reflection of IHL as it relates to inherently unlawful weapons. Addressing weapons that are “otherwise incapable of being used in compliance with IHL” here is appropriate for completeness. This change also avoids the circularity of the earlier paragraph 1, which has now been deleted.
Paragraph 2 is workable as it currently stands. However, as Switzerland and the ICRC pointed out a few moments ago, there are other rules and principles of IHL that are relevant to the use of LAWS but that are not captured here. We therefore support the proposal of Switzerland in this regard.
Paragraph 3 is unproblematic from our perspective. The text mirrors—wisely, we think—existing law, notably the phrasing of Articles 51(2) and 52(1) of Additional Protocol I.
Chair,
Paragraph 4, in our view, risks conflating an existing legal obligation with the conditions or measures necessary to comply with that obligation. IHL prohibits the use of weapons the effects of which cannot be limited as required by law, a prohibition reflected in Article 51(4)(c) of Additional Protocol I. Admittedly, compliance with this rule presupposes the ability to anticipate and control the effects of the weapon, but that ability is not itself the requirement imposed by the rule in question.
We suggest deleting this paragraph and reflecting its substance elsewhere. Specifically, a sentence could be added at the end of paragraph 1, which already reflects the underlying prohibition of indiscriminate weapons. This additional sentence could read: “This includes weapons the effects of which cannot be limited as required by IHL.” In addition, a new subparagraph could be added to paragraph 6, which deals with practical compliance measures. This subparagraph could read: “ensure that they can sufficiently anticipate and control the effects of LAWS”.
Chair,
Paragraph 5 presents some challenges. For one, it introduces an entirely new prohibition based on the lack of “context-appropriate human control and judgement”. We share the concerns already expressed by other delegations about whether this notion is clear enough, and sufficiently widely understood, to serve as the standard for a prohibition. But this is fundamentally a policy question for representatives of High Contracting Parties to address.
In any event, compliance with the standard of “context-appropriate human control and judgement” appears to be highly context-dependent. This would make it challenging to assess, during a legal review, whether a new LAWS would fall short of this standard. Thus, if this paragraph is retained, it might be more productive to reformulate it as a positive obligation to ensure context-appropriate human control and judgement to the extent required for compliance with IHL.
Chair,
Turning now to paragraph 6, which is quite complex, we will limit ourselves to a single suggestion.
Subparagraph A lists a number of principles on the responsible use of artificial intelligence that have been reflected in various international and national policy instruments. Reflecting them as elements of a possible instrument therefore seems appropriate. However, it would be more helpful and transparent not to rely on labels, which can be subject to divergent interpretations, but to formulate these principles as separate subparagraphs.
For example, instead of stating that LAWS must be, inter alia, reliable, a separate subparagraph could be inserted along the following lines:
“Ensure that LAWS have explicit, well-defined uses and that the safety, security, and effectiveness of LAWS are subject to testing and assurance within those defined uses across their entire life-cycles.”
The same could be done with “predictable”, “traceable”, and “explainable”. We would be happy to propose detailed language if that would assist the Chair.
By unpacking these notions, the Group could have a more detailed discussion about their substance. The Group could then also exchange views on whether they are suitable as elements of an instrument and where precisely they should be placed structurally. For example, traceability and explainability might be more relevant to Box V, which deals with accountability and responsibility.
In paragraph 7, the phrase “cannot be modified by the system” may be read as attributing agency to the system itself. Perhaps it would be better to say “do not change”. The paragraph would then read:
“Ensure that LAWS’ mission parameters with regard to target selection and engagement do not change without context-appropriate human control and judgement.”
I thank you, Chair!