The second international summit on Responsible Artificial Intelligence in the Military Domain (REAIM) took place in Seoul on 9 and 10 September 2024, some 18 months after the inaugural summit, held in the Netherlands in February 2023. This time, Kenya, Singapore and the United Kingdom joined the event’s original founders, the Netherlands and South Korea, as co-hosts. The Summit itself followed a very similar format to the first extravaganza in The Hague, minus the smoke machines and actors during the opening session. Instead, to kick off the proceedings, the Summit’s delegates heard from representatives of the Youth and Peace in the Age of AI side event about why the regulation of military AI is a pressing concern for the international community.
The Summit was a reunion for many delegates who had attended the latest meeting of the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE LAWS) in Geneva only two weeks earlier. REAIM covered much of the same ground – albeit with an ostensibly broader focus on the governance of military AI, rather than just the regulation of autonomous weapon systems. Participation in the 2024 REAIM Summit again included a variety of stakeholders, with over 2,000 participants from governments, international organisations, civil society, academia and industry. The presence of military industry participants was more keenly felt at this Summit: the capability demonstration stands set up directly opposite the registration desk gave the event a slight trade-show flavour.
Australian participation in the Summit was significant. The Australian Government delegation included representatives from the Department of Foreign Affairs and Trade and the Department of Defence. It was led by Vanessa Wood, the Ambassador for Arms Control and Counter-Proliferation, which testifies to the importance of the topic at hand for Australia. Australian universities, notably the Australian National University and the University of Queensland, were also represented with speakers during multiple breakout sessions. Australia also has strong representation on the closely related Global Commission for Responsible AI in the Military Domain (more on which later).
The State of the Debate
While the sheer number of plenary and breakout sessions at REAIM makes it difficult to offer a comprehensive summary, the discussions during this Summit revealed a couple of things.
First of all, opinions continue to diverge on the likelihood and modalities of further regulation of military AI. The optimists cited ‘successes’ in responding to other international crises – such as saving the ozone layer, and agreeing to keep space and Antarctica demilitarised – as potential models for addressing the crisis of unchecked military AI capabilities. Others, however, were less sanguine about the likelihood of comprehensive regulation, suggesting that success lies in gradual change, for example in the conclusion of regional agreements that could scaffold future universal arrangements.
At the same time, many interventions at the Summit demonstrated there has been some progress in finding ways to operationalise the myriad ‘principles’ that are evolving with respect to military AI. The presentations at the Summit revealed a proliferation of ‘toolkits’ and risk assessment tools created by states, civil society and the defence industry, seeking to put into practice the values identified in 2023 as central to the responsible use of military AI.
Regrettably, conceptual and definitional difficulties continue to stall progress in other international discourse on military AI – ranging from questions around which military AI capabilities require enhanced governance, what constitutes a ‘use case’, and what technical language should be used to describe the capabilities. At REAIM, there was a broader focus on AI as an enabling capability in the military domain generally, rather than on autonomous functionality in weapon systems. In this sense, the definitional problems – which this year again flared up at GGE LAWS and highlighted the need for continued dialogue – were less of an obstacle to progressing debate about what tangible steps could be taken to ensure responsible use of military AI.
The focus of government delegates in their interventions at the Summit seemed to be on risk mitigation and confidence building measures (CBMs). Indeed, multiple state representatives suggested that a universalisation of risk assessment processes and mandated risk modelling might underpin a regulatory solution for military AI. Many stakeholders raised CBMs – such as the legal review of AI-enabled capabilities, the creation of hotlines between states (particularly between AI hegemons), ongoing dialogue, and enhanced transparency about military uses of AI – as elements of a governance framework.
The issues of legal and ethical compliance unsurprisingly loomed large in both plenary and breakout sessions across this Summit. They were coupled with the recognition that the heightening of global military and geostrategic tensions since February 2023 creates further challenges for enhancing global governance. This observation turned conversations to considering how to learn from previous successes in responding to similarly challenging international problems. Some spoke of the risks of AI and the opportunities in regulating it. Others focused on the seeming inevitability of the use of this technology and the need to embrace dialogue as a method to enhance transparency and to reduce tensions.
Unlike at the first Summit, the unequal distribution of compute technology garnered considerable attention this year. Several problems were identified. First, regulating military AI globally remains difficult without less developed states accessing and understanding the technology so as to engage in discussions about its regulation. Second, since AI is an enabling capability, pursuing its strict regulation carries a real risk to the prosperity and development of some nations, who might otherwise be able to benefit from its economic and social promise. Third, the widespread use of AI raises concerns about job security and the risk of driving migration. Fourth, the energy intensity of AI is a factor in tackling climate change.
All of these issues were discussed as considerations that must be included in the responsible military AI debate. This reflects an important development from the inaugural Summit and demonstrates that the deeper, longer-term effects of this technology are now being actively considered by the global community.
The Global Commission
Further developments since the inaugural Summit include the creation of the Global Commission on Responsible Military Artificial Intelligence (GC REAIM), made up of a diverse group of commissioners and expert advisors. The healthy number of Australian members of GC REAIM testifies to the depth and breadth of expertise – and, indeed, the diversity of views – that can be found in Australia on the governance of military AI.
During the Summit, it was announced that Kersti Kaljulaid, former President of Estonia, would chair GC REAIM, with Byung-se Yun, former Foreign Minister of South Korea, serving as co-chair, as it works towards producing a report by the end of 2025. It remains to be seen exactly what role GC REAIM will play in the emerging debate on responsible military AI, as it approaches the half-way mark of its initial two-year mandate.
Outcomes
The 2023 REAIM Summit resulted in a Call to Action, which was endorsed by 57 countries and territories. A US-led Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, launched at the same time, has gained a similar number of endorsing States.
The 2024 REAIM Summit culminated in a Blueprint for Action, which purported to address issues identified in 2023 and to list concrete steps for States to take in response. Of the states participating, 61 formally supported the Blueprint, while some 30 reportedly declined to sign on. China – notable as one of the globe’s key developers of military and general AI capabilities – did not endorse the Blueprint despite attending the Summit, reportedly because the Blueprint deems it to be ‘especially crucial to maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment’ (para 5).
The Summit certainly succeeded in raising the profile of the issues discussed by involving high-level government representatives, including several ministers. It also demonstrated modest progress in the international discourse about what responsible AI is, and how it can be achieved, albeit with some uncertainty and a definite lack of universality in its terms. The Summit resulted in meaningful dialogue across multiple stakeholders about what responsible use of military AI actually entails, and what action States can readily take in the near future to give effect to this conceptualisation.
The Way Forward
Despite the Summit’s prominence in the multilateral debate about military AI, and the notable increase in State, industry and technologist participation at this second iteration, there was no announcement of a follow-on Summit. Speculation in the corridors was rife, however, about the next Summit taking place in either Nairobi or London – as Kenya and the United Kingdom had joined REAIM as co-hosts.
The Summit enabled reflection on the progress made in the pursuit of responsible military AI and allowed a multiplicity of actors to undertake crucial dialogue about the needs of future governance efforts in this important area. Future Summits are needed and, as South Korea’s Minister of Foreign Affairs, Cho Tae-yul, noted at the conclusion of the Summit, there is still much work to be done in respect of the Blueprint for Action to ‘advanc[e] efforts to translate the principles suggested in the document into concrete actions and developing measures for their implementation’.
Lauren Sanders is an Adjunct Associate Professor in the School of Law, The University of Queensland.
Rain Liivoja is a Professor in the School of Law, The University of Queensland.
Zena Assaad is a Senior Lecturer in the School of Engineering, Australian National University.
This article is republished from ANZSIL Perspective. Read the original article.