Mandatory guardrails for AI in high-risk settings

In September 2024, the Australian Government’s Department of Industry, Science and Resources released for public consultation Introducing Mandatory Guardrails for AI in High-risk Settings: Proposals Paper. APILS made the following submission, which was drafted by Rain Liivoja, Natalia Jevglevskaja and Lauren Sanders.

Thank you for the opportunity to make a submission in relation to the proposals paper on introducing mandatory guardrails for AI in high-risk settings. As a not-for-profit specialising in legal issues relating to global and regional peace and security, we do not seek to address all of the issues raised in the proposals paper, but rather limit ourselves to three points.

Exempt uses of AI—Defence

Under question 1 for this consultation, the Department asks for an identification of ‘categories of uses that should be treated separately, such as uses for defence or national security purposes’. 

While there are certain uses of AI in Defence that are best addressed through separate regulatory mechanisms, a complete exemption of Defence from the mandatory guardrails is unnecessary and inappropriate.

There is a lively international discussion regarding the governance of military uses of AI, as exemplified by the recent Responsible AI in the Military Domain (REAIM) Summit, which attracted some 2,000 participants and culminated in a Blueprint for Action endorsed by 61 governments.1 In parallel, a difficult discussion about autonomous functionality in weapon systems (which may be AI-facilitated) has been under way for the past 10 years.

These discussions are complex. Most participants acknowledge that not all civilian guidelines concerning the use of AI are directly transferable to the military context. This may partly explain the carveout of military applications of AI from various supranational instruments, such as the Council of Europe Framework Convention on AI2 and the European Union AI Act.3

In particular, the use of AI in combat operations in the context of an armed conflict is governed by rules and principles of international law, especially international humanitarian law. This body of law involves a careful balancing of humanitarian and military considerations, and any guidelines on the use of AI would need to reflect that equilibrium. Thus, it would be imprudent to regulate this area domestically by means of principles designed for civilian applications of AI.

That having been said, there are circumstances where Defence may seek to use AI but where it should not be treated any differently from other government entities. For example, the proposals paper identifies as high-risk—thus subject to the proposed mandatory guardrails—the use of AI to identify individuals, manage access to public services, determine educational outcomes, or make determinations about individuals in an administrative or judicial setting. 

These are all conceivable uses of AI for Defence. For instance, Defence may wish to use AI-enabled facial recognition for the purposes of access to Defence sites. It may also utilise AI to assist in choosing individuals for enlistment, commissioning or employment, or to assist in making determinations in administrative or military discipline proceedings about an individual.

There is no principled reason for exempting Defence from mandatory guardrails with respect to this type of AI use. To the contrary, Defence members are—subject to certain limitations4—beneficiaries of the same human rights and fundamental freedoms as other Australians, which are the rights and freedoms that the proposed guardrails seek, among other things, to protect.

Further, in the context of potential Defence responses to domestic violence, it is conceivable that ADF elements would utilise AI-enabled capabilities to achieve the intended effect of the call out order.5 In these circumstances, as the ADF elements would be operating under a peacetime legal framework, domestic law would dictate how the ADF can operate. The Defence Act provides a broad authority to undertake actions that would be unlawful in other contexts. By convention, however, the ADF would operate by seeking an effect in support of the civilian authorities, rather than dictating how the threat should be addressed, with Defence retaining command of personnel and discretion as to the capabilities used to respond to the threat specified in the call out order.

While there may be valid justification in an emergency situation for ADF to use high-risk AI capabilities that are unavailable to law enforcement agencies, two specific issues would need to be considered:

  1. the suitability of authorising the use of particular AI capabilities, having regard to limitations placed upon law enforcement agencies for similar use;
  2. the manner in which data collected by the AI-enabled capabilities is stored, retained or disposed of following the conclusion of the call out.

In respect of the first issue, the extent to which Defence is exempt from the guardrails should be considered in light of the spectrum of potential uses for these capabilities beyond Defence’s core business of defending the nation and its national interests. If the guardrails are considered applicable to the use of certain AI-enabled capabilities in a law enforcement context, then they should equally be considered applicable to ADF activities in support of the civil authority. This consideration is even more acute having regard to the 2018 amendment to the threshold for call out in section 33 of the Defence Act. This provision now turns on whether the support will likely ‘enhance the ability of each of those States and Territories to protect the Commonwealth interests against the domestic violence’, rather than on whether law enforcement capabilities are insufficient or exhausted.

As for the second issue, guardrails should create expectations with regard to the handling and management of personal information collected during the situation of call out, and how these expectations might apply to AI-enabled systems. Consideration must also be given to how collected information would be transferred back to the civil authority at the conclusion of the call out authority (particularly for post-incident prosecution purposes), if those very civilian authorities would be unable to collect that information independently of ADF support.

In summary, any exemption of Defence from mandatory guardrails should only occur on the basis of a careful analysis of how this exemption would operate, with specific applications in mind, and with the requirement that Defence put in place transparent, fit-for-purpose, military-specific guardrails.

Regulatory options

Under questions 13–16 for this consultation, the Department asks which of the proposed legislative and regulatory options would best address the use of AI in high-risk settings and ensure that the proposed AI guardrails can adapt and respond to step-changes in technology.

The proposals paper rightly notes that each of the three options offers distinct benefits but also poses its own challenges. Indeed, none of the options has been identified as optimal. The few international precedents for AI-specific regulation—the EU AI Act and the Canadian Artificial Intelligence and Data Act—have yet to be tested in practice and to demonstrate their suitability and effectiveness. Thus, whichever approach Australia takes, its regulation and enforcement are likely to involve a learning-by-doing exercise, informed by the experiences and insights from the earlier stages of roll-out.

Having said this, we believe that option 1—a domain-specific approach involving adaptation of existing legislative and regulatory frameworks—is the least suitable option for regulating the use of AI in high-risk settings. While this approach may be the most responsive to specific industry needs, it risks creating inconsistency across different sectors and ‘siloed’ regulation. In particular, as many AI technologies apply across sectors, businesses operating in multiple sectors risk facing staggered and divergent obligations and, as a result, an increased regulatory burden.

Option 2 is the most suitable pathway to regulating high-risk AI. Reflecting on Australia’s experience with its Consumer Data Right (CDR) regime—Australia’s world-leading reform on cross-sectoral sharing of consumer data6—we believe that adopting a principles-based framework that guides existing legislative and regulatory arrangements offers a consistent approach to AI regulation and promotes unified standards through a single, central piece of legislation. Australia’s experience with the CDR shows that laying out foundational principles in an umbrella framework and subsequently concretising them in sector-specific regulations to account for sector-specific needs and challenges offers a meaningful approach to economy-wide regulation of data-sharing, especially when the rapid pace of technological advancement demands a faster yet more detailed response to an issue. Because of its practical advantages, Australia’s approach to regulating data-sharing systems is being increasingly adopted by other jurisdictions (including New Zealand, the UK, and the EU).

Conversely, option 3, which involves introducing a new cross-economy AI-specific Act, risks creating regulatory duplication, an increased compliance load and, were a central AI regulator established, likely confusion about the ‘right door’ when interacting with regulators. In 2017, reflecting on possible options for regulating nation-wide data sharing, the Australian Government’s Productivity Commission recommended that a new Data Sharing and Release Act be created.7 This suggestion was rejected, however, on the grounds that the development of such an Act would be a significant and labour-intensive undertaking that, if rushed, could result in mistakes and unforeseen repercussions.8 For the same reasons, any attempt to implement a new cross-economy AI-specific Act should be approached with particular caution.

Thank you again for the opportunity to participate in this consultation on a timely and important initiative. Please do not hesitate to contact us on info@apils.org if we can be of further assistance.

1  See generally Zena Assaad, Lauren Sanders and Rain Liivoja, ‘Global powers are grappling with “responsible” use of military AI: What would that look like?’ (The Conversation, 16 September 2024).

2 Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (5 September 2024) CETS No 225, article 3(4): ‘Matters relating to national defence do not fall within the scope of this Convention.’

3 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) [2024] OJ L, 12.7.2024, article 2(3).

4 See generally Rain Liivoja and Alison Duxbury, ‘Human rights of service personnel’ (2019) 28(2) Human Rights Defender 13–15.

5 This is regardless of whether call out occurs under Part IIIAAA of the Defence Act 1903 (Cth) or under the prerogative power to the extent the government considers Part IIIAAA has not displaced this prerogative.

6 Launched in Australia in July 2020, the CDR gives consumers the right to determine whether the data businesses hold about them should be released to other providers of their choice, so that these providers can offer them better value-for-money services. Because the regime governs decision-making in a consumer- and competition-focused environment, it is embedded in the Competition and Consumer Act 2010 (Cth), Part IVD. This primary legislation is given effect through subsidiary instruments: sector designation instruments, CDR Rules and technical data standards. See generally, Australian Government, ‘Giving You Choice and Control’ <www.cdr.gov.au>.

7 Productivity Commission, Australian Government, Data Availability and Use (Inquiry Report No 82, 31 March 2017) 2.

8 Treasury, Australian Government, Review into Open Banking: Giving Customers Choice, Convenience and Confidence (Report, December 2017) 11.