Media release (24-238MR)

ASIC warns governance gap could emerge in first report on AI adoption by licensees

ASIC is urging financial services and credit licensees to ensure their governance practices keep pace with their accelerating adoption of artificial intelligence (AI).

The call comes as ASIC’s first state-of-the-market review of the use and adoption of AI by 23 licensees found there was potential for governance to lag AI adoption, despite current AI use being relatively cautious.

ASIC Chair Joe Longo said making sure governance frameworks are updated for the planned use of AI is crucial to licensees meeting future challenges posed by the technology.

‘Our review shows AI use by the licensees has to date focussed predominantly on supporting human decisions and improving efficiencies. However, the volume of AI use is accelerating rapidly, with around 60% of licensees intending to ramp up AI usage, which could change the way AI impacts consumers,’ the Chair said.

ASIC’s findings revealed nearly half of licensees did not have policies in place that considered consumer fairness or bias, and even fewer had policies governing the disclosure of AI use to consumers.

‘It is clear that work needs to be done—and quickly—to ensure governance is adequate for the potential surge in consumer-facing AI,’ Mr Longo said.

Mr Longo said AI could bring significant benefits, but without governance processes keeping pace, significant risks could emerge.

‘When it comes to balancing innovation with the responsible, safe and ethical use of AI, there is the potential for a governance gap – one that risks widening if AI adoption outpaces governance in response to competitive pressures,’ Mr Longo said.

‘Without appropriate governance, we risk seeing misinformation, unintended discrimination or bias, manipulation of consumer sentiment, and data security and privacy failures, all of which have the potential to cause consumer harm and damage to market confidence.’

Mr Longo said licensees must consider their existing obligations and duties when it comes to the deployment of AI and avoid simply waiting for AI laws and regulations to be introduced.

‘Existing consumer protection provisions, director duties and licensee obligations put the onus on institutions to ensure they have appropriate governance frameworks and compliance measures in place to deal with the use of new technologies,’ Mr Longo said. ‘This includes proper and ongoing due diligence to mitigate third-party AI supplier risk.’

‘We want to see licensees harness the potential for AI in a safe and responsible manner—one that benefits consumers and financial markets. This can only happen if adequate governance arrangements are in place before AI is deployed.’

Understanding and responding to the use of AI by financial firms is a priority for ASIC, which made addressing the poor use of AI a key focus area in its latest Corporate Plan.

ASIC will continue to monitor how licensees use AI, as it has the potential to significantly impact not just consumer outcomes but also the safety and integrity of the financial system. Where there is misconduct, ASIC will take enforcement action where appropriate.

Background

ASIC reviewed AI use across 23 licensees in the retail banking, credit, general and life insurance and financial advice sectors, where AI interacted with or impacted consumers.

During 2024, ASIC analysed information about 624 AI use cases that were in use or in development as at December 2023, and met with 12 of the 23 licensees in June 2024 to understand their approach to AI use and how they were considering and addressing the associated consumer risks.

Media enquiries: Contact ASIC Media Unit