Australia’s corporate watchdog, the Australian Securities and Investments Commission (ASIC), has issued an urgent directive to the nation’s financial sector, urging immediate action against potential cyber risks stemming from frontier artificial intelligence systems such as Mythos. On Friday, ASIC published a letter to the financial services industry underscoring the critical need for robust cybersecurity practices. ASIC Commissioner Simone Constant highlighted the varied preparedness across Australian financial services organisations, stressing that more work is needed to keep pace with rapid advances in AI. Constant warned that risks previously on a longer horizon could now emerge “incredibly quickly,” with the potential for non-state actors to “weaponise” these capabilities.
Macquarie chief executive Shemara Wikramanayake acknowledged these evolving threats, stating that the bank is undertaking substantial technology programs to test its resilience against frontier AI models. Wikramanayake noted that Mythos, a system with high-level coding capabilities, has unearthed numerous long-standing vulnerabilities across a range of systems. Anthropic, the AI safety and research company behind the model, launched Claude Mythos Preview under its tightly restricted Project Glasswing program, which involves major technology firms including Amazon, Microsoft, Nvidia, and Apple. Wikramanayake added that businesses outside Project Glasswing are independently working to identify and patch their own vulnerabilities.
This latest warning from ASIC echoes a similar concern raised by Australia’s banking regulator last month, which observed that the domestic financial services industry’s information security practices are struggling to keep up with the pace of AI change. Commissioner Constant reiterated the gravity of the situation, declaring, “The clock is at a minute to midnight – if you aren’t on top of your cyber resilience already, the time to act and prepare is right now.” Research published by the Cambridge Centre for Alternative Finance in April indicated that financial institutions are adopting AI at more than double the rate of their regulatory supervisors, with only two in ten watchdogs reporting “advanced AI adoption” – raising questions about regulators’ ability to monitor emerging harms effectively.