British financial regulators are currently engaged in urgent discussions with the government’s cyber security agency and major banks to assess potential risks posed by the latest artificial intelligence model from Anthropic, according to a Financial Times report on Sunday. Officials from the Bank of England (BoE), Financial Conduct Authority (FCA), and the Treasury are in talks with the National Cyber Security Centre (NCSC) to examine vulnerabilities in critical IT systems that Anthropic’s new AI model could highlight.
These high-level discussions aim to scrutinise potential weak points in essential financial infrastructure. Representatives from major British banks, insurers, and exchanges are expected to be briefed on the cyber security implications of the model, known as Claude Mythos Preview, at a meeting with regulators scheduled within the next fortnight. The initiative follows reports that U.S. Treasury Secretary Scott Bessent convened a similar meeting with major Wall Street banks to discuss the model’s cyber risk potential.
Anthropic, an AI startup that develops advanced AI systems designed to be helpful, harmless, and honest, describes the Claude Mythos Preview model as part of “Project Glasswing,” a controlled initiative allowing select organisations to use the unreleased model for defensive cyber security purposes. In a recent blog post, Anthropic said the model has already identified thousands of significant vulnerabilities across operating systems, web browsers, and other widely used software. Anthropic did not respond to a Reuters request for comment, while the BoE, FCA, NCSC, and UK Treasury declined to comment or were unavailable.