Report includes first-of-its-kind taxonomy of deepfake risks, threat scenarios, controls and mitigations specific to the financial services sector.
FS-ISAC, the member-driven, not-for-profit organisation that advances cybersecurity in the global financial system, has announced that its Artificial Intelligence Risk Working Group has published Deepfakes in the Financial Sector: Understanding the Threats, Managing the Risks.
The report introduces a first-of-its-kind deepfake taxonomy designed to help senior executives, board members and cyber leaders address the emerging risks posed by deepfakes.
Deepfakes – synthetic media generated using advanced AI – have become increasingly sophisticated, enabling threat actors to impersonate executives, employees and customers to bypass conventional security measures. By exploiting the human element of trust that underpins financial transactions and decision-making processes, deepfakes allow cybercriminals to defraud financial institutions and their customers, steal money and data, and sow confusion and disinformation.
FS-ISAC outlines the risks financial institutions face, including information security breaches, market manipulation, direct fraud against customers and clients, and reputational harm from disinformation campaigns. According to a recent report, losses from deepfake and other AI-generated fraud are projected to reach US$40 billion in the US alone by 2027, making it imperative for institutions to take decisive action.
“The potential damage of deepfakes goes well beyond the financial costs to undermining trust in the financial system itself,” said Michael Silverman, Chief Strategy & Innovation Officer, FS-ISAC. “To address this, organisations need to adopt a comprehensive security strategy that promotes a culture of vigilance and critical thinking to stay ahead of these evolving threats.”
While the deepfake threats facing financial institutions are significant and evolving, the taxonomy outlined in Deepfakes in the Financial Sector: Understanding the Threats, Managing the Risks helps financial firms identify the threat categories that pose the greatest risk to them and the controls to implement to mitigate those risks.
The path forward lies in strengthening existing controls and processes and educating employees and customers. In the coming weeks, two subsequent documents will provide detailed guidance for the technologists who implement security measures.
Hiranmayi Palanki, Distinguished Engineer at American Express and Vice Chair of FS-ISAC’s AI Risk Working Group, said: “Addressing deepfake technology requires more than just technical solutions – it also demands a cultural shift. Building a workforce that is alert and aware is crucial to safeguarding both security and trust from the potential threats posed by deepfakes.”
“Deepfake technologies are advancing very quickly, but our known controls can mitigate a lot of the risk,” said Lisa Matthews, Senior Director, Cybersecurity Compliance at Ally Financial and member of the AI Risk Working Group. “The taxonomy we’ve developed allows firms to build a holistic and methodical approach that is flexible enough to adapt as the technologies advance.”
You can download the paper here.