
Big increase in deepfakes: Banks risk being caught off guard

New report warns of an escalation of deepfake attacks and calls for rapid investment in preventative measures by banks.

29. Jan 2025

Armed with state-of-the-art AI technology in the form of advanced deepfakes, fraudsters and criminals are pushing Trojan horses through the financial sector's digital town gates on an unprecedented scale. 

As AI has evolved, the number of deepfake attacks has skyrocketed, according to a new report. 

Over the past three years, deepfakes have surged from almost nonexistent to 6.5% of all fraud attempts.

The trend has also caught the attention of the financial sector, where 76% of decision makers now recognise AI-based identity fraud as a threat.

That is apparent from the study The Battle Against AI-driven Identity Fraud, conducted by the European company Signicat, which specialises in digital identity solutions.

Prevention measures struggle to keep up

However, according to the survey, effective countermeasures against the new AI-based attack methods are lagging behind. Only 22% of financial services organisations have started implementing AI-powered fraud prevention measures.

This leaves the vast majority unprotected against increasingly sophisticated attacks using deepfakes, which only look set to intensify in 2025.

"The gap between awareness and action is widening, creating a ticking time bomb, especially for the financial sector and other regulated industries," Pinar Alpay, Chief Product & Marketing Officer at Signicat, says in a statement.

(The article continues after the box)

AI-based identity fraud

  • Deepfakes: Manipulated videos or audio recordings that make it difficult to distinguish between real and fake content.
  • Identity theft: AI is used to create realistic replicas of credentials or biometric data, for example.
  • Recommended countermoves: These include AI-driven fraud prevention measures and real-time fraud detection, multi-layered defence (KYC, continuous monitoring), employee awareness training and close collaboration with vendors.

A ticking time bomb

The report identifies 2025 as a tipping point, when AI fraudsters will accelerate their activities by combining multiple methods and further escalating the number of attacks.

Deepfakes are just one example of how the threat landscape is rapidly evolving, the report says. 

Based on responses from 1,200 fraud decision makers at banks, fintechs, payment providers and insurance companies in Europe, Signicat's survey highlights three key reasons for the limited progress in terms of curbing deepfake attacks. 

  1. Lack of expertise: 76% of respondents cite inadequate skills as a major barrier.
  2. Lack of time: 74% do not believe they have the time to address the problem with the urgency it requires.
  3. Budget shortfalls: 76% report insufficient funding to deploy robust fraud prevention technologies.

AI vs. AI

According to Signicat, financial services companies should prioritise a multi-layered defence strategy ranging from early risk assessment to robust identity verification and authentication tools. 
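To illustrate what such a multi-layered strategy can look like in practice, here is a minimal sketch of a layered onboarding check: each layer contributes a risk score, and a session is escalated as soon as the accumulated evidence crosses a threshold. All names, signals, and thresholds are illustrative assumptions for this article, not Signicat's actual product or API.

```python
from dataclasses import dataclass, field

@dataclass
class Session:
    """Hypothetical onboarding session with a few illustrative signals."""
    device_known: bool            # has this device been seen before?
    id_document_verified: bool    # did the KYC document check pass?
    liveness_score: float         # 1.0 = confident the face is live, 0.0 = likely deepfake
    flags: list = field(default_factory=list)

def layer_risk_assessment(s: Session) -> float:
    """Early risk assessment: cheap signals checked before anything else."""
    return 0.0 if s.device_known else 0.4

def layer_identity_verification(s: Session) -> float:
    """KYC-style document verification."""
    return 0.0 if s.id_document_verified else 0.5

def layer_liveness(s: Session) -> float:
    """AI-based liveness/deepfake detection on the selfie video."""
    return 1.0 - s.liveness_score

def evaluate(s: Session, threshold: float = 0.6) -> str:
    """Run layers in order; escalate early once accumulated risk is too high."""
    layers = [
        ("risk_assessment", layer_risk_assessment),
        ("identity_verification", layer_identity_verification),
        ("liveness", layer_liveness),
    ]
    total = 0.0
    for name, layer in layers:
        score = layer(s)
        if score > 0:
            s.flags.append(name)  # record which layers raised concerns
        total += score
        if total >= threshold:
            return "escalate"     # stop early; hand off to manual review
    return "allow"

trusted = Session(device_known=True, id_document_verified=True, liveness_score=0.95)
suspect = Session(device_known=False, id_document_verified=True, liveness_score=0.2)
print(evaluate(trusted))  # allow
print(evaluate(suspect))  # escalate
```

The design point of layering is that no single check has to catch a deepfake on its own: a manipulated video that fools the liveness layer can still be caught when combined with an unknown device or a failed document check.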

“Relying on obsolete solutions is the opposite of what’s needed,” Pinar Alpay explains, while pointing out the importance of using AI-based systems to detect and combat AI fraud.

“Organisations must invest in new technologies that enable AI-based fraud detection,” she adds.

For more information on deepfake fraud and AI-based identity protection, Signicat refers to its expert blog.
