Deepfakes in the financial world: A growing threat to trust and security – Insights from the FS-ISAC report

The increasing spread and development of deepfake technologies pose a growing threat to the financial industry. This advanced form of digital manipulation can mimic real-life scenes and voices, making it a powerful tool for fraudsters and criminals. A recent report by FS-ISAC entitled “Deepfakes in the Financial Sector: Understanding the Threats, Managing the Risks” sheds light on these dangers and provides valuable advice on how the financial sector can arm itself against these threats. At the same time, a study by Regula provides insight into the financial impact of deepfake fraud.

According to the latest study by Regula, the average losses caused by deepfake fraud in the financial sector amount to over USD 600,000 per company. Fintech companies reported average losses of USD 637,000, while traditional banks reported losses of around USD 570,000. Most alarmingly, 23% of the financial organizations surveyed suffered losses of more than USD 1 million due to AI-generated fraud, twice the global average.

These financial burdens also vary depending on the region. Mexico reported the highest average losses at USD 627,000, followed by Singapore at USD 577,000 and the USA at USD 438,000. In Germany, losses averaged USD 394,000, underlining the global urgency for robust security strategies.

Discrepancy between trust and reality

Although 56% of the companies surveyed stated that they felt confident in their ability to detect deepfakes, only 6% actually managed to avoid financial losses from such attacks. This discrepancy shows that many companies overestimate their ability to protect themselves against sophisticated deepfake attacks. The need for effective protective measures is therefore urgent.

Deepfake fraud in the financial sector: scenarios and risks in detail

The spread of deepfakes in the financial sector is making it increasingly difficult to secure digital identities and transactions. These technological forgeries, which often appear deceptively genuine, offer criminals a wide range of attack opportunities. FS-ISAC’s report “Deepfakes in the Financial Sector: Understanding the Threats, Managing the Risks” provides detailed insights into the specific threats and the industry’s vulnerability to such manipulation. In the following, we highlight the various deepfake scenarios and analyze the potential risks for financial institutions.

1. C-suite impersonations and attacks on executives

Impersonating high-ranking executives such as CEOs or CFOs, who in many companies hold far-reaching decision-making and access rights, is currently the most common method. Attackers use deepfakes to create audio or video files in which these executives appear to give instructions or approve financial transactions. An example from the FS-ISAC report shows how an attacker used a deepfaked CFO in a Zoom conference to trigger a transfer of USD 25 million.

As more audio and video material about executives becomes publicly available, it becomes easier for attackers to create deepfakes that employees perceive as genuine. Financial institutions should therefore pay particular attention to requests from C-level executives arriving via unusual communication channels and introduce multi-level verification processes to intercept such attacks.
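Such a multi-level verification process can be sketched as a simple policy gate: a payment instruction attributed to an executive is never executed on the strength of a video or voice call alone, and higher amounts require out-of-band confirmation and a second approver. All thresholds, channel names, and confirmation labels below are illustrative assumptions, not prescriptions from the FS-ISAC report.

```python
from dataclasses import dataclass, field

@dataclass
class PaymentRequest:
    """A payment instruction attributed to an executive (illustrative model)."""
    requester: str
    amount_usd: float
    channel: str                           # e.g. "video_call", "email", "ticketing"
    confirmations: set = field(default_factory=set)

# Hypothetical policy thresholds -- tune to the institution's risk appetite.
CALLBACK_THRESHOLD_USD = 10_000            # out-of-band callback required above this
DUAL_APPROVAL_THRESHOLD_USD = 100_000      # second human approver required above this
TRUSTED_CHANNELS = {"ticketing"}           # requests must originate in a system of record

def may_execute(req: PaymentRequest) -> bool:
    """Return True only if the request clears every required verification step."""
    if req.channel not in TRUSTED_CHANNELS:
        return False                       # a video or voice call alone never authorizes a payment
    if req.amount_usd > CALLBACK_THRESHOLD_USD and "callback" not in req.confirmations:
        return False                       # requester must be re-contacted on a known number
    if req.amount_usd > DUAL_APPROVAL_THRESHOLD_USD and "second_approver" not in req.confirmations:
        return False                       # a second, independent person must sign off
    return True

# A "CFO on a video call" request fails until the out-of-band checks are recorded.
zoom_request = PaymentRequest("cfo", 25_000_000, channel="video_call")
print(may_execute(zoom_request))           # False: wrong channel, no confirmations
```

The point of the sketch is that the deepfake itself never reaches a decision point: even a perfect imitation on a video call fails the channel check, and the out-of-band steps force contact through paths the attacker does not control.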

2. Fraud against end customers through manipulation of biometric data

Many financial institutions now rely on biometric verification methods, such as voice or facial recognition, to secure customer accounts. However, deepfake technology opens up new ways to circumvent these biometric systems. Fraudsters can use synthetic voices or facial data to successfully impersonate legitimate customers and conduct financial transactions. The report describes how deepfake voices can be used to trick voice verification systems and debit funds from bank accounts.

The risk for financial service providers is particularly high here, as biometric verification was previously considered a very secure means of authentication. Deepfake technologies undermine this security and thereby jeopardize customer trust. To counteract this risk, experts recommend using behavioral analysis mechanisms and multi-factor authentication in addition to biometric authentication, as described in the FS-ISAC report.
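The recommendation to pair biometrics with behavioral analysis and a second factor can be illustrated as a step-up decision: a biometric match score is combined with contextual signals, and anything short of a strong match in a familiar context triggers an additional factor rather than an immediate allow. The thresholds and signal names here are illustrative assumptions, not a specific vendor's logic.

```python
# Hedged sketch: a biometric match score alone is never sufficient; unusual
# behavioral context escalates the login to a second authentication factor.

def authentication_decision(biometric_score: float,
                            device_known: bool,
                            typical_location: bool,
                            typical_transaction: bool) -> str:
    """Return 'allow', 'step_up' (require a second factor), or 'deny'."""
    # Behavioral risk: each unusual signal raises suspicion that the
    # biometric sample may be synthetic or replayed.
    risk = sum(not signal for signal in
               (device_known, typical_location, typical_transaction))

    if biometric_score < 0.5:
        return "deny"                 # match too weak, regardless of context
    if biometric_score > 0.9 and risk == 0:
        return "allow"                # strong match and familiar behavior
    return "step_up"                  # otherwise demand e.g. an app-based OTP

print(authentication_decision(0.95, True, True, True))    # allow
print(authentication_decision(0.95, False, False, True))  # step_up
```

A deepfaked voice that achieves a high match score from an unknown device at an unusual location would still land in the step-up branch, which is exactly the layering the report argues for.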

3. Market manipulation through fake news and events

Another risk for the financial sector lies in the use of deepfakes for targeted market manipulation. Deepfake videos that simulate market-moving events pose a real threat. An example from 2023 shows how a fake video of an explosion at the Pentagon was distributed via social media and caused the Dow Jones to fall by 85 points within minutes. Such fake news can be used deliberately to trigger panic selling or artificial price rises, allowing attackers to make significant financial gains.

This type of threat aims to undermine public and investor confidence and influence market movements. Financial institutions should therefore focus on quick and effective measures to detect fake news and work closely with social networks and news services to prevent or slow down the spread of deepfakes.

4. Social engineering through deepfake-based phishing attacks

Voice phishing or vishing attacks also pose an increasing risk. Deepfake-generated voices are used to imitate familiar people and trick victims into handing over sensitive data or performing certain actions. The report describes how deepfake voices were used in targeted phone calls to trick employees into disclosing confidential information or authorizing transactions. Employees in customer service and IT support are particularly at risk, as these positions regularly deal with inquiries and identity checks.

The risk of such deepfake attacks is further exacerbated by the increasing professionalism and realism of fake voices. As these attacks specifically target human trust, the financial sector should invest more in targeted employee training and awareness measures for such threats.

5. Forged identity documents and recruitment fraud

Another scenario concerns the creation of false identity documents using deepfake technology, which can lead to vulnerabilities in hiring and authentication processes. The report describes how a deepfake was used as proof of identity to gain access to financial resources or to fraudulently obtain employment. Companies also report cases in which deepfakes were used to circumvent compliance requirements, for example to place individuals in key positions unnoticed.

Areas such as customer authentication and the application process, where digital identity verification is increasingly being used, are particularly affected. The FS-ISAC therefore recommends that financial institutions carry out strict checks and rely on additional levels of identification to prevent such fraud attempts.
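The "additional levels of identification" recommended above can be sketched as independent verification layers that must all pass, so a forged document alone is not enough even if it fools the first check. The layer names below are illustrative assumptions about a typical onboarding flow, not a list from the FS-ISAC report.

```python
# A minimal sketch of layered identity verification for onboarding or hiring:
# each layer is checked independently, and a single failure blocks admission.

ONBOARDING_LAYERS = (
    "document_authenticity",   # security features, fonts, MRZ consistency
    "liveness_check",          # the applicant is a live person on camera
    "registry_crosscheck",     # stated data matches an external source
)

def onboarding_passed(results: dict) -> bool:
    """Admit an applicant only if every verification layer reports success."""
    return all(results.get(layer, False) for layer in ONBOARDING_LAYERS)

# A deepfaked document that passes the visual check still fails overall.
print(onboarding_passed({"document_authenticity": True}))  # False
```

The design choice is deliberate: layers that rely on different evidence (document forensics, live presence, external records) force an attacker to defeat several unrelated systems instead of one.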

Conclusion: The need for a comprehensive approach to security

The scenarios described show how diverse the threats posed by deepfakes in the financial sector are. Each type of attack – from C-suite impersonation to biometric manipulation to targeted phishing attacks – exploits human trust and technical vulnerabilities to harm organizations. The financial industry faces the challenge of combating these threats through innovative technologies and comprehensive security strategies.

As the FS-ISAC report and the Regula study make clear, a reactive approach is not enough. Financial institutions should take proactive measures, including:

  • Implementation of advanced deepfake detection systems: The use of technologies such as DeepDetectAI, which combine image, audio and behavioral analytics, can help detect and stop deepfake-based fraud attempts at an early stage.
  • Multi-factor authentication processes: In addition to biometric features, additional authentication factors should be introduced to ensure greater security.
  • Comprehensive employee training: Raising employee awareness of deepfake threats is essential, especially in high-risk positions such as customer service and financial management.
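The first measure above, combining image, audio, and behavioral analytics, can be pictured as a simple detector ensemble: several weak signals each estimate the probability that an input is synthetic, and a weighted vote decides whether to flag the session for manual review. Detector names, weights, and the threshold are illustrative assumptions, not DeepDetectAI's actual interface.

```python
# Hedged sketch of multi-modal deepfake detection via a weighted ensemble.

DETECTOR_WEIGHTS = {
    "image_artifacts": 0.4,   # e.g. face-boundary or compression anomalies
    "audio_artifacts": 0.4,   # e.g. spectral traces of voice synthesis
    "behavior":        0.2,   # e.g. unusual blink rate or response latency
}
REVIEW_THRESHOLD = 0.5        # fused score above this flags the session

def fused_score(detector_scores: dict) -> float:
    """Weighted average of per-detector probabilities (each in [0, 1])."""
    return sum(DETECTOR_WEIGHTS[name] * score
               for name, score in detector_scores.items())

def should_flag(detector_scores: dict) -> bool:
    """True if the combined evidence warrants manual review."""
    return fused_score(detector_scores) >= REVIEW_THRESHOLD

scores = {"image_artifacts": 0.8, "audio_artifacts": 0.7, "behavior": 0.2}
print(should_flag(scores))  # True: 0.8*0.4 + 0.7*0.4 + 0.2*0.2 = 0.64
```

Fusing modalities is what makes such systems robust: a deepfake that fools the visual detector can still be caught by audio artifacts or implausible behavior, so the attacker must beat all channels at once.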

The threat of deepfakes is dynamic and requires continuous adaptation and innovation in the security sector. By combining technology solutions, training and strategic partnerships, the financial sector can sustainably strengthen its resilience to this new type of cyber threat and protect the trust of its customers.

Sources:

https://www.businesswire.com/news/home/20241031956820/de

https://www.fsisac.com/knowledge/ai-risk
