    Less Than 1% Can Identify a Deepfake, Finds iProov, Highlighting the Growing Threat of GenAI

    By FintechFetch, February 12, 2025


    “I could spot an AI deepfake easily.” This seems to be the attitude most people have towards artificial intelligence (AI)-generated videos and images. However, a warning has been issued by iProov, the science-based biometric identity verification firm, which reveals that just 0.1 per cent of the 2,000 participants in its latest test were able to successfully identify the fake content.

    AI-generated videos and images are often created to impersonate people, and based on iProov’s findings, there is a large chance impersonators would succeed. Both US and UK consumers were tested and asked to examine a variety of deepfake content, including images and videos. iProov notes that in this test scenario participants were told to look out for potential use of AI; in the real world, where consumers are unsuspecting, the success rate of impersonation would likely be even higher.
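
    To put the headline figure in perspective, here is a minimal back-of-the-envelope sketch in Python, using only the sample size and success rate reported above:

        # Back-of-the-envelope check using only the figures reported in the study.
        participants = 2_000   # consumers tested by iProov
        success_rate = 0.001   # 0.1 per cent successfully identified the deepfakes

        successful = participants * success_rate
        print(f"{successful:.0f} of {participants} participants ({success_rate:.1%}) "
              "spotted the deepfake content shown to them.")
        # -> 2 of 2000 participants (0.1%) spotted the deepfake content shown to them.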

    Andrew Bud, founder and CEO, iProov

    “Just 0.1 per cent of people could accurately identify the deepfakes, underlining how vulnerable both organisations and consumers are to the threat of identity fraud in the age of deepfakes,” says Andrew Bud, founder and CEO of iProov. “And even when people do suspect a deepfake, our research tells us that the vast majority of people take no action at all.

    “Criminals are exploiting consumers’ inability to distinguish real from fake imagery, putting our personal information and financial security at risk. It’s down to technology companies to protect their customers by implementing robust security measures. Using facial biometrics with liveness provides a trustworthy authentication factor and prioritises both security and individual control, ensuring that organisations and users can keep pace and remain protected from these evolving threats.”

    An imminent threat

    Deepfakes pose an overwhelming threat in today’s digital landscape and have evolved at an alarming rate over the past 12 months. iProov’s 2024 Threat Intelligence Report highlighted a 704 per cent increase in face swaps (a type of deepfake) alone. Their ability to convincingly impersonate individuals makes them a powerful tool for cybercriminals seeking to gain unauthorised access to accounts and sensitive data.

    Deepfakes can also be used to create synthetic identities for fraudulent purposes, such as opening fake accounts or applying for loans. This poses a significant challenge to the ability of humans to discern truth from falsehood and has wide-ranging implications for security, trust, and the spread of misinformation.

    iProov test findings

    Looking at which age ranges were most susceptible, 30 per cent of 55-to-64-year-olds were tricked by the content, and 39 per cent of those aged over 65 had not even heard of deepfakes. While this highlights a significant knowledge gap around the latest technology and how vulnerable this age group is to the growing threat, older consumers were not the only ones unaware of deepfakes: 22 per cent of all test participants had never heard of deepfakes before the study.

    Despite their poor performance, more than 60 per cent of people remained confident in their ability to detect deepfakes, regardless of whether their answers were correct. This was particularly true of young adults (18-34). While it is concerning that older generations have not heard of the technology, it is equally alarming that so many young people have such a false sense of security.

    Deepfake videos proved harder to spot than images, with 36 per cent of participants struggling more with video content. This vulnerability raises serious concerns about the potential for video-based fraud, such as impersonation on video calls or in scenarios where video is used for identity verification.

    A question of trust

    Social media platforms are seen as breeding grounds for deepfakes, with Meta (49 per cent) and TikTok (47 per cent) cited as the most prevalent places to encounter them online. This, in turn, has reduced trust in online information and media: 49 per cent trust social media less after learning about deepfakes, and just one in five would report a suspected deepfake to social media platforms.

    Additionally, three in four people (74 per cent) worry about the societal impact of deepfakes, with ‘fake news’ and misinformation being the top concern (68 per cent). This fear is particularly pronounced among older generations, with up to 82 per cent of those aged 55+ expressing anxieties about the spread of false information.

    Fewer than a third of people (29 per cent) take any action when encountering a suspected deepfake, most likely because 48 per cent say they don’t know how to report deepfakes, while a quarter don’t care if they see a suspected deepfake.

    Additionally, most consumers fail to actively verify the authenticity of information online, increasing their vulnerability to deepfakes. Despite the rising threat of misinformation, just one in four searches for alternative information sources when they suspect a deepfake, and only 11 per cent critically analyse the source and context of information to determine whether it is a deepfake, meaning the vast majority are highly susceptible to deception and the spread of false narratives.

    Not all hope is lost

    With deepfakes becoming increasingly sophisticated, humans alone can no longer reliably distinguish real from fake and instead need to rely on technology to detect them.

    To combat the rising threat of deepfakes, organisations should look to adopt solutions that use advanced biometric technology with liveness detection, which verifies that an individual is the right person, a real person, and is authenticating right now. These solutions should include ongoing threat detection and continuous improvement of security measures to stay ahead of evolving deepfake techniques.
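
    As a rough illustration of that “right person, real person, right now” check, here is a minimal sketch of a server-side verification step in Python. The BiometricProvider interface, its check_liveness and match_face methods, and the threshold value are hypothetical placeholders standing in for a vendor SDK, not iProov’s actual API:

        from typing import Protocol

        class BiometricProvider(Protocol):
            """Hypothetical vendor interface; a real deployment would call the vendor's SDK/API."""
            def check_liveness(self, frame: bytes) -> bool: ...
            def match_face(self, frame: bytes, enrolled_template: bytes) -> float: ...

        MATCH_THRESHOLD = 0.90  # illustrative value; tuned per deployment in practice

        def verify_user(provider: BiometricProvider,
                        frame: bytes,
                        enrolled_template: bytes) -> bool:
            """Right person, real person, right now: reject if either check fails."""
            if not provider.check_liveness(frame):   # real person, present at capture time
                return False
            score = provider.match_face(frame, enrolled_template)
            return score >= MATCH_THRESHOLD          # right person

    Running the liveness check before the face match means a replayed or AI-generated video is rejected before any identity comparison takes place, which is the property described above.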

    There must also be greater collaboration between technology providers, platforms, and policymakers to develop solutions that mitigate the risks posed by deepfakes.

    Professor Edgar Whitley, a digital identity expert at the London School of Economics and Political Science, adds: “Security experts have been warning of the threats posed by deepfakes for individuals and organisations alike for some time. This study shows that organisations can no longer rely on human judgment to spot deepfakes and must look to alternative means of authenticating the users of their systems and services.”


