Examining social engineering tactics and the escalating risks facing organisations, SoSafe, the security awareness and human risk management provider, has revealed that the use of AI in cyberattacks has skyrocketed, with nearly nine in 10 respondents encountering some form of attempted breach.
The SoSafe report, 2025 Cybercrime Trends, surveyed 500 global security professionals as well as 100 SoSafe customers across 10 countries. AI was a major talking point among respondents, with 91 per cent of security experts anticipating a significant surge in AI-driven threats over the next three years.
Yet only 26 per cent expressed high confidence in their ability to detect these attacks. Given that 87 per cent of firms admitted to experiencing some form of AI-driven cyberattack in the last year, it is extremely concerning that just over one in four are confident in their ability to deal with one.
The SoSafe report found that obfuscation techniques, such as AI-generated methods to mask the origins and intent of attacks, were cited as the top concern by 51 per cent of security leaders. Additionally, 45 per cent said the creation of entirely new attack methods was their biggest worry, while 38 per cent cited the scale and speed of automated attacks.
Advancements in AI are enabling multichannel cyberattacks, blending tactics across email, SMS, social media and collaboration platforms. Ninety-five per cent of cybersecurity professionals agree they’ve noticed an increase in this style of attack in the past two years. A clear example is the attack on WPP’s CEO, where the attackers combined WhatsApp to build trust, Microsoft Teams for further interaction, and an AI-generated deepfake voice call to extract sensitive information and money.
Andrew Rose, CSO at SoSafe, commented: “Targeting victims across a combination of communications platforms allows them to mimic normal communication patterns, appearing more legitimate. Simplistic email attacks are evolving into 3D phishing, seamlessly integrating voice, video or text-based elements to create AI-powered, advanced scams.”
Using AI can stop cyberattacks, but it can also open up new attack avenues
Organisations must adopt AI to stay competitive in today’s landscape; its ability to meet consumer demand for personalisation makes it a must-have. As well as providing a tailored experience, it can scan for bad actors at a much faster rate than human risk observers could. Nonetheless, the tech isn’t impenetrable.
The adoption of in-house AI is inadvertently expanding organisations’ attack surfaces, subjecting them to new risks such as data poisoning and AI hallucinations. In fact, SoSafe’s survey found that 55 per cent of businesses have not fully implemented controls to manage the risks associated with their in-house AI solutions.
Rose added: “Even the benevolent AI that organisations adopt for their own benefit can be abused by attackers to locate valuable information, key assets or bypass other controls. Many firms create AI chatbots to provide their staff with assistance, but few have thought through the scenario of their chatbot becoming an accomplice in an attack, aiding the attacker to collect sensitive data, identify key individuals and uncover useful corporate insight.
“It is imperative that businesses couple their own AI adoption with a rigorous approach to security that protects against both technological and human vulnerabilities.”
With training, AI concerns don’t outweigh its benefits
While there are many concerns surrounding AI, Niklas Hellemann, CEO of SoSafe, maintains that AI is still one of the biggest allies an organisation can have in the fight against fraud. However, employees must be properly trained.
He says: “While AI undoubtedly presents new challenges, it also remains one of our greatest allies in protecting organisations against ever-evolving threats. However, AI-driven security is only as strong as the people who use it.
“Cybersecurity awareness is critical. Without informed employees who can recognise and respond to AI-driven threats, even the best technology falls short. By combining human expertise, security awareness and the careful application of AI, we can stay ahead of the curve and build stronger, more resilient organisations.”