The rising threat of AI and deepfake scams in cybercrime

By Cian Fitzpatrick | 28th November 2024

The advances being made in artificial intelligence (AI) are shouted from the rooftops on an almost daily basis. What is less well known is the emergence of deepfake technologies, and how they are becoming powerful tools for cybercriminals.

It’s not difficult to see how advances in deepfake technology are enabling bad actors to commit large-scale fraud.

A striking example came to light in May 2024, when the UK engineering firm Arup suffered a $25 million loss in a deepfake attack. A finance team member in their Hong Kong office was tricked into transferring funds after attending a video conference where cybercriminals impersonated Arup’s Chief Financial Officer (CFO) using cloned voice and video. The realistic nature of the interaction, complete with other supposed colleagues (who were also fakes), convinced the employee of its authenticity, leading to the fraudulent transfer.

This incident underscores a stark reality: today, seeing is no longer believing. As AI becomes more accessible and sophisticated, the potential for harm grows exponentially.


AI’s effect on cybercrime

AI is reshaping the “traditional” tactics of cybercrime, refining older methods like phishing while also introducing entirely new threats. Deepfakes have emerged as a dangerous tool in the hands of hackers, and the Arup incident is just one of many alarming examples.

Synthetic voices are now almost indistinguishable from real ones. In addition, AI can create video replicas and generate text in the style of specific individuals with ease, and quickly. This evolving technology significantly raises the stakes.

The Arup case highlights a significant psychological dimension to these scams: when employees see and hear a familiar colleague, their natural defences are lowered. This exposes a key vulnerability, the human factor. An estimated 74% of all cyber attacks stem from human error.

While technology can identify and block many threats, the innate trust people place in what they see and hear can be exploited with devastating consequences.

Deepfakes are not the only threat. 

AI-powered social engineering techniques are giving hackers an edge, even as traditional cybersecurity measures improve. Social engineering manipulates individuals into sharing sensitive information or transferring money, often through convincing but fraudulent communications. 

Despite increased investments in cybersecurity, human error remains a weak point. Phishing emails, fake phone calls and now hyper-realistic video conferences can easily slip through even the most vigilant defences.

A new era of cyber threats

AI has made the hacking methods we’ve seen for years, like phishing, more effective while also enhancing social engineering scams. 

For example, natural language processing tools can now draft highly convincing phishing emails. Even more alarmingly, cybercriminals are finding ways to bypass multi-factor authentication (MFA), which is a cornerstone of digital security. Once considered the gold standard, MFA is no longer as reliable in protecting sensitive data or giving businesses (and people) peace of mind.

It’s not an exaggeration to say deepfake technology represents a significant shift in cyber threats. These tools can convincingly replicate a person’s appearance and voice. This makes it increasingly difficult to distinguish between genuine interactions and manipulations. The ability to deceive employees into believing they’re communicating with trusted colleagues via video calls is a growing risk.  

Deepfakes also heighten ransomware threats. As these technologies evolve rapidly, legal frameworks are scrambling to keep up. For instance, only recently did the UK make it illegal to create non-consensual, explicit deepfake images. Alarmingly, deepfake scams are now targeting younger victims, including students, with perpetrators demanding ransom to prevent the release of doctored images.

While deepfake technology is already advanced, it still has limitations. Cybercriminals cannot yet conduct real-time, interactive deepfake conversations convincingly, and subtle mismatches in language or tone may sometimes reveal the scam. However, the pace of innovation suggests these limitations won’t last long.

Another concern is the rise of AI-powered tools that streamline other types of cyberattacks, such as spear phishing and credential stuffing. Hackers can now train AI to mimic not just the tone but also the behavioural patterns of high-level executives, increasing the likelihood of successful fraud. The fusion of AI with other emerging technologies, such as blockchain for anonymising transactions, further complicates the fight against cybercrime. These developments underline the growing sophistication of hackers and cybercriminals.

Protecting against the deepfake threat

As deepfake and AI-driven cybercrimes become more sophisticated, organisations must strengthen internal safeguards to reduce human error. Training your team to be on the alert for deepfake threats is one of the biggest safety moats you can build around your business.

One key strategy is implementing robust payment authorisation protocols to prevent unauthorised transactions.  
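As a rough illustration of what such a protocol might look like in practice, the sketch below enforces dual approval for large transfers, so that no single employee, however convinced by a video call, can release funds alone. All names, thresholds and classes here are hypothetical, not a real system.

```python
from dataclasses import dataclass, field

# Hypothetical policy threshold: transfers at or above this amount
# require sign-off from two distinct approvers.
APPROVAL_THRESHOLD = 10_000

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    approvals: set = field(default_factory=set)

    def approve(self, approver_id: str) -> None:
        # Record who has signed off; a set means the same person
        # approving twice still counts as one approval.
        self.approvals.add(approver_id)

    def is_authorised(self) -> bool:
        # Small payments need one approver; large ones need two
        # distinct people, reached through independent channels.
        required = 1 if self.amount < APPROVAL_THRESHOLD else 2
        return len(self.approvals) >= required

request = PaymentRequest(amount=25_000_000, beneficiary="supplier-x")
request.approve("finance-officer")
print(request.is_authorised())  # False: a second, independent approver is required
request.approve("cfo-office")
print(request.is_authorised())  # True
```

The design point is simple: the second approval should come via a separate, verified channel (a known phone number, an in-person check), so a single convincing deepfake call is never enough to move money.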

Staff need to recognise potential deepfake scams and understand the risks posed by increasingly advanced cyber threats. Building a workplace culture that prioritises vigilance and adherence to best practices can make a significant difference.

AI needn’t only be in the hands of malicious actors.

Organisations should also consider adopting advanced cybersecurity technologies that leverage AI for defensive purposes. AI-powered detection systems, for instance, can help identify anomalies in communications, such as subtle inconsistencies in voice patterns or metadata irregularities in video files. These tools can add an extra layer of security to counteract the rapid evolution of AI-driven threats.
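To make the idea of anomaly detection concrete, here is a minimal sketch that flags a call whose measured voice feature (e.g. pitch variance) deviates sharply from a speaker's historical baseline, using a simple z-score test. Real detection systems use trained models over many features; the function name, threshold and baseline numbers below are illustrative assumptions only.

```python
import statistics

def is_anomalous(baseline: list[float], observed: float, threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from the mean of the verified historical baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        # A perfectly flat baseline: anything different is suspect.
        return observed != mean
    z = abs(observed - mean) / stdev
    return z > threshold

# Made-up pitch-variance values from verified past calls with this speaker
baseline = [0.91, 0.95, 0.88, 0.93, 0.90, 0.92, 0.89]

print(is_anomalous(baseline, 0.91))  # False: consistent with the speaker's history
print(is_anomalous(baseline, 1.40))  # True: flag the call for human review
```

A flagged call should not be auto-blocked but escalated for human verification, combining the technological and procedural layers described above.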

Ultimately, defending against deepfake and AI-powered threats requires a combination of technological, procedural and cultural approaches. By fostering resilience and adapting to this ever-changing landscape, businesses can better safeguard themselves against the rising tide of cybercrime.

Topsec Cloud Solutions partners with clients to keep their organisations safe. Numerous testimonials from the clients we work with can be found on our website. Contact us today for a no-obligation call on how we might help you strengthen your organisation’s defences against cybercrime.
