Could you tell if a voice was an AI or your boss?

By Dr Jessica Barker MBE, author of Hacked: The Secrets Behind Cyber Attacks

When I was growing up, the question of whether an artificial intelligence (AI) voice could be distinguished from the voice of a human being belonged to the realm of science fiction. A lot has changed since then.

In today’s digital age, where AI continues to push the boundaries of innovation, the line between human and machine is becoming increasingly blurred. From chatbots handling customer queries to presentations being translated and voiced in multiple languages, AI-powered voices have the potential to enhance various aspects of our professional lives.

While there are many legitimate advantages to using AI-generated voice (and video), cyber criminals have identified malicious opportunities too. Now the question arises: could you tell the difference between the voice of an AI and that of your boss?

Voice synthesis technology has come a long way since its inception. Gone are the days of robotic, monotone voices characteristic of early AI systems. Today, AI voices boast natural cadences, intonations, and even regional accents, often making them indistinguishable from human speech to the untrained ear.

Deepfake technology, which emerged in 2017, enables anyone to swap faces and voices. A few years ago, this took technical skill, time, and a lot of data. Now, multiple apps and websites have sprung up, lowering the barrier to entry and making the production of deepfakes much quicker and easier.

Cyber criminals are already exploiting this, taking impersonation and phishing to the next level. Numerous cases are hitting the headlines, with even more going unreported.

As I cover in my book ‘Hacked: The Secrets Behind Cyber Attacks’, the first known case of AI-enabled voice fraud came in 2019. The CEO of a UK energy firm received a call that seemed to be from his boss, the chief executive of the firm’s parent company in Germany, who asked him to urgently send funds to a supplier. After the victim complied, he received another call claiming the funds had not arrived and requesting a further payment. Because this second call came from an Austrian phone number, the victim became suspicious, did not make the second payment, and the deepfake scam was identified. The transferred funds were subsequently tracked through a bank account in Hungary to Mexico and then on to other locations.

It’s not just voice, but video too. In February 2024, an employee at a Hong Kong company was duped into paying HK$200m (£20m/$25m) of her firm’s money to fraudsters on a deepfake video conference call, in which the criminals posed as senior officers of the company, including its Chief Financial Officer.

This is a new level of social engineering – the kind of psychological manipulation where criminals trick us through phishing emails, calls or messages, deceiving us into clicking malicious links, downloading malware-ridden documents or handing over information or money. For many years, cyber criminals have used social engineering as a core element of most cyber attacks. Now, with AI, they can add greater speed, scale, and sophistication to their nefarious activities. It can be hard enough to spot a well-crafted phishing email, let alone a phone call or video featuring someone who looks or sounds just like the person being impersonated.

However, not all the cases hitting the headlines end unhappily. In April 2024, an employee at the security company LastPass was targeted with a deepfake audio call impersonating the company’s CEO. They identified it as a scam because the call came over WhatsApp – outside their usual business communication channels – and because of other red flags on the call, including forced urgency.

This case is a great example of how we can spot AI-enabled deception. As deepfakes become more technically advanced, we must engage our critical thinking more deeply to identify anything out of the ordinary.

We cannot trust based on sight and sound alone. I share the same advice for social engineering of all kinds, whether over email or a video call: be alert to whether a communication is unexpected or unusual, notice when your emotional buttons are being pressed, and pause to verify identities and information before trusting what you are seeing or hearing. If you receive an urgent, unusual request from your boss that sets an alarm bell ringing, don’t dismiss that instinct. Taking a moment to verify the request could save a lot of money and stress.

AI shows how cyber criminals can use technology to evolve their tactics, and we must do the same to advance our defences. When we can’t believe our eyes and ears, an anti-scam mindset becomes even more critical. Digital critical thinking is the key to knowing whether we are being manipulated.

Dr Jessica Barker MBE is the author of Hacked: The Secrets Behind Cyber Attacks, published by Kogan Page