The increasingly frequent use of AI-cloned family voices in telephone scams (and how to protect yourself)

Artificial intelligence (AI) is being used to make calls that imitate the voices of people the recipient knows in order to scam them.

These calls use what is known as generative AI, that is, systems capable of creating text, images or any other medium, such as video, based on a user’s instructions.

Deepfakes have gained notoriety in recent years through a series of high-profile incidents, such as the use of the likeness of British actress Emma Watson in a series of suggestive ads that appeared on Facebook and Instagram.

There is also the widely shared and debunked 2022 video in which Ukrainian President Volodymyr Zelensky appeared to tell Ukrainians to lay down their arms.

Now the technology to create fake audio, a realistic copy of a person’s voice, is becoming increasingly common.

Getty Images: Fraudsters can find the material they need to carry out their scams online.

Giving weapons to the enemy

To create a realistic copy of someone’s voice, data is needed to train the algorithm. This means having many audio recordings of the person’s voice.

The more examples of the person’s voice that can be fed into the algorithm, the better and more convincing the final copy will be.

Many of us already share details of our daily lives on the internet. This means that the audio data needed to create a realistic copy of a voice could be readily available on social media.

But what happens once the copy is out there? What is the worst that can happen? A deepfake algorithm could allow anyone who owns the data to make you say whatever they want.

In practice, this can be as simple as typing text and having the computer say it out loud as if it were your voice.

Getty Images: In 2022, a fake video made with AI circulated in which President Zelensky asked Ukrainians to surrender to the Russians.

The main challenges

This possibility risks increasing the prevalence of misinformation. It can be used to try to influence international or national public opinion, as seen with the Zelensky video.

But the ubiquity and availability of these technologies also pose significant challenges at a personal level, especially given the growing trend of scam calls made with AI.

Many of us will have received a scam or phishing call telling us, for example, that our computer has been compromised and that we need to log in immediately, which could give the caller access to our data.

These scams are often very easy to spot, especially when the caller asks questions or requests information that someone from a legitimate organization would not.

However, now imagine that the voice on the other end of the phone is not a stranger, but sounds exactly like a friend or loved one. This introduces a whole new level of complexity, and panic, for the unfortunate recipient.

A recent story reported by CNN highlights an incident in which a mother received a call from an unknown number. When she answered the phone, it was her daughter. The daughter had allegedly been kidnapped and was calling her mother to demand a ransom.

In fact, the girl was safe and sound. The scammers had faked her voice.

This is not an isolated incident, and the scam has appeared in variations, including a supposed car accident in which the supposed victim calls their family to ask for money following the crash.

Getty Images: Experts advise people to stay alert and not make hasty decisions when receiving an unexpected call.

Old trick with new technology

This is not a new scam per se. The term "virtual kidnapping scam" has been around for several years. It can take many forms, but one of the most common is tricking victims into paying a ransom to free a loved one they believe is under threat.

The scammer tries to impose non-negotiable demands and get the victim to pay the ransom quickly, before they realize they are being deceived.

However, the emergence of powerful and readily available AI technologies has upped the ante significantly and made things more personal.

Hanging up on an anonymous caller is one thing, but it takes real confidence to hang up on someone who sounds exactly like your child or partner.

There is software that can be used to identify fake audio; it creates a visual representation of the sound called a spectrogram. When you are listening to the call, it may seem impossible to distinguish the voice from the real person's, but the voices can be differentiated when their spectrograms are analyzed side by side.
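For readers curious what such a side-by-side comparison involves, below is a minimal Python sketch using the open-source SciPy and Matplotlib libraries. The file names are placeholders, not files referenced in this article, and visual inspection of spectrograms is an aid rather than a reliable proof that a recording is fake.

```python
# A minimal sketch of a side-by-side spectrogram comparison.
# Assumption: two WAV recordings exist locally; "real_voice.wav" and
# "suspect_voice.wav" are placeholder names for illustration only.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

files = [("real_voice.wav", "Known real voice"),
         ("suspect_voice.wav", "Suspected AI clone")]

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
for ax, (path, title) in zip(axes, files):
    rate, samples = wavfile.read(path)      # sample rate (Hz) and waveform
    if samples.ndim > 1:                    # mix stereo down to mono
        samples = samples.mean(axis=1)
    f, t, Sxx = spectrogram(samples, fs=rate, nperseg=1024)
    # Plot power in decibels so quiet detail is visible
    ax.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-10), shading="gouraud")
    ax.set_title(title)
    ax.set_xlabel("Time (s)")
axes[0].set_ylabel("Frequency (Hz)")
plt.tight_layout()
plt.show()
```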

At least one group has offered detection software that can be downloaded, though such solutions may still require some technical knowledge to use.

Getty Images: Some companies are developing software that allows you to compare real voices with those suspected of being generated by AI.

Most people won’t be able to generate spectrograms, so what can you do when you’re not sure whether what you’re hearing is real? As with any other form of communication, be skeptical.

If you receive an unexpected call from a loved one asking for money or making requests that seem out of place, call them back or send them a text to confirm that you’re really talking to them.

As AI’s capabilities expand, the lines between fact and fiction become increasingly blurred. And we are not likely to be able to stop that technology. This means that people will have to become more cautious.

*Oliver Buckley is Associate Professor of Cybersecurity at the University of East Anglia (UK) and holds degrees in Computing and Computer Science from the universities of Liverpool and Wales.

*This article was published on The Conversation and is reproduced here under a Creative Commons license. Click here to read the original version.
