People are being taken advantage of by deep fake scam calls that use artificial intelligence to mimic the voices of people you might know. These calls employ so-called generative AI, which describes systems that can produce text, images, or any other type of media, including video, based on a user's input. Deep fakes have gained prominence over the past several years through a number of high-profile incidents, such as the use of actress Emma Watson's likeness in a series of suggestive advertisements that appeared on Facebook and Instagram.
The ability to produce an audio deep fake, a convincing imitation of a person's voice, is becoming more widely accessible. Training an algorithm to reproduce someone's voice accurately requires data: numerous audio recordings of the chosen target speaking. The more examples of the person's voice you can provide, the better and more believable the replica the algorithm will produce.
Many of us already share details of our daily lives on the internet. This means the audio needed to produce a convincing voice replica may be readily available on social media. But what happens once a copy exists?
What could go wrong?
Thanks to a deep fake algorithm, anyone in possession of that data could make "you" say whatever they wanted. In practice, this is as simple as writing some text and asking the computer to read it aloud in your voice.
Major risks
- This capability risks making audio misinformation and disinformation more common. As the deep fake "videos" of Volodymyr Zelensky showed, the technology can be utilised to try to sway public opinion on a national or global scale.
- But the widespread availability and accessibility of these technologies also pose substantial problems closer to home, particularly given the rising popularity of "AI scam calls". Many of us have received a phishing or scam call claiming our computer has been compromised and demanding that we log in right away, potentially granting the caller access to our data.
- When the caller makes demands that someone from a genuine organisation would not, it is often easy to spot the scam. But imagine that the person on the other end of the phone sounds just like a friend or loved one rather than a stranger. This adds a whole new level of difficulty, and anxiety, for the unfortunate recipient.
- A recent CNN piece highlighted the case of a mother who received a call from an unknown number. When she picked up, she heard her daughter's voice. The "daughter" had supposedly been kidnapped and was calling to pass on a ransom demand.
- In fact, the girl was safe and unharmed; the con artists had deep-faked her voice. This pattern is common. In other variations of the scam, the supposed victim calls their family for financial help after a fictitious car accident.
Deep fake scams are an old trick with modern technology
The phrase "virtual kidnapping scam" has been used for a while; thus, this con is not new. It can take many different forms, but a typical strategy is to con victims into paying a ransom to release a loved one they think is in danger.
The con artist tries to establish unquestioning compliance so the victim pays a speedy ransom before the fraud is discovered. However, the rise of powerful and accessible AI technology has raised the stakes considerably, and made things far more personal. Hanging up on an anonymous caller is one thing; it takes real confidence in your own judgement to hang up on a caller who sounds exactly like your child.
Software exists that can help detect deep fakes by producing a spectrogram, a visual representation of the audio. It can be difficult to tell a cloned voice from the real one just by listening to the call, but when their spectrograms are compared side by side, the differences may become apparent. At least one organisation has made detection software available for download, although some technical expertise may still be needed to use such tools.
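For readers curious what such a comparison involves, below is a minimal Python sketch that plots two spectrograms side by side using SciPy and Matplotlib. The file names, the mono down-mix, and the FFT window size are illustrative assumptions, not part of any particular detection product, and real detection tools do considerably more than a visual comparison.

```python
# Minimal sketch: compare spectrograms of a known recording and a suspect call.
# Assumes two local WAV files; file names here are placeholders.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

def plot_spectrogram(ax, path, title):
    rate, samples = wavfile.read(path)
    if samples.ndim > 1:          # mix stereo down to mono
        samples = samples.mean(axis=1)
    freqs, times, sxx = spectrogram(samples, fs=rate, nperseg=1024)
    # Plot on a log (dB) scale so quieter harmonics are visible; the small
    # offset avoids taking log of zero.
    ax.pcolormesh(times, freqs, 10 * np.log10(sxx + 1e-10), shading="gouraud")
    ax.set_title(title)
    ax.set_xlabel("Time (s)")
    ax.set_ylabel("Frequency (Hz)")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
plot_spectrogram(ax1, "known_voice.wav", "Known voice")
plot_spectrogram(ax2, "suspect_call.wav", "Suspected deep fake")
plt.tight_layout()
plt.show()
```

Even with a plot like this, interpreting the differences takes practice, which is why downloadable detection tools still assume a degree of technical know-how.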
Most people won't be able to generate spectrograms, so what can you do if you are unsure whether what you are hearing is genuine? Treat it with the same scepticism you would apply to any other form of media. If you get an unexpected call from a loved one asking for money or making requests that seem out of character, call or text them back to confirm you really are speaking to them.
As AI's capabilities advance, the line between fact and fiction will only become hazier, and we won't be able to put the technology back in the box. People will therefore need to become more cautious.