Scammers use AI tools to pose as victims' relatives.

Recent reports describe families losing thousands of dollars to con artists utilising artificial intelligence voice cloning technology. It is increasingly clear that those with malicious intent can use AI to exploit unsuspecting victims.

Voice Cloning Fraud: The Growing Risk of AI Impersonation

According to the Fortune article, producing extremely convincing voices is becoming easier as artificial intelligence develops. With just a few seconds of source audio, users can clone a voice using AI text-to-speech tools.
Sadly, this technology can also be abused, for example when someone's voice is cloned for financial gain. The case of Canadian Benjamin Perkin is a prime example of the risk that voice cloning fraud poses.

Fraudsters Now Using Artificial Intelligence to Create Realistic Voice Clones

As technology develops, artificial intelligence opens up a world of new possibilities, including advances in voice synthesis. This means that with just a few seconds of audio, someone's voice can be duplicated almost perfectly and used to deceive others.

The Washington Post recently reported on Benjamin Perkin, a Canadian man whose elderly parents were duped into sending con artists thousands of dollars.


Subscription-based Identity Verification to Combat Malicious A.I. Voice Cloning

They had been called by a “lawyer,” who told them their son was in custody for the vehicular homicide of an American diplomat and demanded money for legal fees. The “lawyer,” who had cloned Perkin’s voice, duped the elderly parents into believing it was their son asking for the money.

The majority of these harmful AI voice cloning incidents have been linked to free, anonymous accounts. One business, ElevenLabs, has decided to curb this by introducing a new subscription-based plan that requires users to confirm their identity before using its voice synthesiser.

Verifying Senders of Suspicious Voicemails, Texts, and Emails

This should make it harder for con artists to spread false information with the synthesiser, since they will not want to risk being identified.

Because voice cloning can be so harmful, caution when disclosing personal information is essential. Before sending money or other sensitive information in response to a strange voicemail, text, or email, always confirm the sender's identity.
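The verification advice above can be sketched in code. This is a purely illustrative example, not any real product's method: one low-tech defence families sometimes use is a pre-agreed passphrase that a voice clone would not know. The passphrase, function name, and flow below are all hypothetical assumptions for the sketch.

```python
import hmac

# Hypothetical: a passphrase agreed in person, never shared online or by phone.
FAMILY_PASSPHRASE = "harbour-lantern-42"

def caller_is_verified(spoken_passphrase: str) -> bool:
    """Return True only if the caller supplies the pre-agreed passphrase.

    hmac.compare_digest is the idiomatic way to compare secrets in Python;
    for a phone call the timing-attack resistance hardly matters, but it
    costs nothing to use the safe comparison.
    """
    return hmac.compare_digest(spoken_passphrase.strip().lower(),
                               FAMILY_PASSPHRASE)

print(caller_is_verified("harbour-lantern-42"))  # genuine caller
print(caller_is_verified("please just send it"))  # likely a scammer
```

The same idea works with no code at all: hang up and call the relative back on a number you already have, then ask for the agreed phrase.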

The Consequences of A.I. Misuse: An Examination of the Benjamin Perkin Case

Staying aware of new technology can help users and their families avoid scams. Because the field is developing so quickly, the risk of artificial intelligence being misused must be taken seriously.

Benjamin Perkin's case demonstrates the losses that voice cloning fraud can cause. It is critical that safeguards be put in place so that businesses protect people from the risks this technology presents.


Stay connected with postvines for more information!!
