As AI technology advances, the rise of deepfakes poses an ever-evolving threat. These manipulated images, videos, and audio clips use artificial intelligence to create convincing but false representations of people and events.
Of particular concern is voice spoofing, also known as voice cloning, which uses AI to create a realistic-sounding recording of someone’s voice. Fraudsters have used voice deepfakes to replicate familiar voices, such as a relative or a bank representative, tricking consumers into parting with money or providing sensitive information.
In one recent incident, scammers convinced a pair of grandparents that their grandson was in jail and needed money for bail, using a replica of his voice to plead for help.
“We were sucked in,” the grandmother told The Washington Post. “We were convinced that we were talking to Brandon.”
How do you protect yourself against such sophisticated trickery?
“Consumers should be cautious of unsolicited calls saying a loved one is in harm’s way or messages asking for personal information, particularly if they involve financial transactions,” says Vijay Balasubramaniyan, co-founder and CEO of Pindrop, a voice authentication and security company that uses artificial intelligence to protect businesses and consumers from fraud and abuse.
He offers these five signs that the voice on the other end may be AI-generated.
Look for long pauses and signs of a distorted voice
Deepfakes still require the attacker to type out sentences that are then converted into the target’s voice. This takes time and often produces long pauses, which can be unsettling, especially when the request on the other end is urgent and laced with emotional manipulation.
“But these long pauses are tell-tale signs of a deepfake system being used to synthesize speech,” says Balasubramaniyan.
Consumers should also listen carefully to the voice on the other end of the call. If it sounds artificial or distorted in any way, that could be a sign of a deepfake, as could unusual speech patterns or an unfamiliar accent.
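For technically minded readers, the pause cue can be made concrete. The sketch below is a minimal illustration, not Pindrop's method: it assumes a recorded mono call loaded as a NumPy array of samples in [-1, 1], and the frame size, energy threshold, and minimum gap length are illustrative values that would need tuning in practice.

```python
import numpy as np

def long_pauses(samples, sample_rate, threshold=0.01, min_gap_s=2.0):
    """Flag unusually long silent gaps in a mono signal in [-1, 1].

    The 20 ms frame size, energy threshold, and minimum gap length
    are illustrative defaults, not calibrated detection parameters.
    """
    frame = int(0.02 * sample_rate)          # 20 ms analysis frames
    n_frames = len(samples) // frame
    rms = np.array([np.sqrt(np.mean(samples[i * frame:(i + 1) * frame] ** 2))
                    for i in range(n_frames)])
    silent = rms < threshold                 # frames below the energy floor

    gaps, start = [], None
    for i, is_silent in enumerate(silent):
        if is_silent and start is None:
            start = i                        # a silent run begins
        elif not is_silent and start is not None:
            if (i - start) * frame / sample_rate >= min_gap_s:
                gaps.append((start * frame / sample_rate,
                             i * frame / sample_rate))
            start = None                     # the silent run ended
    if start is not None and (n_frames - start) * frame / sample_rate >= min_gap_s:
        gaps.append((start * frame / sample_rate,
                     n_frames * frame / sample_rate))
    return gaps                              # list of (start_s, end_s) pauses
```

For example, calling long_pauses(samples, 16000) on a recorded call returns the start and end times of every silent stretch of two seconds or more; several such gaps in the middle of an urgent conversation would be one reason for suspicion.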
Be skeptical of unexpected or out-of-character requests
If you receive a phone call or message that seems out of character for the person or organization contacting you, it could be a sign of a deepfake attack. This is especially true if the caller leans on emotional manipulation and high-pressure tactics to get you to act. Hang up and independently call the contact back using a known phone number.
Verify the identity of the caller
Consumers should ask the caller to confirm details only the real person would know, or verify the caller’s identity through a separate channel or method, such as an official website or an email. This can help confirm that the caller is who they claim to be and reduce the risk of fraud.
Stay informed about the latest deepfake technology
Consumers should stay up to date with the latest developments in voice deepfake technology and how fraudsters use it to commit scams. By staying informed, you can better protect yourself against potential threats. The FTC lists the most common phone scams on its website.
Invest in liveness detection
Liveness detection is a technique for catching spoof attempts by determining whether the source of a biometric sample, such as a voice, is a live human being or a fake. Companies such as Pindrop offer this technology to help businesses tell whether they are talking to a real human or a machine impersonating one; a toy illustration of one acoustic cue such systems might weigh appears after the quote below.
“Consumers also need to ensure they do business with companies that are aware of this risk and have taken steps to protect their assets with these countermeasures,” says Balasubramaniyan.
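As a rough illustration of the kind of signal a liveness system might examine, the sketch below computes per-frame spectral flatness, one of many possible acoustic statistics. This is a toy example under loose assumptions, not Pindrop's technology or a working detector: real systems combine large numbers of learned features, and no single statistic reliably separates live from synthetic speech.

```python
import numpy as np

def spectral_flatness_profile(samples, sample_rate, frame_s=0.03):
    """Per-frame spectral flatness of a mono signal in [-1, 1].

    Flatness near 1.0 means a noise-like spectrum; near 0.0 means
    a tonal one. This is only one of many cues a real liveness
    system would consider, shown here for illustration.
    """
    frame = int(frame_s * sample_rate)       # ~30 ms analysis frames
    flatness = []
    for i in range(0, len(samples) - frame, frame):
        spectrum = np.abs(np.fft.rfft(samples[i:i + frame])) + 1e-12
        geo_mean = np.exp(np.mean(np.log(spectrum)))
        flatness.append(geo_mean / np.mean(spectrum))
    return np.array(flatness)                # one flatness value per frame
```

Live speech tends to vary noticeably from frame to frame, so an unusually uniform flatness profile across a call could be one weak hint of machine-generated audio; on its own, though, it proves nothing, which is why commercial detectors rely on trained models over many such features.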