As technology advances, so do the techniques used by fraudsters. Phishing emails, fake links, Nigerian prince scams, and other schemes have tarnished the image of email's convenience for many people.
Now a newer, scarier threat has emerged: AI-generated voice calls used to scam people, including Gmail users. Surveys and statistics suggest it is no longer the email itself or a badly spelled link that catches victims out. That raises two questions: why do users remain complacent, relying solely on Gmail's spam filter? And what exactly is AI voice spam, and why should Gmail users care?
An AI voice scam is a fraud in which criminals use artificial intelligence to impersonate someone's voice, usually that of a business partner or family member. In most cases, the goal is to convince victims to disclose personal information, carry out a transaction, or click on a malicious link.
Imagine a scenario in which a family member or close friend calls, claiming to be in trouble and asking for urgent help, most likely money or account details. Would a loved one really call and ask for these things? In fact, the voice belongs to a scammer. Thanks to AI voice-cloning technology, criminals can convincingly reproduce a real person's voice.
Reports suggest Gmail users are especially vulnerable because their accounts hold large amounts of personal information that scammers and account hijackers can exploit. Ask yourself: do you ever email or forward phone numbers, passwords, or even voice recordings? A scammer needs only a short audio sample to build an AI voice clone that imitates you.
AI voice scams now feature prominently in the online fraudster's toolkit. As the Federal Trade Commission observed in September 2023, reported losses from AI-related fraud, including voice cloning, rose by 400% over the past two years. As with any new technology, these losses have grown alongside AI and its expanding applications, enabling scams that simply were not possible a few years ago.
Technology has reached the point where a computer can imitate the subtlest intonations of speech, including vocal quirks and even emotion, which makes it harder to tell whether the person on the line is real. It is genuinely difficult to distinguish a human being from an artificial voice that can precisely reproduce a person's speech patterns in their entirety.
As worrying as AI voice scams are, Gmail users can take steps to protect themselves. First and foremost, enable two-step verification and use strong, unique passwords; the sketch below shows why that second step matters.
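For readers curious about what two-step verification actually adds, here is a minimal, illustrative sketch in Python using the pyotp library. It shows how a time-based one-time password (TOTP), the mechanism behind most authenticator apps, is generated and checked. This is a conceptual demonstration under those assumptions, not Gmail's actual implementation, and the secret it generates is purely an example.

```python
# Conceptual sketch of time-based one-time passwords (TOTP),
# the mechanism behind most two-step verification apps.
# Requires: pip install pyotp
import pyotp

# When 2FA is enrolled, the account is assigned a random shared secret.
# (Hypothetical example value; never reuse or publish a real secret.)
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app derives a 6-digit code from the secret
# and the current 30-second time window.
code = totp.now()
print(f"One-time code for this 30-second window: {code}")

# The server, which holds the same secret, verifies the submitted code.
print("Code accepted:", totp.verify(code))
```

The point of the exercise: even if a scammer talks you into revealing your password, the one-time code changes every 30 seconds and is derived from a secret they do not have, so the stolen password alone cannot unlock the account.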
Should you ever receive a suspicious call from someone claiming to be a person you know, verify their identity through a different channel, such as a text message or a video call.
In short, AI voice scams are a new form of cybercrime made possible by advanced technology, allowing criminals to impersonate individuals with unsettling accuracy. Most importantly, Gmail users should remain alert and secure their accounts so they do not fall victim to these sophisticated scams.