Online romance scams reach new levels of deceit with artificial intelligence (AI), turning hopeful hearts into targets for sophisticated fraud.
According to the Federal Trade Commission, romance scams have caused losses of more than $1.14 billion, with an average loss of $2,000 per survivor, making them the costliest imposter scam of all.
Ahead of Valentine's Day, Dan Holmes, senior fraud and identity expert at Feedzai, a cloud platform for managing financial risk, warns that these AI-enhanced scams are not only becoming more common but also more convincing.

Widowed for 11 years, Debbie Fox had built a "robust and full" life as an avid traveler, cultural enthusiast, small business owner and dedicated community member.
Yet one area remained unfulfilled: her love life. Determined to change that, Fox joined an online dating platform, hoping to find companionship and genuine connection.
She soon met Russell, a man whose profile resonated with her passions.
"It felt very natural, very organic," Fox recalls. "I thought he was truly interested in my life, my day-to-day, and even my past. It was wonderful to feel seen and heard."
After chatting on the app for a few days, Fox and Russell moved their conversation to phone calls and video chats. Their daily, hour-long conversations created a strong bond. Fox believed she had finally found the one.
Just as their connection deepened, Russell dropped a bombshell: he was in trouble and needed money. "He asked for $58,000 in total," Fox explains. "He said, 'I'm really embarrassed. I'm not accustomed to needing help from anyone. I hope you or anyone else in your circle doesn't think poorly of me, but I'm really in a jam.'"
Initially, Fox declined. "I wasn't a woman of means, and I didn't have the wherewithal to help," she says. Soon, Russell returned with a more persuasive request, complete with alleged legal documentation: a loan agreement drafted by an attorney. Convinced by the paper trail and his apparent vulnerability, Fox eventually agreed.
It wasn't long before the truth emerged. Today, Fox knows that "Russell" isn't his real name. While she remains unsure which details of his persona were fabricated, such as whether he truly loved travel or other cultures, she now understands that he was running a sophisticated scam operation.
So, how can a criminal so convincingly create a fake identity?
Generative AI is transforming fraud by enabling criminals to automate, scale, and refine their attacks with unprecedented sophistication and by lowering barriers to entry. Fraudsters can use AI to develop highly convincing scams, deceiving even the savviest people and businesses.
As a result, the overall risk of an already significant fraud problem is escalating quickly. Deloitte forecasts a rapid acceleration in fraud losses as Gen AI becomes mainstream.
Here are some examples of how criminals are exploiting Gen AI for their gain:
Deepfake voice and video scams: AI-generated deepfake voices and videos are being used to impersonate executives, bank representatives, or family members, convincing victims to transfer money or reveal sensitive information.
Personalised phishing attacks: AI-generated emails, texts, and chat messages mimic genuine communication styles, making phishing scams more believable and removing the red flags banks previously told consumers to look out for.
Synthetic identities: Fraudsters use AI to generate realistic synthetic identities to open fraudulent accounts. These accounts can then be used to make purchases directly or launder fraudulent funds.
Automated scam calls: AI chatbots can conduct real-time scam calls, mimicking human conversation and adapting responses dynamically to manipulate victims.
A common misconception is that AI is a new tool, but in fact, banks have been using AI, particularly machine learning, to fight fraud for years. However, Gen AI and its associated threats are new.
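To make that concrete, the sketch below shows the kind of classical machine-learning fraud scoring banks have long relied on: a model trained on transaction features that flags risky payments. It is only an illustration built on assumptions, not any bank's actual system; the feature names, synthetic data, and choice of a random forest (via numpy and scikit-learn) are hypothetical.

```python
# Illustrative sketch of classical ML fraud scoring on transactions.
# All features, data, and thresholds here are invented for the example.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 5000

# Hypothetical transaction features: amount, hours since last transfer,
# whether the payee is new (0/1), and whether the device is new (0/1).
X = np.column_stack([
    rng.lognormal(mean=4.0, sigma=1.0, size=n),   # transaction amount
    rng.exponential(scale=24.0, size=n),          # hours since last transfer
    rng.integers(0, 2, size=n),                   # new payee flag
    rng.integers(0, 2, size=n),                   # new device flag
])

# Synthetic labels: in this toy data, large payments to a new payee from a
# new device are disproportionately fraudulent.
risk = 0.02 + 0.4 * (X[:, 0] > 150) * X[:, 2] * X[:, 3]
y = rng.random(n) < risk

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a suspicious-looking transfer and a routine one.
candidates = np.array([
    [5800.0, 0.5, 1, 1],   # large amount, new payee, new device
    [45.0, 12.0, 0, 0],    # small amount, familiar payee and device
])
print(model.predict_proba(candidates)[:, 1])  # estimated fraud probabilities
```

In practice such a model would sit in a real-time decision pipeline and draw on far richer behavioural signals; the point is simply that this style of supervised scoring predates Gen AI.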
To counteract Gen AI's negative effects, banks are exploring the potential of Gen AI to better protect consumers. Early adoption has focused on improving operational efficiency, allowing banks to streamline fraud management and redeploy resources to higher-value tasks.
In the long term, protecting people from Gen AI scams will depend on collaboration and shared intelligence between banks and the tech providers whose platforms often facilitate fraud, in pursuit of two common goals: enhancing consumer protection and safeguarding society as a whole.