Cybersecurity News Hub
Why AI Still Fails to Catch ‘Authorized’ Scams

AI Models can Flag Anomalies, but Still Struggle to Read Human Deception

Prajeet Nair (@prajeetspeaks), Rashmi Ramesh (rashmiramesh_) • November 6, 2025

Image: Shutterstock

Real-time payments settle in seconds – faster than fraud models can think. With most authorized push payment scam losses in Australia occurring over instant payment rails, artificial intelligence models built to spot unusual patterns often miss the one thing they can’t predict: human deception.


Australia’s APP scam losses in 2023 were estimated at $796 million and are projected to reach around $1.15 billion by 2028. Scammers are now using social engineering to persuade victims to make transfers that look perfectly normal to the AI, so the system doesn’t flag them as suspicious. “Traditional models are built to spot statistical outliers in payment behavior, but social engineering doesn’t always trigger those anomalies,” Dali Kaafar, founder and CEO of Apate.AI, told Information Security Media Group. “Scammers often coach victims to behave ‘normally,’ making the transaction slip through untouched.”

The core limitation of AI-driven fraud detection is that it can't detect intent. Scammers "prey on trust by deceiving individuals into willingly transferring funds to them." Current models are trained to identify external compromise – unauthorized access, stolen credentials and mule accounts – instances where a fraudster directly initiates the transaction. APP scams invert that logic: the customer authorizes the payment believing it's legitimate, and the algorithm sees nothing amiss.

AI systems deployed across Australian banks rely primarily on supervised learning architectures that process large volumes of transactional data such as amounts, frequency, device fingerprints, account age, payee history and velocity metrics. These features help models classify whether an event deviates from a user’s typical pattern.
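As a hypothetical sketch of the kind of feature-based scoring described above (the feature names, weights and data are illustrative, not any bank's actual model), a model that scores deviation from a user's typical pattern can miss a coached "normal-looking" transfer entirely:

```python
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Transaction:
    amount: float
    payee_seen_before: bool
    device_known: bool

def anomaly_score(txn: Transaction, amount_history: list[float]) -> float:
    """Score how far a transaction deviates from the user's typical pattern.

    Combines a z-score on the amount with flags for a new payee and an
    unrecognized device. Higher scores mean more anomalous.
    """
    mu, sigma = mean(amount_history), pstdev(amount_history) or 1.0
    score = abs(txn.amount - mu) / sigma
    score += 1.5 if not txn.payee_seen_before else 0.0
    score += 1.0 if not txn.device_known else 0.0
    return score

history = [120.0, 95.0, 110.0, 130.0, 105.0]

# A coached victim sends a normal-looking amount from their usual device:
# the only signal is the new payee, so the score stays modest.
scam = Transaction(amount=115.0, payee_seen_before=False, device_known=True)
# A classic account-takeover pattern scores far higher.
takeover = Transaction(amount=5000.0, payee_seen_before=False, device_known=False)

print(anomaly_score(scam, history) < anomaly_score(takeover, history))  # True
```

The gap between the two scores is the point: the APP scam sits just above the noise floor, while the compromise-style fraud these models were built for stands out by orders of magnitude.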

But this approach falters when deception happens before the actual money transfer. The fraud signal resides not in the transaction metadata, but in the pre-transaction interaction, which includes the phone call, text or chat where the victim is manipulated into approving the transaction.

Today’s fraud detection systems act after the fact – looking for suspicious patterns in transactions, not the human manipulation that leads to them. This means that machine-learning models can miss entire classes of fraud where the transaction itself conforms to expected behavior. A transfer to a new payee doesn’t look risky if the victim was persuaded that the account belongs to their child or a bank official.


With fraudsters using generative AI to create deepfakes, synthetic identities and automated scam scripts, banks must respond with equally adaptive defenses using AI, said Gabriel Steele, strategic advisor at digital identity provider Daon. This includes embedding deepfake and injection-attack detection within know-your-customer processes and integrating identity-risk intelligence across the transaction lifecycle, he told ISMG.

Researchers are experimenting with behavioral biometrics and conversational AI as pre-payment detection layers to overcome this blind spot. Some anti-scam technology companies operate voice and text bots that engage scammers directly to collect language, tone and pressure-tactic data.

“That gives us a much richer dataset to train AI models that understand scam patterns from the source, not just the aftermath,” said Kaafar. Signals such as hesitation, delay between keystrokes or sentiment shifts can help identify moments of uncertainty in a digital interaction.

Intent-profiling systems and advanced AI models look at the sequence and timing of actions to detect when behavior seems uncertain or abnormal. If the system notices that a user's digital body language – their usual confident clicking, scrolling or typing rhythm – has suddenly become hesitant or erratic, it may suggest they're being manipulated by a scammer.
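One hesitation signal mentioned above, delay between keystrokes, can be sketched as a simple ratio against the user's own baseline (the numbers and the 180 ms baseline are made up for illustration; real behavioral-biometric models use far richer features):

```python
from statistics import mean

def hesitation_score(delays_ms: list[float], baseline_ms: float) -> float:
    """Ratio of the session's mean inter-keystroke delay to the user's baseline.

    A ratio well above 1.0 suggests the usual typing rhythm has become
    hesitant -- one of the pre-payment signals behavioral biometrics track.
    """
    return mean(delays_ms) / baseline_ms

# Illustrative numbers only: a user who normally averages 180 ms between
# keystrokes suddenly shows long pauses while filling in a payee field.
baseline = 180.0
confident_session = [170.0, 185.0, 175.0, 190.0]
hesitant_session = [450.0, 900.0, 520.0, 1100.0]

print(round(hesitation_score(confident_session, baseline), 2))  # 1.0
print(round(hesitation_score(hesitant_session, baseline), 2))
```

In practice a production system would combine many such ratios (scroll speed, dwell time, sentiment shifts in chat) rather than rely on any single one.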

But even with improved modelling techniques, most fraud systems are constrained by data isolation. Each bank trains its algorithms on proprietary data, missing the broader context of scam networks.

Anurag Mohapatra, director of product marketing and fraud strategy at NICE Actimize, said this fragmentation directly limits detection accuracy. “Eighty-nine percent of confirmed fraud flows to beneficiaries that the sending bank has never seen before,” he told ISMG. “Without shared intelligence, each institution is blind to the fraud history visible elsewhere.”

Mohapatra said cross-institution data pooling can enable AI systems to spot mule networks up to 70 days earlier. But privacy and competition barriers make this difficult. Federated learning – where model parameters and not raw data are shared – offers a promising solution. It enables multiple banks to train on distributed datasets while keeping customer data local, improving model generalization without breaching confidentiality.
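The federated learning idea – share model parameters, never raw transactions – can be sketched in miniature (the one-feature linear model and toy data are assumptions for illustration; real systems federate far larger models):

```python
# Minimal federated-averaging (FedAvg-style) sketch: each "bank" runs a
# local gradient step on its own (feature, is_scam) examples, and a central
# server averages only the resulting parameters. Raw data never leaves a bank.

def local_update(weights, data, lr=0.1):
    """One gradient-descent step on a bank's local data for a linear score."""
    w, b = weights
    gw = gb = 0.0
    for x, y in data:
        err = (w * x + b) - y
        gw += err * x
        gb += err
    n = len(data)
    return (w - lr * gw / n, b - lr * gb / n)

def federated_average(weight_sets):
    """Central server averages parameters submitted by all banks."""
    n = len(weight_sets)
    return (sum(w for w, _ in weight_sets) / n,
            sum(b for _, b in weight_sets) / n)

# Each bank sees a different slice of (risk_feature, is_scam) examples.
bank_data = [
    [(0.1, 0.0), (0.9, 1.0)],
    [(0.2, 0.0), (0.8, 1.0)],
    [(0.0, 0.0), (1.0, 1.0)],
]

global_weights = (0.0, 0.0)
for _ in range(50):
    local = [local_update(global_weights, d) for d in bank_data]
    global_weights = federated_average(local)

w, b = global_weights
print(w * 0.95 + b > w * 0.05 + b)  # high-risk feature scores higher: True
```

The global model learns from all three banks' slices even though no bank ever shares a customer record – the confidentiality property the article describes.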

Mohapatra said AI’s role extends beyond detection to real-time behavioral intervention. Even when a system flags a likely scam, the harder task is convincing the customer to stop, he said. Contextual prompts, such as a warning that a payee has been linked to recent scam reports, can trigger hesitation and re-engage critical thinking, he said.

Modern fraud engines increasingly rely on reinforcement learning and champion-challenger frameworks to maintain performance. Trent Gunthorpe, general manager and head of Pacific at ACI Worldwide, said many leading banks now retrain models weekly using confirmed scam data, rather than quarterly. “This ensures models remain responsive to evolving threats,” he told ISMG.
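A champion-challenger refresh like the weekly retraining described can be reduced to a promotion rule (the models, metric and holdout data here are hypothetical stand-ins):

```python
# Hypothetical champion-challenger promotion logic: retrain a challenger on
# the latest confirmed scam labels, then promote it only if it beats the
# serving champion on a holdout set. Models here are plain callables.

def recall(model, holdout):
    """Fraction of confirmed scams in the holdout that the model flags."""
    scams = [x for x, is_scam in holdout if is_scam]
    return sum(model(x) for x in scams) / len(scams)

def weekly_refresh(champion, challenger, holdout, min_lift=0.0):
    """Return the model that should serve traffic next week."""
    if recall(challenger, holdout) > recall(champion, holdout) + min_lift:
        return challenger
    return champion

# Illustrative holdout of (risk_score, is_scam) pairs from confirmed reports.
holdout = [(0.9, True), (0.7, True), (0.4, True), (0.2, False)]
champion = lambda x: x > 0.8    # flags only the most extreme cases
challenger = lambda x: x > 0.5  # retrained on fresh scam data

best = weekly_refresh(champion, challenger, holdout)
print(best is challenger)  # True
```

Real deployments would score false-positive cost as well as recall before promoting, but the loop structure – evaluate, compare, swap – is the same.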

Gunthorpe described multi-model orchestration as a system in which different AI engines are assigned to detect specific types of fraud, such as account takeovers or social engineering scams, with their outputs combined in real time. “Authorized doesn’t always mean willing,” he said. By combining behavioral analytics, device intelligence and linguistic signals, systems can begin to tell the difference.

Detection alone isn’t enough. Regulators and banks increasingly demand explainable AI frameworks to justify model decisions. “In high-stakes areas like scam prevention, we need more than black-box alerts,” Kaafar said.

Using methods such as Shapley Additive Explanations, which show how each input factor contributes to a model’s decision, and counterfactual reasoning, which explores what small changes would have flipped that outcome, banks can understand why an AI flagged a payment as risky. These techniques make AI reasoning visible, helping fraud teams adjust model thresholds, justify actions to regulators, and avoid biased or excessive interventions.
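For a linear scoring model, Shapley contributions have an exact closed form, phi_i = w_i * (x_i - baseline_i), which makes the explainability idea concrete (the feature names, weights and baseline below are invented for illustration):

```python
# Toy SHAP-style attribution for a linear fraud score. Each feature's
# contribution is its weight times its deviation from the baseline
# (average) input, and the contributions sum exactly to the score gap.

weights = {"amount_zscore": 1.2, "new_payee": 1.5, "odd_hour": 0.4}
baseline = {"amount_zscore": 0.0, "new_payee": 0.1, "odd_hour": 0.2}

def score(x):
    """Linear risk score over the named features."""
    return sum(weights[f] * x[f] for f in weights)

def shap_linear(x):
    """Per-feature contributions relative to the baseline input."""
    return {f: weights[f] * (x[f] - baseline[f]) for f in weights}

flagged = {"amount_zscore": 0.3, "new_payee": 1.0, "odd_hour": 1.0}
contrib = shap_linear(flagged)

# An analyst can report exactly why the payment was flagged:
top = max(contrib, key=contrib.get)
print(top)  # new_payee
```

Because the per-feature contributions reconstruct the full score gap, a fraud team can justify the alert to a regulator feature by feature instead of pointing at a black box.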

“Timing and deception can be modelled reasonably well with enough history,” Kaafar said, “but understanding the intent behind an action, especially when the victims believe they’re helping someone, remains deeply challenging.”





© 2025 All rights reserved by cyberinchief.com
