AI Models Can Flag Anomalies but Still Struggle to Read Human Deception

Real-time payments settle in seconds – faster than fraud models can think. With most authorized push payment (APP) scam losses in Australia occurring over instant payment rails, artificial intelligence models built to spot unusual patterns often miss the one thing they can't predict: human deception.
Australia’s APP scam losses in 2023 were estimated at $796 million and are projected to reach around $1.15 billion by 2028. Scammers are now using social engineering to persuade victims to make transfers that look perfectly normal to the AI, so the system doesn’t flag them as suspicious. “Traditional models are built to spot statistical outliers in payment behavior, but social engineering doesn’t always trigger those anomalies,” Dali Kaafar, founder and CEO of Apate.AI, told Information Security Media Group. “Scammers often coach victims to behave ‘normally,’ making the transaction slip through untouched.”
The core limitation of AI-driven fraud detection is that it can't detect intent; scammers prey on trust, deceiving individuals into willingly transferring funds to them. Current models are trained to identify external compromise – unauthorized access, stolen credentials and mule accounts – instances where a fraudster directly initiates the transaction. APP scams invert that logic: the customer authorizes the payment believing it's legitimate, and the algorithm sees nothing amiss.
AI systems deployed across Australian banks rely primarily on supervised learning architectures that process large volumes of transactional data such as amounts, frequency, device fingerprints, account age, payee history and velocity metrics. These features help models classify whether an event deviates from a user’s typical pattern.
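As a rough illustration of that feature-based approach, the sketch below trains a gradient-boosted classifier on synthetic transactions. The feature set, data and figures are invented for demonstration, not any bank's production pipeline.

```python
# Minimal sketch of a supervised fraud classifier over transactional
# features. All features, data and labels are synthetic assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical per-transaction features: amount, payments in the past
# hour (velocity), account age in days, new-payee flag and a
# device-fingerprint mismatch flag.
X = np.column_stack([
    rng.lognormal(4, 1, n),     # amount
    rng.poisson(1.5, n),        # velocity
    rng.integers(1, 3650, n),   # account age, days
    rng.integers(0, 2, n),      # new payee
    rng.integers(0, 2, n),      # device mismatch
])

# Synthetic labels: fraud correlates with new payees, device mismatches
# and high velocity, i.e. the statistical outliers such models catch well.
risk = 0.02 + 0.10 * X[:, 3] + 0.15 * X[:, 4] + 0.03 * X[:, 1]
y = rng.random(n) < risk

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# A coached APP-scam payment: known device, modest amount and velocity.
# It sits inside the "normal" region, so the score stays low.
coached_scam = [[250.0, 1, 900, 1, 0]]
print("fraud probability:", model.predict_proba(coached_scam)[0, 1])
```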
But this approach falters when deception happens before the actual money transfer. The fraud signal resides not in the transaction metadata, but in the pre-transaction interaction, which includes the phone call, text or chat where the victim is manipulated into approving the transaction.
Today’s fraud detection systems act after the fact – looking for suspicious patterns in transactions, not the human manipulation that leads to them. This means that machine-learning models can miss entire classes of fraud where the transaction itself conforms to expected behavior. A transfer to a new payee doesn’t look risky if the victim was persuaded that the account belongs to their child or a bank official.
With fraudsters using generative AI to create deepfakes, synthetic identities and automated scam scripts, banks must respond with equally adaptive defenses using AI, said Gabriel Steele, strategic advisor at digital identity provider Daon. This includes embedding deepfake and injection-attack detection within know-your-customer processes and integrating identity-risk intelligence across the transaction lifecycle, he told ISMG.
To overcome this blind spot, researchers are experimenting with behavioral biometrics and conversational AI as pre-payment detection layers. Some scam-prevention companies operate voice and text bots that engage scammers directly to collect language, tone and pressure-tactic data.
“That gives us a much richer dataset to train AI models that understand scam patterns from the source, not just the aftermath,” said Kaafar. Signals such as hesitation, delay between keystrokes or sentiment shifts can help identify moments of uncertainty in a digital interaction.
Intent-profiling systems look at the sequence and timing of actions to detect when behavior seems uncertain or abnormal. If the system notices that a user's digital body language – their usual confident clicking, scrolling or typing rhythm – suddenly becomes hesitant or erratic, it may suggest they're being manipulated by a scammer.
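A minimal sketch of how one such signal might be computed, assuming a simple z-score over inter-keystroke delays against a per-user baseline (the delays, threshold and routing step are invented for illustration):

```python
# Illustrative "digital body language" check: compare a session's
# inter-keystroke delays against the user's baseline and flag sessions
# that turn unusually hesitant. Threshold and data are assumptions.
from statistics import mean, stdev

def hesitation_score(baseline_delays_ms, session_delays_ms):
    """Standard deviations by which the session's average delay
    exceeds the user's historical baseline."""
    mu = mean(baseline_delays_ms)
    sigma = stdev(baseline_delays_ms) or 1.0   # guard against zero spread
    return (mean(session_delays_ms) - mu) / sigma

# Baseline: the user's usual confident typing and clicking rhythm.
baseline = [110, 95, 130, 120, 105, 115, 100, 125]

# Current session: long pauses, as if the user is being coached over
# the phone between each step.
session = [900, 1400, 700, 1200, 1600]

z = hesitation_score(baseline, session)
if z > 3.0:   # assumed alerting threshold
    print(f"hesitant session (z={z:.1f}): route to scam intervention")
```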
But even with improved modeling techniques, most fraud systems are constrained by data isolation. Each bank trains its algorithms on proprietary data, missing the broader context of scam networks.
Anurag Mohapatra, director of product marketing and fraud strategy at NICE Actimize, said this fragmentation directly limits detection accuracy. “Eighty-nine percent of confirmed fraud flows to beneficiaries that the sending bank has never seen before,” he told ISMG. “Without shared intelligence, each institution is blind to the fraud history visible elsewhere.”
Mohapatra said cross-institution data pooling can enable AI systems to spot mule networks up to 70 days earlier. But privacy and competition barriers make this difficult. Federated learning – where model parameters and not raw data are shared – offers a promising solution. It enables multiple banks to train on distributed datasets while keeping customer data local, improving model generalization without breaching confidentiality.
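Under the assumption of a plain logistic-regression scorer, a toy federated-averaging round looks roughly like this: each bank fits on its own data, and only the parameters are pooled.

```python
# Toy federated averaging: each bank trains locally and only model
# parameters are pooled, so raw customer records never leave the
# institution. Plain-numpy logistic regression for brevity; real
# deployments would add secure aggregation and differential privacy.
import numpy as np

def local_update(weights, X, y, lr=0.1, steps=50):
    """One bank's local logistic-regression training pass."""
    w = weights.copy()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))        # predicted fraud probability
        w -= lr * X.T @ (p - y) / len(y)    # gradient descent step
    return w

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])

# Three banks with disjoint local datasets they cannot share.
banks = []
for _ in range(3):
    X = rng.normal(size=(200, 3))
    y = (rng.random(200) < 1 / (1 + np.exp(-X @ true_w))).astype(float)
    banks.append((X, y))

global_w = np.zeros(3)
for _round in range(10):
    # Each bank computes an update on its local data only...
    local_ws = [local_update(global_w, X, y) for X, y in banks]
    # ...and only the parameters are averaged centrally.
    global_w = np.mean(local_ws, axis=0)

print("shared model weights:", global_w.round(2))
```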
AI's role extends beyond detection to real-time behavioral intervention, Mohapatra said. Even when a system flags a likely scam, the harder task is convincing the customer to stop. Contextual prompts, such as a warning that a payee has been linked to recent scam reports, can trigger hesitation and re-engage critical thinking, he said.
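A minimal version of such an intervention could be a payment-time lookup against pooled scam-report intelligence; the payee identifier, data structure and wording below are hypothetical:

```python
# Hypothetical contextual-intervention check: before releasing a
# flagged payment, look the payee up in pooled scam-report intel and
# surface a specific, friction-adding warning. Identifier, data
# structure and wording are all invented for illustration.
SCAM_REPORTED_PAYEES = {"payee-4721": 4}   # payee id -> recent report count

def intervention_prompt(payee_id):
    reports = SCAM_REPORTED_PAYEES.get(payee_id, 0)
    if reports:
        return (f"This account appears in {reports} recent scam reports. "
                "Were you asked to make this payment by phone, email or "
                "text? If so, stop and call your bank first.")
    return None   # no prompt: release the payment normally

print(intervention_prompt("payee-4721"))
```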
Modern fraud engines increasingly rely on reinforcement learning and champion-challenger frameworks to maintain performance. Trent Gunthorpe, general manager and head of Pacific at ACI Worldwide, said many leading banks now retrain models weekly using confirmed scam data, rather than quarterly. “This ensures models remain responsive to evolving threats,” he told ISMG.
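A champion-challenger loop of the kind Gunthorpe describes might be sketched as follows, with synthetic weekly "confirmed scam" labels; the models, data and AUC metric are illustrative assumptions:

```python
# Sketch of a champion-challenger cycle: a freshly retrained challenger
# replaces the live champion only if it scores better on the latest
# confirmed-scam labels. Data and metric are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_week(n=1000):
    """Synthetic week of transactions with confirmed scam labels."""
    X = rng.normal(size=(n, 4))
    y = (rng.random(n) < 1 / (1 + np.exp(-X @ [1.5, -0.5, 0.8, 0.0]))).astype(int)
    return X, y

champion = LogisticRegression().fit(*make_week())

for week in range(1, 5):
    challenger = LogisticRegression().fit(*make_week())   # weekly retrain

    X_hold, y_hold = make_week(300)                       # held-out slice
    champ_auc = roc_auc_score(y_hold, champion.predict_proba(X_hold)[:, 1])
    chall_auc = roc_auc_score(y_hold, challenger.predict_proba(X_hold)[:, 1])

    if chall_auc > champ_auc:                             # promote the winner
        champion = challenger
    print(f"week {week}: champion AUC {max(champ_auc, chall_auc):.3f}")
```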
Gunthorpe described multi-model orchestration as a system in which different AI engines are assigned to detect specific types of fraud, such as account takeovers or social engineering scams, with their outputs combined in real time. “Authorized doesn’t always mean willing,” he said. By combining behavioral analytics, device intelligence and linguistic signals, systems can begin to tell the difference.
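One hedged reading of that pattern: specialist scorers run against each payment and an orchestrator fuses their outputs in real time. The stub detectors, weights and phrase check below are invented for illustration:

```python
# Sketch of multi-model orchestration: specialist detectors each score
# a payment and a weighted fusion combines them. Detector stubs,
# weights and the phrase check are invented for illustration.
def account_takeover_score(event):
    return 0.9 if event["device_mismatch"] else 0.1

def social_engineering_score(event):
    # Behavioral plus linguistic signals: hesitation in the session,
    # scam-typical wording in any available call or chat transcript.
    score = 0.0
    if event["hesitation_z"] > 3.0:
        score += 0.5
    if "safe account" in event.get("transcript", "").lower():
        score += 0.4
    return min(score, 1.0)

DETECTORS = {
    "account_takeover": (account_takeover_score, 0.5),
    "social_engineering": (social_engineering_score, 0.5),
}

def orchestrate(event):
    """Fuse specialist scores into a single real-time risk score."""
    return sum(weight * fn(event) for fn, weight in DETECTORS.values())

# An authorized payment from the right device, but a hesitant user who
# was told to move money to a "safe account": authorized, not willing.
event = {"device_mismatch": False, "hesitation_z": 4.2,
         "transcript": "Bank security asked me to move it to a safe account"}
print("combined risk:", orchestrate(event))
```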
Detection alone isn’t enough. Regulators and banks increasingly demand explainable AI frameworks to justify model decisions. “In high-stakes areas like scam prevention, we need more than black-box alerts,” Kaafar said.
Using methods such as Shapley Additive Explanations, or SHAP, which show how each input factor contributes to a model's decision, and counterfactual reasoning, which explores what small changes would have flipped that outcome, banks can understand why an AI flagged a payment as risky. These techniques make AI reasoning visible, helping fraud teams adjust model thresholds, justify actions to regulators and avoid biased or excessive interventions.
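For instance, with the open-source shap library and a toy model, per-feature attributions and a brute-force counterfactual probe might look like the following; the features, data and the "typical value" of zero are assumptions:

```python
# Sketch of explainability for a flagged payment: SHAP values show how
# each feature pushed the score; a counterfactual probe asks which
# single change would have cleared the alert. Illustrative only.
import numpy as np
import shap                                   # pip install shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
features = ["amount", "velocity", "new_payee", "device_mismatch"]
X = rng.normal(size=(2000, 4))
y = (X[:, 2] + X[:, 3] + rng.normal(scale=0.5, size=2000) > 1).astype(int)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

flagged = np.array([[0.2, 0.1, 0.8, 0.9]])    # the payment that alerted

# Per-feature contributions (in log-odds) to this specific decision.
contribs = shap.TreeExplainer(model).shap_values(flagged)
for name, c in zip(features, np.ravel(contribs)):
    print(f"{name:16s} pushed the risk score by {c:+.3f}")

# Counterfactual: would resetting one feature to a typical value
# (assumed to be 0 here) have flipped the outcome?
for i, name in enumerate(features):
    probe = flagged.copy()
    probe[0, i] = 0.0
    if model.predict(probe)[0] == 0:
        print(f"changing '{name}' alone would have cleared the alert")
```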
“Timing and deception can be modelled reasonably well with enough history,” Kaafar said, “but understanding the intent behind an action, especially when the victims believe they’re helping someone, remains deeply challenging.”