The promise of Artificial Intelligence in financial compliance is seductive: instant answers, zero marginal cost, and the end of manual drudgery. But as a recent high-profile case in Australia demonstrates, when speed trumps scrutiny, the cost is far higher than the initial savings.
Deloitte recently agreed to partially refund the Australian government for a report that cost over $440,000. The reason? The report, generated with the assistance of AI, was riddled with “hallucinations” that went far beyond simple errors.
The AI didn't just misinterpret data; it invented it. The report cited research papers that did not exist, attributed findings to "fake professors," and even fabricated details regarding a court judgment (Deanna Amato v Commonwealth). It referenced specific paragraphs and quotes from the judge that were never written.
For Financial Institutions (FIs) and Payment Platforms, this story is more than a headline; it is a warning. In the high-stakes world of Third-Party Risk Management (TPRM) and Fintech Due Diligence, a hallucination isn't just embarrassing; it's a regulatory violation waiting to happen. If an AI can invent a court ruling, it can certainly invent a compliant AML policy where none exists.
The allure of AI is its ability to process vast amounts of data instantly. In risk assessment, where compliance teams are buried under mountains of merchant documentation and KYB data, the temptation to let an algorithm "handle it" is real.
However, the Deloitte incident highlights a critical flaw in pure automation: AI lacks judgment.
AI models are probabilistic, not analytical. They predict the next likely word; they do not understand the implication of a missing AML policy or the nuance of a complex ownership structure. When a function as critical as risk assessment is left solely to a "Black Box," you invite two major failures: confident fabrications that no one catches, and genuine red flags that no one reviews.
Inefficient, slow, manual processes are the enemy of growth. But unmonitored automation is the enemy of compliance.
At Across, we believe the future of risk assessment isn't about choosing between human expertise and AI speed. It’s about integrating them.
We differ fundamentally from basic document-sharing tools or fully automated "risk scoring" bots. Our value proposition is built on a Human-in-the-Loop architecture that ensures accuracy without sacrificing efficiency.
We utilize advanced technology to execute well-defined Standard Operating Procedures (SOPs). Our platform aggregates data, structures the assessment framework, and handles the heavy lifting of data collection. This is how we reduce the assessment timeline from the industry average of 3-6 months down to just 2 weeks.
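To make the human-in-the-loop idea concrete, here is a minimal sketch (purely illustrative; the class and function names are hypothetical and do not describe Across's actual codebase): automation collects and structures findings, but nothing enters the final report until an analyst explicitly signs off.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One AI-extracted claim awaiting human validation (hypothetical model)."""
    claim: str
    source_doc: str
    analyst_approved: bool = False

@dataclass
class Assessment:
    merchant: str
    findings: list = field(default_factory=list)

    def add_ai_finding(self, claim: str, source_doc: str) -> None:
        # Automation aggregates and structures the data; nothing is final yet.
        self.findings.append(Finding(claim, source_doc))

    def analyst_review(self, approve) -> None:
        # Human-in-the-loop gate: every finding needs explicit sign-off.
        for f in self.findings:
            f.analyst_approved = approve(f)

    def report(self) -> list:
        # Only analyst-approved findings ever reach the final report.
        return [f.claim for f in self.findings if f.analyst_approved]

a = Assessment("ExampleCo")
a.add_ai_finding("AML policy on file", "aml_policy.pdf")
a.add_ai_finding("Cited case: Foo v Bar (no source)", "summary.txt")
# The analyst rejects anything that lacks a verifiable source.
a.analyst_review(lambda f: "no source" not in f.claim)
print(a.report())  # ['AML policy on file']
```

The design choice being illustrated: the AI's output is treated as a queue of proposals, never as the report itself, so a fabricated citation dies in review instead of reaching the client.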
This is our "Anti-Feature"—we are not just software. We act as consultants.
Once the data is aggregated, Across deploys a team of expert analysts to review and QA the assessment.

Our analysts validate the findings against our quantitative framework:
An AI might see a document labeled "AML Policy" and check a box. An Across analyst reads the policy to ensure it actually defines key thresholds relevant to your risk appetite.
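The difference between checking a box and reading the policy can be shown with a toy example (all field names and threshold keys here are invented for illustration): a label-only check passes any document titled "AML Policy," while a substantive check fails it unless the key thresholds are actually defined.

```python
def label_check(doc: dict) -> bool:
    # What naive automation might do: trust the label on the document.
    return doc.get("title") == "AML Policy"

# Hypothetical thresholds a reviewer's risk appetite might require.
REQUIRED_THRESHOLDS = {"transaction_reporting_limit", "cdd_trigger_amount"}

def substantive_check(doc: dict) -> bool:
    # What an analyst does: confirm the policy actually defines key thresholds.
    defined = {k for k, v in doc.get("thresholds", {}).items() if v is not None}
    return label_check(doc) and REQUIRED_THRESHOLDS <= defined

empty_policy = {"title": "AML Policy", "thresholds": {}}
real_policy = {"title": "AML Policy",
               "thresholds": {"transaction_reporting_limit": 10_000,
                              "cdd_trigger_amount": 1_000}}

print(label_check(empty_policy))        # True  (box checked, risk missed)
print(substantive_check(empty_policy))  # False (the gap is caught)
print(substantive_check(real_policy))   # True
```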
The industry has long believed that deep, comprehensive risk reports require hiring an army of internal staff. The alternative was accepting the shallow, often inaccurate output of automated tools.
Across proves there is a third way. By combining standardized frameworks with human oversight, we deliver the depth of an expert-built report at the speed of automation.
The lesson from the $440k error is clear: Technology is an accelerator, not a replacement for expertise. At Across, we use AI to move fast, but we use experts to ensure we never move in the wrong direction.
© 2026 Across Technology Inc. All Rights Reserved