The Pattern #134
AI in finance - Can we resolve the black box problem?

Srijan Nagar
·
May 8, 2025

In 2018, Federal Reserve Governor Lael Brainard made a concerning admission about artificial intelligence in finance: "Many AI tools and models develop analysis, arrive at conclusions, or recommend decisions that may be hard to explain... it is possible that no one, including the algorithm's creators, can easily explain why the model generated the results that it did." This statement wasn't alarmist; it was prophetic.
Seven years later, we're witnessing this reality unfold. Even Dario Amodei, CEO of Anthropic (the company behind Claude), recently confessed: “We do not understand how our own AI creations work.” This stunning admission from one of AI's leading architects should alarm anyone relying on these systems for critical financial decisions.
The Black Box Problem

Picture Credit: Black box AI models versus interpretable and explainable AI...
The "black box" problem refers to AI systems that make decisions through processes that remain opaque even to their creators. The dilemma is straightforward: if we cannot explain how AI reaches its conclusions, how can we possibly trust it?
This isn't just a theoretical concern. IBM's Watson for Oncology, once heralded as revolutionary for cancer treatment, failed spectacularly, largely because doctors couldn't understand or verify its recommendations. Now imagine similar failures in lending, credit scoring, fraud detection, or investment management.
Financial Compliance and the AI Crossroads
For fintech companies, the stakes are particularly high. Unlike big tech firms that can operate with experimental algorithms, financial institutions face strict regulatory scrutiny. Just as SEBI's regulations require transparency in algorithmic trading systems, the RBI's guidelines on digital lending emphasize that AI systems must provide clear justifications for credit decisions. These requirements aren't just bureaucratic hurdles; they are essential safeguards in a sector where algorithmic failures can trigger market crashes or discriminatory lending practices.
What Makes Financial Services Different?
Finance is built on trust and clear cause-and-effect relationships. When a loan is denied, customers need a real explanation. When transactions are flagged as suspicious, banks must justify these decisions to regulators.

Picture Credit: The Truth About AI In Finance: Blind Faith in Black Boxes
Here's the key difference: humans can explain their thought process by pointing to specific reasons for a decision. AI, by contrast, draws general conclusions from patterns in its training data. It doesn't establish the clear cause-and-effect relationships our financial regulations require; it simply says, “This looks like past patterns that led to outcome X.” This fundamental difference and lack of interpretability creates an inherent conflict with financial regulation, which is built on the principle of causation: establishing a logical relationship between cause and consequence in each individual case.
The Regulatory Response
Regulators worldwide are taking notice. In India, the Reserve Bank of India (RBI) has emphasized that AI applications must be transparent and explainable. The EU's comprehensive AI Act, which becomes applicable in 2026, categorizes credit scoring and similar financial applications as "high-risk" systems requiring human oversight.
The challenge is that human oversight, often touted as the solution, can become paradoxical. In many cases, human intervention defeats the very purpose of using AI. As Finance Watch notes, "The human in the room has found its limit" when facing systems processing millions of transactions daily.
Explainability Without Sacrificing Accuracy
It's a common misconception that we must choose between powerful AI and explainable AI. This is simply not true. Take lending decisions, for example. When a bank uses AI to evaluate loan applications, customers deserve to know why they were approved or denied. Was it their credit history? Income level? Debt-to-income ratio?
The good news is that hybrid models can maintain high accuracy while providing clear explanations. These approaches combine simpler, interpretable algorithms with more complex ones, giving us the best of both worlds. A slightly less accurate model that can clearly explain its decisions is far better than a black box that leaves your business exposed to regulatory scrutiny and customer distrust.
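To make that concrete, here's a minimal sketch of the interpretable half of such a hybrid: a plain logistic regression scorecard that returns "reason codes" alongside each decision. The feature names, training rows, and numbers below are entirely hypothetical, purely for illustration; a production system would pair something like this with a more powerful model and rigorous validation.

```python
# A minimal sketch of "reason codes" from an interpretable credit model.
# All feature names, training rows, and labels below are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

FEATURES = ["credit_history_years", "monthly_income", "debt_to_income"]

# Toy training data: six past applicants (1 = repaid, 0 = defaulted).
X_train = np.array([[8, 12000, 0.25], [1, 4500, 0.60], [5, 9000, 0.35],
                    [0, 3000, 0.70], [10, 15000, 0.20], [2, 5000, 0.55]])
y_train = np.array([1, 0, 1, 0, 1, 0])

scaler = StandardScaler().fit(X_train)
model = LogisticRegression().fit(scaler.transform(X_train), y_train)

def explain(applicant: np.ndarray) -> None:
    """Print approval probability plus each feature's signed contribution."""
    z = scaler.transform(applicant.reshape(1, -1))[0]
    # Coefficient x standardized value = that feature's log-odds contribution
    # relative to an "average" applicant in the training data.
    contributions = model.coef_[0] * z
    prob = model.predict_proba(z.reshape(1, -1))[0, 1]
    print(f"Approval probability: {prob:.2f}")
    for name, c in sorted(zip(FEATURES, contributions), key=lambda t: t[1]):
        print(f"  {name}: {c:+.3f} log-odds")

explain(np.array([2, 5500, 0.50]))  # a borderline applicant
```

Each contribution is simply the feature's coefficient times its standardized value, so a negative number reads naturally as "this factor pushed the application toward denial." That one-line auditability is exactly what a black-box model can't offer out of the box.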
The Path Forward: Beyond Explainable AI
While Explainable AI (XAI) is promising, merely explaining an AI's reasoning isn't enough. If trained on flawed data, an AI system might explain its reasoning perfectly while still producing biased or incorrect outcomes.
The solution isn't abandoning AI in finance but recognizing its limitations. Here's what responsible FinTech companies should consider:
Appropriate Application: AI is not required for every financial decision. Reserve it for areas where traditional methods are genuinely insufficient.
Data Governance: An AI is only as good as its data. Implementing rigorous data governance processes is essential.
Regulatory Compliance by Design: Build systems with regulation in mind from the start, not as an afterthought.
Maintaining Human Judgment: Use AI as decision support rather than decision replacement, especially for high-stakes financial decisions.
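As one illustration of that last point, here's a minimal sketch of human-in-the-loop routing: the model decides automatically only when it's confident, and everything else is escalated to a human reviewer. The threshold value, field names, and queue mechanism are hypothetical assumptions, not a production design.

```python
# A minimal sketch of human-in-the-loop routing for credit decisions.
# The threshold value, field names, and queue below are hypothetical.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # assumed policy value, set by risk/compliance

@dataclass
class Decision:
    applicant_id: str
    approve: bool
    confidence: float
    decided_by: str  # "model" or "human_review"

def route(applicant_id: str, approve: bool, confidence: float,
          review_queue: list) -> Decision:
    """Let the model decide only when confident; otherwise escalate."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(applicant_id, approve, confidence, "model")
    review_queue.append(applicant_id)  # a human makes the final call
    return Decision(applicant_id, approve, confidence, "human_review")

queue: list = []
print(route("A-1042", approve=True, confidence=0.62, review_queue=queue))
print(f"Escalated for human review: {queue}")
```

The design choice is that AI handles the routine volume while humans retain authority over the ambiguous, high-stakes cases, which keeps oversight meaningful instead of rubber-stamping millions of decisions.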
The Bottom Line
The financial industry stands at a crossroads. While AI promises enormous efficiency gains and increased profits, these cannot come at the expense of accountability, transparency, and fairness. AI is here to stay... the job now is to harness its benefits while minimizing its harms.
The FinTech companies that will lead aren't those racing to adopt AI the fastest, but those implementing it most responsibly, recognizing that in finance, explaining "how" and "why" isn't just good practice; it's essential to maintaining the trust that the entire system depends on.
Interested in delving deeper? Here are a few interesting reads for the fortnight:
AI Transparency in Finance - Understanding the Black Box - Emerj Artificial Intelligence Research
Pitfall of Black Box AI at Banks: Explaining Your Models to Regulators
Case Study 20: The $4 Billion AI Failure of IBM Watson for Oncology - Henrico Dolfing
Until next time!
Cheers,
Srijan Nagar
FinBox
All opinions expressed are my own and do not necessarily reflect the views of FinBox or its promoters.