Indian Banks Are Betting Big on AI Lending Models—But Governance Lags Behind

In the last eighteen months, every major Indian bank has either deployed or announced plans to deploy AI-driven credit underwriting models. SBI, HDFC Bank, ICICI Bank, and Axis Bank have all integrated machine learning into retail lending decisions. Several mid-tier banks, including Federal Bank and IndusInd, are running pilot programs for MSME lending powered by alternative data scoring.

The pitch is straightforward. Traditional credit scoring relies on a limited set of variables—income documentation, CIBIL score, existing debt, employment history. AI models can ingest hundreds of additional data points: UPI transaction patterns, GST filing regularity, utility payment history, even the consistency of a business’s digital footprint.

The results, at least initially, are impressive. Banks using AI underwriting report 30-40% faster loan processing times and, according to early portfolio data, marginally lower default rates in the first 12 months. But there’s a growing gap between the pace of deployment and the maturity of governance frameworks around these models.

What the Models Actually Do

The typical AI lending model deployed by Indian banks isn’t a black box making autonomous decisions—at least not yet. Most implementations use a hybrid approach where the AI model generates a risk score or recommendation, and a human underwriter makes the final call.

But the influence of the AI score on that human decision is substantial. When a model flags an applicant as high-risk, underwriters rarely override it. When the model gives a green light, the human review becomes cursory. In practice, the model is making the decision even when a human technically signs off.
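The hybrid pattern described above can be sketched as a simple gating function: the model's score determines how much scrutiny an application actually receives before the human sign-off. Everything here is hypothetical—the thresholds, route names, and score scale are invented for illustration, not drawn from any bank's actual system.

```python
# Hypothetical sketch of the hybrid flow: the model scores, a human
# signs off, but the score determines how much scrutiny the application
# actually receives. All thresholds and labels are illustrative.

def route_application(model_score: float) -> str:
    """Map a model risk score (0 = safest, 1 = riskiest) to a review path."""
    if model_score < 0.2:
        return "fast-track"       # green light: human review is cursory
    if model_score < 0.6:
        return "standard-review"  # full underwriter assessment
    return "flag-high-risk"       # adverse flag: rarely overridden

print(route_application(0.75))
```

The point of the sketch is that once routing is score-driven, the human-in-the-loop exists mostly at the margins: the thresholds, not the underwriter, decide who gets real scrutiny.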

The data inputs vary by bank and loan product. For retail personal loans, banks are primarily using bureau data enhanced with UPI transaction analysis and bank statement parsing. For MSME lending, the inputs are broader: GST returns, trade receivable data from TReDS platforms, e-commerce seller metrics, and in some cases social media presence and web traffic data.

This last category—social and digital footprint data—is where the governance questions get sharp.

RBI’s Stance Is Tightening

The RBI has been cautiously permissive about AI in lending. The 2023 guidelines on Digital Lending established that the ultimate lending decision must rest with the regulated entity, not the technology provider. The 2024 draft framework on AI/ML in financial services went further, requiring model explainability for customer-facing decisions.

In January 2026, the RBI issued a circular requiring banks to maintain a model risk management framework for all AI-driven credit decisions. The key requirements: documented model validation processes, regular bias testing, and the ability to explain to customers why a loan was rejected in terms they can understand.

That last requirement—explainability—is the hardest for banks to operationalise. Many of the ML models in use are ensemble methods or neural networks where the relationship between inputs and outputs isn’t linearly traceable. Telling a customer “our model assessed 247 variables and determined your risk score was below threshold” doesn’t satisfy the spirit of the RBI’s explainability mandate.
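One common way to approach an explainability requirement like this is to attribute the score to its inputs and translate the largest adverse contributions into plain-language reason codes. The sketch below uses a hypothetical linear scorecard—the weights, baselines, and reason phrasings are all invented—purely to show the mechanism; ensemble and neural models would need a surrogate model or Shapley-style attribution instead.

```python
# Hypothetical linear scorecard: each feature's contribution is
# weight * (applicant value - portfolio baseline). The most negative
# contributions become customer-facing reason codes. All numbers invented.

WEIGHTS = {"bureau_score": 0.004, "upi_txn_regularity": 0.8, "debt_to_income": -1.5}
BASELINE = {"bureau_score": 720, "upi_txn_regularity": 0.7, "debt_to_income": 0.35}
REASONS = {
    "bureau_score": "Credit bureau score below the level we typically approve",
    "upi_txn_regularity": "Irregular transaction history in linked accounts",
    "debt_to_income": "Existing debt is high relative to declared income",
}

def adverse_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Return plain-language reasons for the strongest negative contributions."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS
    }
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [REASONS[f] for f in worst if contributions[f] < 0]

applicant = {"bureau_score": 650, "upi_txn_regularity": 0.75, "debt_to_income": 0.6}
print(adverse_reasons(applicant))
```

For a genuinely linear model this attribution is exact; the hard part the RBI mandate exposes is that for the ensemble and neural models banks actually deploy, any such reason code is an approximation, and someone has to stand behind its accuracy.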

Specialists in enterprise AI strategy, such as the team at Team400, have observed that model governance is where most organisations struggle. The problem is rarely technical capability; it is that governance demands cross-functional collaboration between data science, legal, compliance, and business teams that most banks aren't structured to support.

The Bias Problem Nobody Wants to Talk About

AI models learn from historical data. In the Indian banking context, historical lending data reflects decades of systemic biases. Certain geographies, communities, and business types have been historically underserved by formal credit—not because they’re inherently riskier, but because they lacked access to the formal financial infrastructure that generates creditworthy data.

When you train a model on data that reflects these patterns, the model can perpetuate them. A small business owner in a Tier 3 city with a thin credit bureau file but a healthy UPI transaction history might still get flagged as high-risk because the model learned that thin-file applicants from similar geographies have higher default rates.

Some banks are actively working on bias detection and correction. HDFC Bank’s data science team has published research on fairness-aware lending algorithms. SBI has hired specialists focused on ensuring that financial inclusion gains from Jan Dhan and UPI aren’t undermined by algorithmic exclusion.

But these efforts are nascent. There’s no industry-wide standard for bias testing in Indian banking AI models, and the RBI’s current guidelines suggest rather than mandate specific testing methodologies.
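Absent a mandated methodology, one widely used starting point is an approval-rate parity check: compare approval rates across applicant groups and flag any ratio below a chosen threshold (the 0.8 "four-fifths" rule, borrowed from US employment law, is a common if imported benchmark). A minimal sketch, with entirely invented data and group labels:

```python
# Minimal disparate-impact check: each group's approval rate divided by
# the highest group's rate. Ratios below the threshold (0.8 here, after
# the "four-fifths" rule) warrant investigation. Data is invented.

def approval_rate(decisions: list[bool]) -> float:
    return sum(decisions) / len(decisions)

def disparate_impact(groups: dict[str, list[bool]], threshold: float = 0.8):
    """Return {group: (ratio vs best-treated group, flagged?)}."""
    rates = {g: approval_rate(d) for g, d in groups.items()}
    reference = max(rates.values())
    return {g: (r / reference, r / reference < threshold) for g, r in rates.items()}

groups = {
    "metro_thick_file": [True] * 80 + [False] * 20,  # 80% approved
    "tier3_thin_file": [True] * 55 + [False] * 45,   # 55% approved
}
print(disparate_impact(groups))
```

A check like this is only a first pass—it says nothing about whether the rate gap is justified by genuine risk differences—but it is cheap to run, easy to audit, and a plausible baseline for any industry-wide testing standard.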

Where This Goes

The direction is clear: AI will play an increasingly central role in Indian bank lending decisions. The potential benefits—faster processing, broader inclusion through alternative data, lower operational costs—are too significant for banks to ignore.

But the governance infrastructure needs to catch up. Banks that move fast on deployment without investing equally in model risk management, bias testing, and explainability frameworks are building on a foundation that won’t survive the next wave of regulatory scrutiny.

The Indian Banks’ Association has formed a working group on AI governance standards, expected to publish recommendations by mid-2026. Whether those recommendations have teeth—or become another advisory document that banks acknowledge and quietly shelve—will shape the trajectory of AI lending in India for the next decade.