Adverse Action Notices for AI Credit Decisions: An ECOA Reg B Playbook
Why This Topic Matters Now
The CFPB's Consumer Financial Protection Circular 2023-03 settled an argument the industry had been having quietly for two years: a lender using a complex AI or ML credit model still has to give the borrower the specific, accurate principal reasons for denial. The bureau's August 2024 comment letter on AI in financial services reinforced the point. If your scorecard is a gradient-boosted tree with 1,400 features, "credit history" is not a reason. Neither is "score below cutoff."
We work with banks and non-bank lenders deploying AI agents in origination and servicing, and adverse action is where most projects stall. So this post is the playbook we use.
What Reg B Actually Requires
ECOA, implemented through Regulation B (12 CFR 1002.9), requires creditors to deliver an adverse action notice within 30 days of an unfavorable credit decision. The notice has to disclose the specific principal reasons (generally no more than four) that drove the decision. The CFPB's sample Form C-1 has a checklist of reasons, and the bureau has now said in plain language: you cannot pick a generic checkbox if it is not the actual reason. If your AI model declined the application because of a debt-to-income ratio swing in the last 90 days, checking "delinquent past or present credit obligations" on the sample form is wrong.
The other requirement that gets missed: the reasons must be specific. "Length of employment" is on the form. "Length of employment less than 12 months" is the actual reason. The first one is generic. The second one is what Reg B asks for.
Where AI Models Break Adverse Action
Three failure modes show up repeatedly:
- The model gives a probability score with no per-feature attribution, so the compliance team back-fits a reason from the application data. That is reason-code laundering, and it is what the CFPB called out.
- The model uses proxies (zip code interactions, device type, transaction patterns) that are technically predictive but cannot be explained in language a borrower understands or that a fair-lending team can defend.
- The shadow-deployment phase generates explanations from a different model than the one making decisions, so the reasons drift from the actual driver.
A Workable Architecture
Here is the architecture we deploy. It is not the only way, but it is auditable and it survives a fair-lending exam.
1. Use an interpretable production model where you can
A monotonic gradient-boosting model with constrained features is a defensible choice. So is a generalized additive model (GAM) for the underwriting layer, with a small ML layer for fraud or income verification. The point is: the model that issues the decision should be the model that produces the reasons. SHAP on a black box gives you a story; an interpretable model gives you the actual driver.
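For illustration, here is what a monotonic constraint looks like in practice. This is a minimal sketch using LightGBM's monotone_constraints parameter; the feature names, constraint directions, and synthetic training data are ours for the example, not a real credit policy.

```python
import lightgbm as lgb
import numpy as np
import pandas as pd

features = ["dti_ratio", "months_employed", "num_delinquencies_24m", "credit_utilization"]
# Direction per feature: +1 means the score may only rise as the feature rises,
# -1 means it may only fall. Label convention here: 1 = loan performed.
constraints = [-1, 1, -1, -1]

model = lgb.LGBMClassifier(
    monotone_constraints=constraints,
    n_estimators=400,
    learning_rate=0.05,
)

# Synthetic stand-in data so the sketch runs end to end.
rng = np.random.default_rng(0)
X_train = pd.DataFrame(rng.random((500, len(features))), columns=features)
y_train = (rng.random(500) > X_train["dti_ratio"] * 0.6).astype(int)
model.fit(X_train, y_train)
```

The constraint is what makes the reason defensible: if the model can only penalize a rising DTI, "debt-to-income ratio too high" is causally true by construction, not a post-hoc story.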
2. Generate top-N feature contributions per decision
For every declined application, persist the ranked feature contributions, the borrower-facing reason text, the regulatory category from the bank's reason-code dictionary, and the model version. We store this as a structured artifact attached to the decision record so the audit pack is complete on day one.
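A minimal sketch of that artifact as a data structure. Field names are illustrative; your decision log will have its own schema, but everything here should be captured together at decision time.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeatureContribution:
    feature: str          # model feature name, e.g. "dti_ratio"
    contribution: float   # signed contribution to this applicant's score
    reason_text: str      # borrower-facing explanation from the dictionary
    regb_category: str    # mapped Reg B reason category

@dataclass
class AdverseActionArtifact:
    application_id: str
    model_version: str
    decision: str                             # e.g. "decline"
    contributions: list[FeatureContribution]  # ranked, most adverse first
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```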
3. Map model features to plain-English reasons
This is the work most teams underinvest in. Build a reason-code dictionary that maps each model feature (or feature group) to a borrower-readable explanation and a Reg B category. Have your fair-lending counsel sign off on the mapping. Update it whenever you retrain. We help banks build this dictionary from policy documents and existing reason-code lists, and it cuts review time by roughly 70% in the projects we have run.
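Here is a stripped-down sketch of the dictionary's shape. The entries are illustrative, and the real mapping has to come from your policy documents and counsel review, but the structure is the point: every model feature that can drive a decline resolves to approved borrower text and a Reg B category.

```python
REASON_DICTIONARY = {
    "dti_ratio": {
        "reason_text": "Debt-to-income ratio exceeds program guidelines",
        "regb_category": "Income insufficient for amount of credit requested",
    },
    "months_employed": {
        "reason_text": "Length of employment less than 12 months",
        "regb_category": "Length of employment",
    },
    "num_delinquencies_24m": {
        "reason_text": "Delinquencies on credit obligations within the past 24 months",
        "regb_category": "Delinquent past or present credit obligations",
    },
}

def map_reasons(ranked_features: list[str], top_n: int = 4) -> list[dict]:
    """Resolve the top-N adverse features to approved borrower text and Reg B category."""
    mapped = [REASON_DICTIONARY[f] for f in ranked_features if f in REASON_DICTIONARY]
    return mapped[:top_n]
```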
4. Hold a SCAN (Specific, Causal, Accurate, Non-discriminatory) review
Before any reason ships in a notice, run it through four checks. Is the reason specific to this applicant? Is it causal in the model (not just correlated)? Is it accurate against the application data? Is it free of disparate-impact concerns? Failed reviews route to a human credit officer.
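A sketch of the SCAN gate in code. The check bodies here are stubs standing in for logic that is necessarily bank-specific (attribution thresholds, data validation, fair-lending screens); the routing behavior is what matters.

```python
MIN_CONTRIBUTION = 0.05  # illustrative floor: below this, the feature was not a principal driver

def validate_against_application(reason: dict, application: dict) -> bool:
    # Stub: confirm the applicant's actual data supports the stated reason.
    return True

def passes_fair_lending_screen(reason: dict) -> bool:
    # Stub: run the reason against the bank's disparate-impact screens.
    return True

def scan_review(reason: dict, application: dict, contribution: float) -> dict:
    checks = {
        # Specific: borrower text is tailored, not a restatement of the generic category.
        "specific": reason["reason_text"] != reason["regb_category"],
        # Causal: the feature actually moved this applicant's score.
        "causal": abs(contribution) >= MIN_CONTRIBUTION,
        # Accurate: the application data supports the statement.
        "accurate": validate_against_application(reason, application),
        # Non-discriminatory: the reason clears the fair-lending screen.
        "non_discriminatory": passes_fair_lending_screen(reason),
    }
    # Callers route any failure to a human credit officer before the notice ships.
    return {"passed": all(checks.values()), "checks": checks}
```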
What the AI Agent Does in the Loop
Our agents are deployed to do the parts that scale poorly with humans. For adverse action, that means:
- Pulling the model's feature contributions from the decision log
- Mapping them to the reason-code dictionary and selecting the top three or four
- Drafting the borrower notice in the borrower's preferred language, with the FCRA disclosures attached if a credit report was used
- Routing the draft to a credit officer for sign-off when the decision is in a flagged segment (small business, fair-lending sensitive zip codes, or a model-confidence band that is below threshold)
Every step writes back to the decision record so the borrower file, the model card, and the audit pack stay in sync.
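Here is the loop at sketch level. Every helper except the orchestrator is a hypothetical stand-in for a production service, and map_reasons comes from the dictionary sketch above; the structure is the point: attribution, dictionary mapping, draft, conditional human sign-off, write-back.

```python
CONFIDENCE_FLOOR = 0.80  # illustrative threshold for routing to a human

def fetch_decision_record(application_id: str) -> dict:
    # Stub: reads feature contributions and flags from the decision log.
    return {"ranked_features": ["dti_ratio"], "preferred_language": "en",
            "credit_report_used": True, "flagged_segment": False,
            "model_confidence": 0.91}

def draft_notice(reasons: list, language: str, include_fcra: bool) -> dict:
    # Stub: renders the borrower notice from approved templates.
    return {"reasons": reasons, "language": language, "fcra": include_fcra}

def route_to_credit_officer(notice: dict) -> dict:
    # Stub: queues the draft for human sign-off.
    return notice

def issue_notice(notice: dict) -> None:
    pass  # Stub: delivers the notice to the borrower.

def write_back(application_id: str, notice: dict, reasons: list) -> None:
    pass  # Stub: syncs the decision record, model card, and audit pack.

def process_adverse_action(application_id: str) -> None:
    decision = fetch_decision_record(application_id)
    reasons = map_reasons(decision["ranked_features"], top_n=4)
    notice = draft_notice(reasons, decision["preferred_language"],
                          include_fcra=decision["credit_report_used"])
    if decision["flagged_segment"] or decision["model_confidence"] < CONFIDENCE_FLOOR:
        notice = route_to_credit_officer(notice)
    issue_notice(notice)
    write_back(application_id, notice, reasons)
```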
Common Mistakes to Avoid
Three patterns we see fail in exams:
- Using SHAP values on the production model but writing reasons that match a different (interpretable) surrogate model. Pick one and stick to it.
- Listing four reasons because the form has four checkboxes. Reg B says principal reasons. If three are principal and the fourth is filler, drop the fourth.
- Skipping the FCRA "right to a free file disclosure" and credit-score notice when a credit report was used. Reg B and FCRA notices are different and both apply.
Implementation Timeline
A reasonable timeline for adding compliant adverse action to an AI credit decisioning stack is six to ten weeks: two for the reason-code dictionary, two for the explanation pipeline and feature attribution, two for QA and SCAN tooling, and the rest for validation against historical declines. We run the pilot against the prior 2,000 declines so the model output can be compared to what humans wrote.
What Your Audit Pack Looks Like
When the examiner asks, you should be able to hand over:
- the model card and validation report
- the reason-code dictionary with version history
- a CSV of every adverse action issued in the period, with the model version, top features, reasons given, notice language, and timestamp
- the QA sample with SCAN results
If you cannot produce this in a day, your AI program is not ready for production credit decisions.
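The CSV piece is mechanical if the step-2 artifacts exist. A minimal sketch, assuming decision records are already available as structured dicts (column names mirror the list above and are illustrative):

```python
import csv

COLUMNS = ["application_id", "issued_at", "model_version",
           "top_features", "reasons_given", "notice_language"]

def export_audit_csv(records: list[dict], path: str) -> None:
    """Write one row per adverse action issued in the exam period."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=COLUMNS)
        writer.writeheader()
        for rec in records:
            writer.writerow({col: rec.get(col, "") for col in COLUMNS})
```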
Pranay Shetty
CEO & Co-Founder