"When a Model Gets It Wrong, Billions Are at Stake" – Abdelmadjid Laouedj on Building Credit Risk Tools for Large Banks

Written By: Arundhati Kumar

A Berkeley-trained quantitative researcher who developed quantitative approaches to align internal credit assessments with external benchmarks in line with Federal Reserve regulatory frameworks explains what happens when $655 billion in credit losses meet imperfect models

According to S&P Global's "Global Credit Outlook 2026" report from December 2025, global bank credit losses are expected to rise 7.5% year-on-year to approximately $655 billion in 2026, with North American banks particularly exposed to tariff-sensitive sectors. When even small miscalculations in risk assessment translate into massive capital misallocation, the people who design credit risk models carry an outsized responsibility. Abdelmadjid Laouedj is a Quantitative Researcher at JPMorgan Chase, the world's largest bank by market capitalization. Within its Wholesale Credit Risk division, he develops predictive models for credit events, forecasts Loss Given Default, and builds tools designed to keep internal risk assessments consistent with regulatory expectations. Laouedj holds a Master of Financial Engineering from UC Berkeley's Haas School of Business, a program ranked first worldwide by TFE Times every year since 2016. Before that, he graduated from CentraleSupélec, France's second-ranked engineering school. We spoke with Laouedj about credit risk modeling from the inside and what it takes to build tools that support risk assessment and hold up to regulatory scrutiny.

Abdelmadjid, S&P Global projects global bank credit losses could hit $655 billion in 2026. For someone who builds credit risk models at JPMorgan every day, what does a number like that mean in practical terms?

It means the margin for error in our work is essentially zero. When you're modeling the probability of a company defaulting on its obligations, or predicting how much a bank can recover if that default happens, you're dealing with numbers that feed directly into capital allocation decisions. A model that overestimates recovery rates by even a few percentage points can leave a bank underprepared for a stress scenario. In large financial institutions, wholesale credit portfolios involve large corporate clients rather than individual consumers, and the exposures are enormous. Every assumption about a client's financial health, every variable we include, has consequences that multiply across the entire portfolio. When a model gets it wrong, billions are at stake, and that S&P forecast only reinforces how real the pressure on the system is right now.
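The scale effect Laouedj describes can be made concrete with a back-of-envelope expected-loss calculation. Every figure below is invented for illustration (the exposure, default probability, and recovery rates are not JPMorgan's), but the arithmetic shows how a three-point overestimate of recovery compounds across a large wholesale book:

```python
# Back-of-envelope sketch: how a small recovery-rate error scales across a
# large wholesale portfolio. All numbers are hypothetical.
exposure = 200e9          # assumed total exposure at default, $200B
pd = 0.02                 # assumed average one-year probability of default
true_recovery = 0.55      # "true" recovery rate on defaulted exposure
modeled_recovery = 0.58   # model overestimates recovery by 3 points

# Expected loss = exposure x probability of default x loss given default
true_loss = exposure * pd * (1 - true_recovery)
modeled_loss = exposure * pd * (1 - modeled_recovery)
shortfall = true_loss - modeled_loss

print(f"Expected-loss understatement: ${shortfall / 1e6:.0f}M")  # → $120M
```

Under these assumptions, a seemingly minor three-point recovery error understates expected losses by $120 million on a single portfolio, before any stress-scenario multiplier is applied.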

One of your projects at JPMorgan involves internal-to-external alignment of credit assessments, a framework that aligns the bank's internal credit ratings with widely recognized external rating agencies. Why can't a bank just use external ratings directly?

Because external ratings don't capture everything we know about a client. Large financial institutions typically draw on internal information that outside agencies don't see: financial ratios, transaction history, qualitative assessments from relationship managers, and market signals. But regulators, especially the Federal Reserve, need to see consistency between the internal view and what recognized external agencies say about that same entity. I built quantitative models to reconcile those two perspectives: statistical modeling that identifies where and why the differences arise and highlights situations where internal assessments differ significantly from external ones.
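The reconciliation idea can be sketched in miniature. The sketch below is purely illustrative, not Laouedj's actual framework: it assumes a hypothetical ordinal scale shared by both views, maps external letter grades onto it, and flags clients whose internal and external assessments diverge by a chosen number of notches:

```python
# Illustrative sketch of internal-vs-external rating reconciliation.
# The grade scale, threshold, and portfolio data are all invented.

# Map external agency letter grades onto a common ordinal scale (1 = strongest)
EXTERNAL_SCALE = {"AAA": 1, "AA": 2, "A": 3, "BBB": 4, "BB": 5, "B": 6, "CCC": 7}

def notch_gap(internal_grade: int, external_letter: str) -> int:
    """Signed distance between the internal ordinal grade and the external rating.

    Positive means the internal view is more pessimistic than the agency's.
    """
    return internal_grade - EXTERNAL_SCALE[external_letter]

def flag_divergences(portfolio, threshold=2):
    """Return clients whose internal and external views differ by >= threshold notches."""
    flagged = []
    for client, internal, external in portfolio:
        gap = notch_gap(internal, external)
        if abs(gap) >= threshold:
            flagged.append((client, gap))
    return flagged

# Hypothetical portfolio: (client, internal grade, external letter grade)
portfolio = [
    ("Acme Corp", 4, "BBB"),   # aligned views
    ("Globex", 6, "A"),        # internal view 3 notches more pessimistic
    ("Initech", 2, "BB"),      # internal view 3 notches more optimistic
]
print(flag_divergences(portfolio))  # → [('Globex', 3), ('Initech', -3)]
```

The real statistical modeling Laouedj describes goes well beyond notch counting, but the output serves the same purpose: surfacing the entities where the two perspectives disagree enough to warrant explanation.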

You also contributed to an internal platform to support the generation of data-driven credit risk assessments across clients. How does that differ from a conventional credit rating?

A conventional rating, whether internal or external, tends to lean heavily on historical financial statements: balance sheets, income reports, and debt ratios. The internal rating system goes further by combining a company's own historical performance with broader financial market signals. Instead of just looking at what a company reported last quarter, we're also incorporating how markets are pricing its debt, how industry peers are performing, and qualitative insights from the people closest to the client. Bringing all of that into a single, consistent grade means risk managers and senior executives can compare clients across portfolios on equal footing. Before such internal frameworks were implemented, different parts of the bank might have held slightly different views of the same client's risk profile, which created blind spots.

Before JPMorgan, you spent five months at Gustave Roussy, Europe's largest cancer center, building financial forecasting models. How did a hospital experience prepare you for Wall Street?

More than you might expect. Gustave Roussy handles massive cash flows from wildly different sources: government subsidies, insurance reimbursements, research grants, and private funding. Far less predictable than a company's quarterly revenue cycle. My job was to forecast cash surpluses using time-series methods so leadership could plan investments without jeopardizing operations, then design allocation strategies for excess liquidity. Same fundamental challenge as in banking, really: messy, complex data that you need to compress into a signal decision-makers can act on.
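The cash-surplus forecasting Laouedj describes can be sketched with one of the simplest time-series methods. The example below uses simple exponential smoothing on invented monthly figures; the actual work involved richer models and real hospital data, so treat this purely as an illustration of the "compress messy flows into one actionable number" idea:

```python
# Illustrative sketch: one-step-ahead forecast of monthly net cash flow
# via simple exponential smoothing. All figures and the alpha are invented.

def exp_smooth_forecast(series, alpha=0.3):
    """One-step-ahead forecast: exponentially weighted average of past values.

    Higher alpha reacts faster to recent months; lower alpha smooths more.
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical monthly net cash flows (millions of euros), mixing lumpy
# grant and subsidy income with steadier reimbursement revenue
monthly_net_flows = [4.2, -1.1, 3.8, 5.0, -0.6, 4.4]

forecast = exp_smooth_forecast(monthly_net_flows)
print(f"Next-month net flow forecast: {forecast:.2f}M EUR")
```

Even at this toy scale, the point survives: the raw series swings from -1.1 to +5.0, but the smoothed forecast gives leadership a single planning figure for investable surplus.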

Your educational path is unusual: three years of intensive preparatory classes in France, then CentraleSupélec, then UC Berkeley for financial engineering. How did that sequence shape your approach?

French preparatory classes are unlike anything in the American system. For two to three years, you study mathematics, physics, computer science, and, this surprises people, philosophy. The curriculum is designed to produce not just technically strong minds but people who can reason about complex problems from multiple angles. Admission to the top engineering schools is determined by a national ranking exam where candidates compete directly against one another, and those three years wired my brain for rigorous, proof-based thinking. You learn never to trust an intuition until you've verified it formally. CentraleSupélec then gave me applied mathematics and engineering foundations: optimization, stochastic processes, and machine learning. Berkeley, which runs the top financial engineering program worldwide, connected all of that to real market problems. Suddenly, stochastic calculus became option pricing, and statistical learning became credit scoring. Each stage built on the previous one, and I believe that combination of pure mathematical discipline with practical finance training is what allows me to operate at the level my current role demands.

Last year you were admitted to the Business & Economy Hub of the Alliance Top Association, a prestigious international organization. What drew you to that community?

Quantitative finance can be an insular world. You spend your days deep inside models and datasets, surrounded by people who think in the same mathematical language, and it's easy to lose perspective on how your work connects to the broader economy. Alliance Top brings together entrepreneurs, investors, business leaders, and economists from very different backgrounds and geographies, and that mix is exactly what I was looking for. The admission process itself was thorough: a panel review, documentation of professional contributions, recommendations from existing members. That told me the organization takes its standards seriously. For me, being part of that hub is a way to step outside the JPMorgan bubble, exchange ideas with people who think about markets and growth from completely different angles, and ultimately bring a wider lens back to my own research.

What advice would you give someone considering a career in quantitative risk modeling?

Invest heavily in mathematics, not the kind you use for homework, but the kind that teaches you to think about uncertainty, randomness, and estimation under imperfect conditions. Stochastic calculus, time-series analysis, probability theory – these aren't just coursework requirements; they're the language you'll use every single day. Beyond that, learn to communicate. Some of the most brilliant quantitative work fails to have an impact because the person who built the model can't explain it to a portfolio manager or a regulator in terms they can act on. I spend a significant portion of my time preparing materials for non-quantitative stakeholders, and that skill matters just as much as the modeling itself. Don't underestimate unglamorous work either: data cleaning, documentation, model validation. These tasks separate a model that lives in a notebook from one that runs in production at a major bank. And one more thing – I volunteer as a math and science tutor for students from disadvantaged backgrounds, and I've found that teaching others forces you to understand your own field at a deeper level. If you can explain stochastic processes to a teenager, you can certainly explain your model to a managing director.

Analytics Insight: Latest AI, Crypto, Tech News & Analysis
www.analyticsinsight.net