
As AI Takes on Decision Making in Finance, the Risk Is No Longer Just Prediction

Written by: IndustryTrends

Artificial intelligence has long been used to analyse markets. Now, it is beginning to act within them.

Across trading desks, asset managers and financial infrastructure providers, the role of AI is shifting from generating signals to influencing decisions. This transition from observation to action is quietly redefining how risk is understood in modern markets.

For years, AI in finance has been framed around prediction: forecasting price movements, identifying patterns and optimising strategies. But as systems become more integrated into execution and workflow processes, the implications are becoming more complex.

At the centre of this shift is a simple but critical question: what happens when an AI system is wrong?

This issue was a focal point at the recent Agentic AI and Automation in Finance Summit in Atlanta, where discussions moved beyond model performance and toward system accountability. During a panel featuring Kaushal Sheth of GFT Technologies, alongside Juan Mendez of BlackRock, the conversation highlighted how AI’s expanding role is introducing a new category of operational risk.

The challenge is no longer limited to whether a model can generate accurate outputs. It is whether those outputs, when embedded into decision-making processes, can be trusted under real market conditions.

Agentic AI systems, designed to operate across multiple stages of financial workflows, are accelerating this shift. These systems can analyse data, generate insights and, in some cases, trigger actions without direct human intervention. While this improves efficiency, it also compresses the margin for error.

In traditional models, a flawed signal might be ignored or filtered. In autonomous systems, the same flaw can propagate into execution.
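The distinction can be made concrete with a minimal sketch. The example below is purely illustrative: the `Signal` class, the size threshold and the pipeline functions are all hypothetical, not taken from any system mentioned in the article. It contrasts a fully autonomous path, where a flawed output flows straight into execution, with a gated path that filters implausible signals first.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    symbol: str
    size: float  # desired order size; a flawed model can emit an extreme value

def sanity_check(signal: Signal, max_size: float = 1_000.0) -> bool:
    """Reject signals outside plausible bounds before they reach execution."""
    return abs(signal.size) <= max_size

def execute(signal: Signal, order_log: list) -> None:
    """Stand-in for an execution venue: simply records the order."""
    order_log.append((signal.symbol, signal.size))

def autonomous_pipeline(signal: Signal, order_log: list) -> None:
    # No gate: the flawed signal propagates directly into execution.
    execute(signal, order_log)

def gated_pipeline(signal: Signal, order_log: list) -> None:
    # With a filter: the same flawed signal is dropped before execution.
    if sanity_check(signal):
        execute(signal, order_log)

# An implausibly large order, standing in for a bad model output.
flawed = Signal("XYZ", 1_000_000.0)

ungated_log, gated_log = [], []
autonomous_pipeline(flawed, ungated_log)
gated_pipeline(flawed, gated_log)
```

After running this, the ungated log contains the flawed order while the gated log stays empty; the flaw is identical in both cases, but only one architecture lets it reach the market.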

That distinction is subtle, but significant.

As Kaushal Sheth noted during the discussion, understanding how systems behave during abnormal market conditions is becoming more important than their performance during stable periods. Financial markets are defined by regime shifts: moments when correlations break, liquidity disappears and historical patterns lose relevance.

These are precisely the environments where AI systems are most likely to be tested.

Yet they are also the hardest to simulate.

This creates a structural gap between development and deployment. While models can be trained on vast datasets, real world validation still depends on exposure to unpredictable market conditions. The feedback loop is slower, and the consequences of failure are more immediate.

For institutions, this is forcing a reassessment of how AI is integrated into core systems.

The focus is shifting toward controllability, transparency and resilience, not just performance metrics. Systems must not only produce strong results, but also fail in predictable and manageable ways.
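One common way to make failure predictable is a circuit breaker that halts automated activity once outputs leave plausible bounds. The sketch below is a simplified illustration, not any institution's actual control: the class name, thresholds and trip logic are all assumptions chosen for clarity.

```python
class CircuitBreaker:
    """Halts activity predictably once model outputs look anomalous."""

    def __init__(self, max_anomalies: int = 3,
                 lower: float = -10.0, upper: float = 10.0):
        self.max_anomalies = max_anomalies  # anomalies tolerated before tripping
        self.lower = lower                  # plausible output range
        self.upper = upper
        self.anomalies = 0
        self.tripped = False

    def record(self, value: float) -> bool:
        """Return True if the output may be acted on, False otherwise."""
        if self.tripped:
            return False  # once tripped, stay halted until a human intervenes
        if not (self.lower <= value <= self.upper):
            self.anomalies += 1
            if self.anomalies >= self.max_anomalies:
                self.tripped = True  # fail closed, rather than keep acting
            return False  # never act on an out-of-range output
        return True

breaker = CircuitBreaker(max_anomalies=2)
# Two out-of-range outputs trip the breaker; the later in-range value
# is then refused as well, which is the predictable failure mode.
results = [breaker.record(v) for v in [1.0, 50.0, -99.0, 2.0]]
```

The key design choice is that the system stops doing anything once it distrusts its own outputs, trading availability for controllability, which is the balance the panel discussion described.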

Through both his work at GFT Technologies and his involvement with Otonomii, Sheth has been engaged in building AI architectures that prioritise this balance between autonomy and oversight.

The broader implication is clear.

As AI continues to evolve within finance, the competitive advantage will not simply lie in developing more advanced models. It will depend on how effectively those models are governed once deployed.

In markets where uncertainty is constant, the risk is no longer just about being wrong.

It is about what happens next.
