
The Secret Behind Enterprise AI That Holds Up in Production? Know Where It Breaks Before Your Clients Do

Written by: Arundhati Kumar

Elias Tounzal, a judge at the AI Agents and MCP Hardware Hackathon, improved product quality and built AI tools now used by the world's leading VC firms. The method is repeatable, and it starts with getting the engineering fundamentals right.

The enterprise software world is at a tipping point. Gartner projects that 40% of enterprise applications will embed AI agents by the end of 2026, up from under 5% in 2025. Investment has followed. So have announcements, partnerships, and conference keynotes. The race to embed intelligence into business tools is no longer theoretical. It is happening now, inside the platforms that the world's most influential institutions depend on daily.

But the challenge is not a shortage of tools. It is a shortage of engineers who understand both worlds: the depth of modern software architecture and the practical realities of building AI systems that actually hold up. Elias Tounzal is a Software Engineer at Quaestor Technologies, Inc., the legal entity operating under the commercial name Standard Metrics, a speaker at international developer conferences, and a judge at AI hackathons in San Francisco. He has spent years building products for some of the most demanding institutions in finance and venture capital. That combination of technical rigor, hands-on AI experience, and cross-industry exposure places him at exactly the intersection the industry needs most right now: an engineer who can close the gap between AI's promise and its real-world delivery.

That gap was on full display at the January 2026 AI Agents and MCP Hardware Hackathon in San Francisco, sponsored by OpenAI, CrewAI, and Nevermined, where Elias Tounzal served as a judge. Among the projects he evaluated was precisionBOM: a multi-agent system that deployed four specialized agents in parallel to compress bill-of-materials sourcing from over 40 hours to under four minutes. "CrewAI orchestrates our multi-agent system while GPT-5-nano handles inference," the team explained. "We found it significantly faster than older models, critical for real-time BOM analysis." The architecture was a proof of concept for what disciplined agentic design can achieve, the same principle behind Standard Metrics' AI Analyst, applied not to venture capital portfolios, but to supply chain operations.
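The fan-out pattern the team describes, several specialized agents attacking one problem in parallel, can be sketched in plain Python. The agent functions below are illustrative stand-ins, not precisionBOM's actual code, which the team built on CrewAI with an LLM doing the real work inside each agent:

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stand-ins for the four specialized sourcing agents;
# a real system would run an LLM call (e.g. via CrewAI) inside each.
def find_suppliers(part):
    return f"suppliers for {part}"

def check_stock(part):
    return f"stock levels for {part}"

def compare_prices(part):
    return f"price comparison for {part}"

def verify_compliance(part):
    return f"compliance status for {part}"

AGENTS = [find_suppliers, check_stock, compare_prices, verify_compliance]

def analyze_bom_line(part):
    """Fan one bill-of-materials line out to every agent in parallel
    and collect their findings into a single report."""
    with ThreadPoolExecutor(max_workers=len(AGENTS)) as pool:
        futures = [pool.submit(agent, part) for agent in AGENTS]
        return [f.result() for f in futures]

print(analyze_bom_line("10k resistor"))
```

The speedup the team reports comes from exactly this shape: the four lookups are independent, so wall-clock time is bounded by the slowest agent rather than the sum of all four.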

How One AI Tool Replaced Hours of Portfolio Reporting for VC Firms 

Standard Metrics is a SaaS platform that automates financial and performance data collection for venture-backed companies and their investors, serving over 10,000 portfolio companies and more than 100 VC and private equity firms, including Bessemer Venture Partners, General Catalyst, Accel, Salesforce Ventures, Spark Capital, and Madrona. The problem it addresses is straightforward but technically demanding: portfolio reporting involves large volumes of fragmented data spread across dozens, sometimes hundreds, of companies. Turning that data into clear, actionable insight for portfolio reviews, LP reporting, valuations, and diligence is not a simple engineering task.

The centerpiece of Tounzal's work on the platform is its AI Analyst, an intelligent assistant that lets investors and founders query their portfolio data in plain language. He first developed an early version: a focused AI chat that let users ask questions about a single portfolio company. He then designed and built the second iteration, extending the scope to an investor's entire portfolio at once.

"I built the frontend of that agent and contributed to the core backend as well," Tounzal explains. "The goal? Give users a way to ask real questions about their portfolio, and actually get useful answers. Not just summaries. Genuine analysis. That meant thinking carefully about how the agent reasons, what data it pulls from, and how the interface shows users when it's confident versus when it's uncertain."
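The confident-versus-uncertain distinction Tounzal describes can be made concrete. The sketch below is a minimal illustration of the idea, with hypothetical field names rather than Standard Metrics' actual API: every answer carries its underlying sources and a confidence label, and the rendering layer surfaces uncertainty instead of hiding it behind fluent prose:

```python
from dataclasses import dataclass

# Hypothetical answer shape; fields are illustrative, not the product's API.
@dataclass
class AnalystAnswer:
    text: str
    sources: list       # which portfolio data points the answer draws on
    confidence: str     # "high" or "low"

def render(answer: AnalystAnswer) -> str:
    """Show the user what the answer rests on, and flag weak answers."""
    prefix = "" if answer.confidence == "high" else "[low confidence] "
    cites = ", ".join(answer.sources) or "no underlying data"
    return f"{prefix}{answer.text} (based on: {cites})"

print(render(AnalystAnswer("Q3 burn rose 12%", ["Q2 P&L", "Q3 P&L"], "high")))
print(render(AnalystAnswer("Runway is roughly 14 months", [], "low")))
```

The design choice worth noting is that confidence is part of the data model, not an afterthought in the UI, so every surface that displays an answer can decide how loudly to hedge it.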

The AI Analyst launched in January 2026. Within weeks, half of the platform's customers had adopted it. The client list signals the bar this product is held to: Accel, General Catalyst, and Bessemer Venture Partners. These are institutions where data accuracy is not a preference. It is a baseline requirement.

Building an AI product that meets that standard does not begin with choosing the right model or the most capable API. It begins much earlier, with an engineering discipline that was developed and tested long before the AI wave arrived. That discipline, in Tounzal's case, was forged in a different kind of pressure environment entirely.

Testing as a Strategic Investment, Not an Afterthought

Some of the highest-impact engineering work is invisible until something goes wrong. At ActiveViam, where Tounzal was one of the main contributors to the Atoti UI platform, he encountered a problem that many engineering teams quietly put off: inadequate automated testing. The product was a frontend analytics platform used by major global financial institutions, including JPMorgan Chase, HSBC, Société Générale, and Accenture, in environments where unreliable software carries consequences far beyond user frustration.

Tounzal increased test coverage by approximately 50%, embedded automated testing into the standard development workflow, and became the team's internal authority on testing practices. The downstream effects were measurable: fewer bugs reached production, contract renewal rates improved, and the product became easier to demonstrate to enterprise prospects.

"When discovering a new technology, I take the time to fully understand it, including its trade-offs, and avoid relying on things that work magically," Tounzal notes. "Testing is part of that. You can't trust a system you haven't verified. And in a financial platform used by major banks, trust is everything. A bug is not just an inconvenience; it can affect real decisions, money, and clients."

Know Where It Breaks. Build Tools That Last 

The challenge Tounzal returns to is one many engineering teams are learning to navigate: deploying an AI feature is not the same as deploying it well. From the outside, they can look identical. In practice, the distinction matters enormously.

Agentic architectures introduce failure modes that traditional software does not. AI systems reward teams who invest in the right testing approaches. Maintaining code quality in a codebase that interacts with probabilistic models becomes far more tractable with deliberate design decisions at every layer. The teams that get this right tend to start not with the right API, but with the right engineering discipline.

"The AI wave was impossible to ignore," Tounzal says, "so I rode it, went deep on the technology, and became one of the main contributors to AI-driven tools at my company. But honestly? The foundation never changed. Understand the system. Know where it breaks. Build it so that when it does break, you can find the problem and fix it fast. That's true for a React component and an AI agent. The principles do not change."

Tounzal carries the principles he developed in traditional software engineering directly into AI system design, weighing trade-offs carefully, testing systematically, and always knowing where the system breaks before anyone else does. The method is not new. The environment it gets applied to is.

That discipline is precisely what allowed him to build an AI tool that many of the world's most demanding VC firms adopted within weeks of launch. It is what makes his approach repeatable, and what separates AI that holds up in production from AI that does not. Tounzal's ambitions are straightforward: grow his contributions at Quaestor Technologies, Inc., go deeper in the AI space, and eventually build something of his own. The direction is set. The work is ongoing.

The broader lesson his career points toward is one every engineering team can act on today: enterprise AI does not succeed because of the right API. It succeeds because the engineer behind it understands the system completely, builds for failure from day one, and gets the foundation right before anything else. In an industry that moves fast, that kind of depth does not just hold up. It compounds.
