Podcast

AI Orchestration & Decision Intelligence: Newgen Software’s Varun Goswami on Enterprise-Scale Automation

How Enterprises Can Turn AI Experiments Into Governed, Scalable Decision Systems for Real Business Impact!

Written by: Market Trends

Artificial intelligence is rapidly becoming part of enterprise operations. Organizations across sectors, including banking, insurance, and healthcare, are adopting AI to automate workflows, improve decision-making, and enhance customer experiences. However, many companies still struggle to move beyond isolated pilots. 

This episode of the Analytics Insight Podcast explores how enterprises can transition from experimentation to enterprise-wide AI orchestration. Varun Goswami, Global Head of Product and AI at Newgen Software, shares insights into building intelligent systems that integrate automation, governance, and contextual decision-making to deliver measurable business outcomes.

1. Tell us about Newgen Software and your role in the company.

Ans: Newgen Software is a 30-year-old product organization based in India. We specialize in content management, process management, customer communication management, and AI. Our philosophy has been to enable enterprises to work seamlessly with their content. We work with some very large enterprises, including banks, insurance companies, and healthcare organizations, helping them organize their content and automate the processes built on it.

I’ve been with Newgen for 28 years now. I started as a programmer, and today I manage the product portfolio for Newgen, which includes our ECM, BPM, CCM, and AI products. My responsibilities are: one, the product roadmap; two, maintaining the backlog for each product; and three, setting the direction we are headed.

2. What separates organizations that are merely adopting AI from those that are orchestrating intelligent enterprises for decision excellence?

Ans: If you give a prompt to an LLM, yes, you will get some response out of that LLM, right? There's been a democratization in the sense that almost all organizations have access to the same set of tools, whether it is OpenAI, Gemini, Claude, or any other LLM. What differentiates a successful company is the ability to take an LLM's output and incorporate it into its real-life workflows, whether it's a regulator deciding whether to approve an application or a bank deciding on a very large commercial loan. 

3. What does it take to make them work within a governed and accountable enterprise framework rather than just isolated tools?

Ans: The first and most important need is a platform to orchestrate this end-to-end journey. On this platform, people and systems are working together. If you bring in an AI agent, that agent also needs to work on some part of this, maybe a claim approval agent or a credit decisioning agent. These agents need to work in a context. They need to understand the policies that govern them.

They need to understand what the new application is, who the customer is, and what the claim is all about. All of this context comes from the same orchestration layer, and once they've made the decision, they need to give that information back to it again. 
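The loop Goswami describes, an agent pulling its context (policies, application, customer) from the orchestration layer and writing its decision back to that same layer, can be sketched roughly as follows. All class and function names here are hypothetical illustrations, not Newgen's actual platform API, and the toy policy check stands in for an LLM-backed evaluation:

```python
from dataclasses import dataclass

# Hypothetical sketch of an agent working inside an orchestration layer.
# Names are illustrative only; they do not reflect Newgen's product.

@dataclass
class CaseContext:
    case_id: str
    customer: str
    application: dict   # e.g. the loan or claim details
    policies: list      # governing rules the agent must respect

@dataclass
class Decision:
    case_id: str
    outcome: str        # "approve" / "refer" / "decline"
    rationale: str      # why, retained for audit and explainability

class OrchestrationLayer:
    def __init__(self):
        self._cases = {}
        self.decisions = []

    def register_case(self, ctx: CaseContext) -> None:
        self._cases[ctx.case_id] = ctx

    def get_context(self, case_id: str) -> CaseContext:
        # The agent's entire context comes from this one layer.
        return self._cases[case_id]

    def record_decision(self, decision: Decision) -> None:
        # The decision flows back into the same layer that supplied context.
        self.decisions.append(decision)

def claim_approval_agent(layer: OrchestrationLayer, case_id: str) -> Decision:
    ctx = layer.get_context(case_id)
    # Toy policy check standing in for an LLM-backed evaluation.
    limit = ctx.policies[0]["auto_approve_limit"]
    if ctx.application.get("amount", 0) <= limit:
        decision = Decision(case_id, "approve", "Within auto-approval limit")
    else:
        decision = Decision(case_id, "refer", "Exceeds limit; needs human review")
    layer.record_decision(decision)
    return decision
```

The key design point is that the agent never holds state of its own: context in, decision out, with the orchestration layer as the single source of truth on both sides.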

4. How does grounding AI in a trusted enterprise context change the quality, reliability, and auditability of decisions?

Ans: I think that has been one of the biggest barriers to the adoption of AI. Enterprises simply cannot operate when they cannot trust an AI agent's output. When business leaders say that they do not trust AI decisions, what they usually mean is, one, that they cannot explain how the decision was reached.

Two, they cannot verify what information it was based on. And three, they cannot demonstrate to a regulator or a customer that the process was sound, that they took the right steps to ensure that wrong content does not enter or impact the process. So there are multiple layers where Newgen has been solving this problem.
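The three requirements above (explain how the decision was reached, verify what it was based on, and demonstrate the process was sound) map naturally onto an audit record attached to each decision. The schema below is a generic illustration of that idea, not Newgen's actual format; every field name is an assumption:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit record for an AI-assisted decision. The structure is
# illustrative only, chosen to mirror the three trust requirements above.

def build_audit_record(case_id, outcome, model, prompt_summary, sources, checks):
    return {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,
        # 1. How the decision was reached
        "model": model,
        "prompt_summary": prompt_summary,
        # 2. What information it was based on
        "sources": sources,          # document IDs / policy versions consulted
        # 3. Evidence the process was sound
        "content_checks": checks,    # e.g. retrieval filters, policy gates
    }

record = build_audit_record(
    case_id="CLM-104",
    outcome="approve",
    model="example-llm-v1",
    prompt_summary="Claim assessed against policy v3.2",
    sources=["policy_v3.2", "claim_form_104", "customer_profile_88"],
    checks=["source documents verified", "PII filter applied"],
)
print(json.dumps(record, indent=2))
```

Persisting a record like this alongside every agent decision is what turns "I don't trust the AI" into a question that can actually be answered for a regulator or a customer.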

5. What does it take to move from fragmented pilots to production-scale AI orchestration across the enterprise? 

Ans: This generative AI pilot trap is one of the most common challenges we hear about. When a new technology comes out, there is a sense of excitement about what it can do, so almost all organizations follow the same blueprint. They create small teams and ask those teams to run a proof of concept or a pilot. It works, and then the leadership gets encouraged: “Okay, this is something that we can do.”

Then the question comes up: how do we scale this? That is exactly the area where most organizations stall. This is not an AI problem. I mean, AI is doing what it is doing. It'll do what it is good at, right? The problem is that the right questions were not asked before the pilot. 

To know more about the discussion, listen to the full podcast.

