Artificial intelligence is quickly becoming a standard component of business functions.
Companies across industries are deploying AI to improve efficiency, automate workflows, and make faster decisions. The advantages of the technology are obvious, yet its rapid adoption creates new risks that organizations must learn to manage.
This episode of the Analytics Insight Podcast explores the most important governance issues that AI systems face in the current era. Raghuveer Kancherla, co-founder of Sprinto, explains how emerging risks such as shadow AI, hidden supply chain dependencies, and autonomous decision-making systems are changing the cybersecurity landscape.
Ans: Sprinto is an autonomous trust platform built for fast-growing companies. For startups, this means we get them compliant with certifications like SOC 2, ISO 27001, or whatever their customers are asking for, so they can build the trust required to win their customers' business. From an enterprise standpoint, this means standing up an enterprise-grade, end-to-end trust program that provides a single pane of glass for all their obligations.
Currently, my focus at Sprinto is on ensuring that trust is accessible to businesses of all sizes. I focus on our strategy to achieve this for our customers, translating it into short- and mid-term goals and keeping the entire team on track to meet them.
Ans: Employees are using generative AI tools daily. Companies are embedding AI into their products. Internal agents are making decisions. They're triggering workflows and interacting with sensitive data. The scale of adoption is outpacing the scale of oversight, and that's where we need to recognize that the old ways of governing our data will no longer work. This is where we need to use AI to govern AI. Traditional governance generally assumes systems are predictable, mostly static, and change very slowly.
New tools are cropping up almost daily, and you just cannot manage them in an episodic or periodic manner. So AI for AI means using intelligent, continuous methods to monitor, evaluate, and govern other AI systems in near real time. In simple terms, when machines start making decisions, you need machines that can keep up with them, so the guardrails move at the same pace as the technology.
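The idea of near-real-time, AI-on-AI governance can be sketched as a continuous evaluator that scores another system's outputs against policy rules as they stream in. The example below is a minimal, hypothetical illustration: the rule names, patterns, and record IDs are invented for demonstration, and a real deployment would use trained classifiers rather than two regexes.

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    output_id: str
    rule: str

# Hypothetical policy rules; a production guardrail would use ML
# classifiers, not just regex patterns.
RULES = {
    "email_address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"sk-[A-Za-z0-9]{20,}"),
}

def evaluate_output(output_id: str, text: str) -> list[Finding]:
    """Score a single AI output against the governance rules."""
    return [Finding(output_id, name)
            for name, pattern in RULES.items()
            if pattern.search(text)]

def monitor(stream):
    """Continuously evaluate a stream of (id, text) model outputs."""
    for output_id, text in stream:
        yield from evaluate_output(output_id, text)

# Simulated stream of outputs from some other AI system.
outputs = [
    ("r1", "Your order has shipped."),
    ("r2", "Contact jane@example.com, key sk-ABCDEFGHIJKLMNOPQRSTUV"),
]
flags = list(monitor(outputs))
print([(f.output_id, f.rule) for f in flags])
```

The point of the sketch is the shape, not the rules: evaluation runs continuously alongside the monitored system instead of as a periodic audit.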
Ans: Shadow AI is really about unapproved intelligence acting on your data. Today, AI tools spread from the bottom up, and many have freemium models, which means adoption does not start with procurement. It starts with an employee trying to move faster. With so much exposure to AI through tools like ChatGPT, everybody knows AI can make things faster and better for them, so people are trying new tools almost daily.
From their perspective, they save time, and none of this comes from bad intent. From a governance perspective, however, sensitive customer data may have been compromised, and the organization has no visibility into it. That is shadow AI, and these kinds of tools are cropping up daily.
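One common starting point for regaining that visibility is to compare outbound traffic against an allowlist of approved AI vendors. The sketch below is a toy illustration of that idea; the domain names, user names, and log format are all hypothetical stand-ins for whatever egress or SaaS-discovery telemetry an organization actually has.

```python
# Hypothetical allowlist of vetted AI services.
APPROVED_AI_VENDORS = {"api.openai.com"}

# Simulated egress log: (user, destination domain) pairs.
egress_log = [
    ("alice", "api.openai.com"),
    ("bob", "api.unvetted-ai-tool.example"),
    ("bob", "api.openai.com"),
]

def find_shadow_ai(log, approved):
    """Return sorted (user, domain) pairs hitting unapproved AI services."""
    return sorted({(user, domain)
                   for user, domain in log
                   if domain not in approved})

print(find_shadow_ai(egress_log, APPROVED_AI_VENDORS))
```

Even this crude allowlist comparison surfaces the core governance fact: which employees are sending data to which unapproved services.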
Ans: When companies adopt AI, they often think they are adopting a single tool or model, but in reality they are plugging into an ecosystem. A foundation model is built by one company, inference may be routed through other models, and data tagging is handled by yet other companies. Sometimes your data is used for training, sometimes it is not; it depends on your license. In essence, there are vendors behind vendors. Each layer adds dependency and risk, and the challenge is that these systems can change quietly.
Unlike traditional software, AI does not always behave in a fixed way. This makes oversight very complex, and when you depend on layers you cannot see, exposure grows, whether or not it is visible to you.
Ans: When AI is moving this fast, the important thing is to ensure that the governance and guardrails around what you are building with AI move just as fast, and that can only happen with AI itself. In essence, you use the new technology to guardrail itself. Given the velocity at which things are moving, you can no longer govern these systems manually; governance needs to become autonomous.
AI also makes much of this governance possible. If we adopt it correctly, it will let us embrace AI and reach the next level of evolution with confidence.
To learn more about the discussion, listen to the full podcast.