The Blind Spots in AI Security That Could Cost Us All: Brian Stevens, SVP & CTO – AI, Red Hat, Explains the Unsolved Challenges
Enterprises today aren’t struggling to build AI models. They’re struggling to operationalize and scale them securely across hybrid environments. The “last mile of AI adoption” has emerged as one of the biggest barriers to enterprise success, delaying time-to-value and creating inefficiencies in deployment.
Brian Stevens, SVP & CTO – AI at Red Hat, is tackling this challenge head-on with the Red Hat AI Inference Server, built on the open-source vLLM project and enhanced with Neural Magic optimizations.
In this conversation, he explains why open source is the foundation of enterprise trust in AI, how organizations can bridge the gap between experimentation and scale, and the evolving skill sets leaders and developers need to thrive in an AI-powered enterprise future.
With Red Hat's AI Inference Server, how are you reimagining the last mile of AI adoption, taking models from lab experiments to enterprise-scale, secure deployments?
The "last mile" of AI adoption is where most enterprise initiatives stall. Organizations can build successful proofs-of-concept around specific use cases, but scaling these models to serve thousands of users and agents while maintaining security, performance and operational efficiency is the real challenge.
Red Hat's AI Inference Server reimagines this by treating AI deployment as a journey across hybrid environments. Built on the proven vLLM project and enhanced with Neural Magic's optimization technologies, our approach addresses three critical gaps: infrastructure reality, economic sustainability, and security without compromise.
Most enterprises need AI solutions that work with existing investments—whether on-premises, across multiple clouds, or at the edge—without forcing infrastructure overhauls. Our solution delivers 3-5x better price-performance while enabling deployment within existing security perimeters. What once took months of custom engineering now takes just days, with the confidence that solutions will scale securely as demand grows.
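To make that concrete, here is a minimal sketch of offline inference with the upstream open-source vLLM Python API that the server builds on. The model identifier, prompts, and sampling settings are illustrative assumptions, and the packaging in Red Hat AI Inference Server may differ from the raw upstream usage shown here.

```python
# Minimal offline-inference sketch using the upstream vLLM API.
# The model ID is illustrative; quantized/compressed checkpoints
# (such as those produced by Neural Magic-style optimization)
# load through the same interface.
from vllm import LLM, SamplingParams

# vLLM handles continuous batching, paged attention, and GPU
# scheduling behind this one object.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

# Generation settings shared by all prompts in the batch.
params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

prompts = [
    "Summarize the benefits of hybrid cloud AI deployment.",
    "List three risks of scaling inference without capacity planning.",
]

# generate() batches the prompts and returns one result per prompt.
for output in llm.generate(prompts, params):
    print(output.outputs[0].text.strip())
```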
What's the most overlooked factor in measuring AI's business impact?
The most overlooked factor is time-to-value—not just technical deployment time, but the end-to-end timeline from AI investment to measurable business outcomes. Many enterprises focus intensely on model performance but underestimate the operational complexity of scaling inference, which is ultimately where the business value is unlocked or lost.
Organizations spend months perfecting a model that achieves 95% accuracy instead of 93%, then take another six months to integrate it with business processes and train teams. Meanwhile, competitors deploy "good enough" solutions quickly and start generating ROI immediately.
Real business impact stems from three interconnected factors: adaptation velocity (the speed at which you can modify solutions as needs change), organizational learning acceleration (how AI compresses decision-making cycles), and compound value creation (how each AI project makes the next one easier).
We're seeing forward-thinking organizations start to measure "AI momentum": tracking how AI capabilities in production accelerate other business initiatives and create compounding competitive advantages. This is the fundamental difference between treating AI as just an infrastructure cost and understanding its value as a strategic asset.
How do you see open source shaping Red Hat's role in making vLLM enterprise-ready? What bold shifts do you predict next?
Open source is emerging as the trusted platform layer for enterprise AI. While the industry debates proprietary versus open models, enterprises need AI solutions they can trust, modify, and control. Open source provides exactly that foundation.
Enterprise AI isn't just about powerful models—it's about having the confidence to bet your business on them. When customer service, supply chain, or financial operations depend on AI, you need transparency, control, and adaptability. Open source provides all three in ways that proprietary solutions cannot.
Our role with vLLM has evolved into "enterprise-grade open source stewardship." We actively strengthen projects like vLLM for enterprise use while contributing improvements back to the community, creating a virtuous cycle where enterprise requirements drive innovation, benefiting everyone.
I predict three bold shifts already underway: First, the rise of "AI supply chain" thinking, where enterprises demand visibility into model provenance, training data, and security vulnerabilities. Second, community-driven enterprise features emerging from collaborative problem-solving rather than vendor roadmaps. Third, "AI infrastructure independence"—enterprises choosing solutions that provide a choice of models, cloud providers, and hardware accelerators.
The most significant shift will be enterprises realizing open source AI isn't just cost-effective—it's strategically superior. When you can inspect, modify, and optimize your AI infrastructure, you're building a competitive advantage, not just buying solutions.
What new skill sets should developers, IT teams, and even business leaders prioritize to stay relevant in this AI-driven enterprise world?
The real AI skills gap isn’t about coding — it’s about systems thinking. Too often, people chase Python tutorials or transformer papers, but the real advantage comes from seeing how AI fits into larger business systems.
For developers, that means becoming AI integrators — orchestrating services, managing data flows, and building architectures that evolve as AI evolves.
For IT teams, it means mastering AI operations — deploying and running an open source platform that becomes the foundation for efficient, scalable model serving and agentic integration (see the sketch after this list).
For business leaders, it’s about AI product sense — knowing where AI adds genuine value, where it doesn’t, and how to align it with strategy.
And for everyone: the future lies in hybrid intelligence—the ability to design human-AI collaboration, embed ethics into practice, and guide organizations through change.
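As one hedged illustration of that operations work, the sketch below queries a model served by vLLM through its OpenAI-compatible endpoint. The host, port, and model name are assumptions for a local test deployment (e.g., one started with `vllm serve meta-llama/Llama-3.1-8B-Instruct`), not a Red Hat-specific configuration.

```python
# Sketch: calling a locally served vLLM model via its
# OpenAI-compatible API. Endpoint and model ID are illustrative.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # default local vLLM endpoint
    api_key="EMPTY",  # vLLM accepts any key unless auth is configured
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain paged attention in one sentence."},
    ],
    max_tokens=64,
)

print(response.choices[0].message.content)
```

Because the endpoint speaks the same protocol as hosted model APIs, teams can move workloads between cloud, on-premises, and edge deployments without rewriting application code.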
The professionals who thrive won’t be the ones who just “know AI” — they’ll be the ones who know how to make AI work with people, at scale.
Enterprises that develop these capabilities systematically, treating AI skills as ongoing strategic initiatives rather than one-time training, will lead their markets.