In today’s SaaS landscape, the pace of innovation can be relentless. Dozens of microservices, hundreds of weekly deployments, and rapid customer growth have become the new normal for high-growth software companies. Yet, behind this momentum, one recurring tension continues to surface: how can organizations move fast without compromising security?
For Amrit Pal Singh, an experienced security engineer and architecture specialist in cloud-native systems, this question has defined much of his career. With experience leading security architecture for a creative-cloud marketplace powered by more than forty AWS microservices and a hundred engineers, Singh has spent years proving that secure-by-default doesn’t have to mean slow-by-design. His philosophy is simple but profound—security should be invisible when done right.
“Security shouldn’t feel like a tax on development,” Singh, a member of the editorial boards of SARC and ESP journals, says. “When the systems are designed well, the secure path becomes the easiest one. Developers don’t have to think about compliance or policies—it’s already baked into the way they build.”
The scale of the challenge Singh faced was formidable. The platform he helped secure operated as a network of interdependent services, each maintained by different teams, each deploying independently. With such distribution, the risks multiplied: inconsistent infrastructure-as-code practices, secrets stored across repositories, duplicate scanning results, and fragmented toolchains that made visibility nearly impossible.
“Everyone wanted to move fast, but the systems didn’t move in sync,” Singh recalls. “We had five different scanners flagging the same issues and no single view of where the real risks were.”
To address the problem, Singh and his team adopted what he describes as the paved-road model—standardized templates embedded with secure defaults for every layer of the stack. They built infrastructure-as-code modules that automatically enforced least-privilege permissions, secure storage, and network controls. Security checks were encoded as policy-as-code and integrated directly into continuous integration pipelines, turning compliance from a manual process into a real-time system of assurance.
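The idea behind a policy-as-code gate can be sketched in a few lines. The resource schema and rule names below are purely illustrative (they are not from Singh's platform or any specific tool), but the shape is the same: every infrastructure change is evaluated against secure defaults before it can ship, and violations fail the build.

```python
# Minimal policy-as-code sketch: evaluate an infrastructure-as-code
# resource definition against secure defaults before deployment.
# The resource schema and rule wording are hypothetical.

def check_policies(resource: dict) -> list[str]:
    """Return a list of policy violations for one IaC resource."""
    violations = []
    if resource.get("encryption") != "enabled":
        violations.append("storage must have encryption enabled")
    if "*" in resource.get("iam_actions", []):
        violations.append("wildcard IAM actions violate least privilege")
    if resource.get("public_access", False):
        violations.append("public network access is disabled by default")
    return violations

# A CI pipeline would run this on every change and fail the build
# whenever the returned list is non-empty.
resource = {
    "type": "object_store",
    "encryption": "enabled",
    "iam_actions": ["s3:GetObject", "*"],
    "public_access": False,
}
print(check_policies(resource))  # flags the wildcard IAM action
```

In practice this role is filled by dedicated policy engines rather than hand-rolled checks, but the principle is identical: the rules live in version control next to the code they govern.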
The results were transformative. Deployments became faster, not slower, because engineers no longer had to navigate ad-hoc review processes. Every code change automatically attached machine-verified evidence of compliance—creating a system that was not only secure, but audit-ready by default. “We stopped chasing vulnerabilities reactively,” Singh says. “Instead, we built systems that prevented them in the first place.”
What distinguished Singh’s approach from traditional enterprise security models was his emphasis on developer experience. He understood that heavy-handed security slows teams down and creates tension between engineering and compliance. His solution was to embed security into the tools and workflows engineers already used, ensuring that the right guardrails were in place without adding friction.
“The best way to win developer trust,” he explains, “is to make security seamless. When your pipelines fail for the right reasons—with context, transparency, and auto-fix guidance—developers start to see security as an enabler, not an obstacle.”
Under Singh’s leadership, the platform transitioned from a fragmented set of independent systems to a cohesive security ecosystem where policies were enforced automatically at scale. Secrets management was centralized to prevent sprawl, identity access was unified under a single provider, and scanners were streamlined to focus only on what mattered.
“It’s easy to drown in findings,” Singh notes. “But not every issue carries equal weight. We learned that time-to-fix, by severity, was a far better metric than the number of vulnerabilities closed. The question isn’t how many issues you found—it’s how quickly you can resolve the ones that matter most.”
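The metric Singh describes is straightforward to compute. This sketch uses invented sample data; the point is that grouping median time-to-fix by severity surfaces where remediation is actually slow, which a raw count of closed findings hides.

```python
from datetime import datetime
from statistics import median

# Illustrative findings data (invented for this example): each record
# has a severity and the timestamps when it was opened and fixed.
findings = [
    {"severity": "critical", "opened": datetime(2024, 1, 1), "fixed": datetime(2024, 1, 2)},
    {"severity": "critical", "opened": datetime(2024, 1, 5), "fixed": datetime(2024, 1, 9)},
    {"severity": "low",      "opened": datetime(2024, 1, 1), "fixed": datetime(2024, 2, 1)},
]

def time_to_fix_by_severity(findings):
    """Median days from open to fix, grouped by severity level."""
    by_sev = {}
    for f in findings:
        days = (f["fixed"] - f["opened"]).days
        by_sev.setdefault(f["severity"], []).append(days)
    return {sev: median(days) for sev, days in by_sev.items()}

print(time_to_fix_by_severity(findings))
# {'critical': 2.5, 'low': 31}
```

A dashboard built on this number answers Singh's question directly: not how many issues were found, but how quickly the ones that matter most get resolved.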
Through his experience, Singh has seen how security debt can quietly accumulate beneath the surface of rapid innovation. The hidden costs—lost developer hours, delayed releases, and audit rework—often outweigh the perceived gains of moving fast without structure. He argues that the true expense comes not from implementing security controls, but from retrofitting them after the fact.
He also warns against the proliferation of disconnected tools. Too many scanners or dashboards create noise rather than insight, leading to “scanner fatigue.” Instead, Singh advocates for a tightly integrated toolchain—one reliable scanner for each layer of the stack: code, container, and cloud. “Tooling only works when it serves visibility, not vanity,” he says. “The goal is to know your security posture at any moment, not to chase endless reports.”
A similar philosophy underpins his approach to secrets management. In distributed systems, secrets can multiply uncontrollably, leaving organizations exposed to breaches. Singh emphasizes visibility over rotation alone: “You can’t just rotate secrets forever and call it hygiene. You need to know where they live, how they’re used, and who has access. That’s real control.”
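Singh's three questions about secrets, where they live, how they are used, and who has access, amount to an inventory with audit rules on top. The data model below is hypothetical, but it shows how visibility goes beyond rotation: a secret can be freshly rotated and still over-exposed.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical secrets-inventory record: where a secret lives, which
# services consume it, and which principals can read it.
@dataclass
class SecretRecord:
    name: str
    store: str            # the single central store it lives in
    used_by: list[str]    # consuming services
    readers: list[str]    # principals with read access
    last_rotated: date

def audit(secrets: list[SecretRecord], max_age_days: int, today: date) -> list[str]:
    """Flag secrets that are stale or readable by more principals than need them."""
    flags = []
    for s in secrets:
        if (today - s.last_rotated).days > max_age_days:
            flags.append(f"{s.name}: rotation overdue")
        if len(s.readers) > len(s.used_by):
            flags.append(f"{s.name}: more readers than consuming services")
    return flags

inventory = [
    SecretRecord("payments-db", "central-vault", ["billing"],
                 ["billing", "ops-team", "contractor"], date(2023, 1, 1)),
]
print(audit(inventory, max_age_days=90, today=date(2024, 1, 1)))
```

Here the secret fails on both counts: it is a year past rotation, and two principals beyond the one consuming service can read it, exactly the kind of sprawl visibility is meant to catch.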
As generative and agentic AI weave their way into modern SaaS, Singh sees a fundamental shift in the security landscape. "Every new AI feature is another entry point," he warns, but these new doors open onto a landscape where the very logic of the system can be turned against itself. The attacks are less about brute force and more about deception.
He points to threats like data poisoning, where an attacker subtly corrupts the AI's training data. "Imagine an attacker feeding an AI thousands of documents where a specific, malicious code library is labeled as 'safe,'" Singh explains. "The AI learns this vulnerability as a fact. Months later, it confidently recommends that same malicious library to a developer, effectively creating a backdoor in your product."
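One common defense against this scenario, not attributed to Singh but widely practiced, is to never act on an AI-recommended dependency directly: the suggestion must clear an independently vetted allow-list first. The package names below are made up.

```python
# Gate AI-suggested dependencies behind a human-vetted allow-list,
# so a poisoned model cannot introduce a backdoored library on its own.
# Package names here are illustrative.

VETTED_PACKAGES = {"requests", "cryptography", "boto3"}

def gate_recommendation(package: str) -> bool:
    """Accept an AI-suggested package only if humans have vetted it."""
    return package in VETTED_PACKAGES

assert gate_recommendation("cryptography") is True
assert gate_recommendation("totally-safe-crypto-lib") is False  # blocked
```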
Then there are evasion attacks, which trick a fully trained model. An attacker could craft a fraudulent invoice that looks normal to a person but has been minutely altered to fool the AI's fraud-detection model into approving it. In this scenario, the AI isn't compromised; it's simply outsmarted.
This danger multiplies, Singh notes, when these models aren't just responding but autonomously acting.
"An agent that can be tricked into approving a fake invoice is one thing," he cautions. "An agent with the power to pay that invoice, delete the logs, and then email a confirmation to the attacker is a catastrophic failure." His core message is stark: "If your model can read, write, or act, you must prove it's doing so responsibly. AI doesn’t change the rules of security—it amplifies their importance."
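The guardrail Singh implies can be sketched as an explicit permission check on every agent action, so that "read" capability never silently escalates to "act", and every decision leaves an audit trail. The roles and verbs below are illustrative assumptions, not a description of any real product.

```python
# Illustrative agent guardrail: each agent role is granted an explicit
# set of verbs, every action is checked against that set, and both
# allowed and denied actions are logged.

AGENT_PERMISSIONS = {
    "invoice-reviewer": {"read"},           # may inspect invoices
    "invoice-payer":    {"read", "act"},    # may also trigger payment
}

AUDIT_LOG = []

def perform(agent: str, action: str, verb: str) -> bool:
    """Allow an agent action only if its role grants the verb; log everything."""
    allowed = verb in AGENT_PERMISSIONS.get(agent, set())
    AUDIT_LOG.append((agent, action, verb, "allowed" if allowed else "denied"))
    return allowed

assert perform("invoice-reviewer", "open invoice #42", "read")
assert not perform("invoice-reviewer", "pay invoice #42", "act")      # denied
assert not perform("invoice-reviewer", "delete audit log", "write")   # denied
```

The design choice matters: because the log is written on denial as well as approval, a tricked agent cannot both misbehave and erase the evidence, which is precisely the catastrophic combination Singh warns about.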
For Singh, the most significant shift in modern security thinking is cultural. The strongest architectures, he believes, come from teams that view security as part of their craftsmanship, not as a separate checklist. His work focuses as much on education and mentorship as on engineering, helping developers understand why controls exist, not just how to pass them.
“You can’t audit your way into being secure,” he says. “You build security through trust, shared understanding, and smart automation. It’s not about saying no—it’s about showing a better way to yes.”
As a published author in various journals, researching reinforcement learning for distributed AI systems, secure applications, and related areas, Singh continues to influence the industry conversation around scalable, developer-friendly security practices. His contributions underscore a broader shift in the field—one where security is no longer a gate at the end of the process, but the foundation beneath it.
In his view, secure-by-default is not an ideal to aspire to, but a principle to operationalize. “If your architecture can make security effortless,” Singh concludes, “then you’ve done more than protect your systems—you’ve built trust into the very fabric of how your software evolves.”