

If we were to follow only headlines about AI and layoffs, it would be fair to think AI has arrived with a pink slip in one hand and a code generation model in the other.
What we observe within enterprise engineering is far less dramatic and far more consequential. AI does speed up certain tasks, shorten feedback loops, and reveal flaws in process design.
However, it does not assume accountability for production systems.
Releases still require ownership. Architecture still demands judgment. Regulated workflows still need validation. So, if a live system fails, nobody says, “Let’s ask the prompt what went wrong and hold it responsible.”
Real people are still the ones who get called, investigate, fix the issue, and answer for it. That's why the AI story is better understood as a story of reinvention.
While a model can draft code, summarise a requirement, propose test cases, or refactor a module, enterprise systems must navigate the challenges of integrations, business rules, access controls, observability, compliance, resilience, auditability, and the operational risk of getting one small thing wrong at scale. McKinsey’s latest global survey captures this shift well. 78% of respondents say their organisations now use AI in at least one business function, yet only 21% say they have fundamentally redesigned at least some workflows.
This gap reveals the true story: acquiring tools is easy, but transforming how the work gets done is hard. From an industry perspective, the most revealing change is this: mature teams have stopped obsessing over what percentage of code is AI-generated; that’s a vanity metric.
The real questions are different. Are teams delivering faster? Can product managers move from rough use cases to better design input more quickly? Do developers have more leverage? Are testing and provisioning becoming more sophisticated? And just as crucial, is there still a human in the loop before the final push to staging or production?
Enterprise software requires architectural judgment, security assessments, regulatory adherence, and domain expertise. It relies heavily on trust. It operates within complex workflows, legacy systems, and critical business processes, where errors can have serious consequences. Therefore, it’s not just about accelerating tasks like code generation; it’s about ensuring the software is reliable, defensible, and stable in production. This is why human oversight is essential. People must use their judgment to interpret intent, handle uncertainty, and develop solutions that meet real business needs.
Based on my observations, many organisations still do not consistently review all gen-AI outputs, indicating that enterprise discipline remains uneven. The new premium, therefore, is now on judgment, design, and oversight.
This is also why it would be reductive to view AI merely as a labour story. I see it as a story of capability. The focus is shifting away from quickly writing basic code towards system design, domain expertise, model oversight, validation, and cross-disciplinary judgment.
India, interestingly, has both momentum and homework to do.
The momentum is clear: India’s relative penetration of AI skills is 2.5 times the global average across the same occupations; UNESCO states that 40% of Indian respondents already report significant or full AI usage, with 94% expecting AI budgets to increase next year. The challenges are equally obvious. The constraint is no longer access to models or infrastructure; it is institutional capabilities. Building AI-native organisations requires more than just technical talent; it demands governance, training, and fundamental operating-model redesigns. Without these, enterprises risk scaling activity without necessarily scaling impact.
So, is the layoff story still relevant?
McKinsey finds the strongest expectations of headcount reduction in service operations and supply chain, while IT and product development are more likely to see headcount increases than decreases.
Many respondents anticipate minimal change in overall workforce size over the next three years. That matches what many of us are witnessing on the ground: low-context work is being exposed; high-context work is gaining prominence.
Therefore, it’s not about AI replacing talent, but rather about how soon we can reshape roles, workflows, and learning systems to ensure that humans remain in roles where their judgment is critical.
Organisations that succeed in this phase will ask, “How many people can we elevate to a higher level of work before our competitors do?”