In modern software teams, complexity has outgrown what any single engineer can track alone. Systems stretch across vehicles, stores and phones, with services speaking over networks rather than function calls. Releases are more frequent, incidents are more costly and expectations around safety and reliability keep rising. In that environment, coding is no longer just about translating requirements into syntax; it is about making sound decisions under pressure with incomplete information, while keeping systems understandable over years.
Aniruddha Maru, a Vice President of Infrastructure at Standard AI and an IEEE Panel reviewer, builds platforms for exactly that kind of pressure. His operating principle is straightforward: keep humans in charge of architecture and safety, and treat AI assisted coding as a skill every engineer can learn, not a shortcut that replaces judgment.
That operating principle matters because AI assisted tools have already moved from experiment to expectation. Recent survey data shows that 76% of developers are using or planning to use AI tools in their development process, and broader adoption studies in 2025 report that around 84% of developers now use or plan to use AI coding assistants, with many relying on them daily. Tools that used to be side projects are now standard parts of the toolchain, and hiring managers increasingly assume that engineers can work productively alongside these assistants.
As usage grows, expectations shift with it. Teams now want engineers who can explain intent clearly, provide the right context to their tools and evaluate suggestions critically against design and security constraints. That means prompts, context windows and tests are not side notes; they are part of professional practice. For engineers working on large, distributed systems, AI assisted coding becomes an extra pair of eyes on boilerplate and cross-cutting concerns, while humans still decide how services interact and where risk sits.
Maru has spent years in that kind of environment. At a connected vehicle company earlier in his career, he led the transition from a monolithic backend to a distributed system composed of roughly twenty microservices that communicated over a message bus. He wrote the framework that split authentication and authorization so that a reverse proxy at the edge authenticated each request once, and downstream services could focus on business logic while enforcing fine-grained access. That separation reduced cognitive load for developers, made deployments safer and clarified responsibilities for each service. “AI earns its keep when engineers know their system boundaries well enough to accept only what fits and ignore the rest,” says Maru.
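The split described above can be sketched in a few lines. In this hypothetical example (the header names, scopes and function names are illustrative, not from Maru's actual framework), an edge reverse proxy has already authenticated the caller and forwards a trusted identity header; the downstream microservice only parses that identity and enforces fine-grained authorization:

```python
# Sketch of split authentication/authorization behind a reverse proxy.
# Assumption: the proxy strips any client-supplied copies of these headers
# and sets them only after successfully authenticating the request.
from dataclasses import dataclass

@dataclass(frozen=True)
class Identity:
    user_id: str
    scopes: frozenset

def parse_identity(headers: dict) -> Identity:
    """Trust these headers only because the edge proxy sets them post-auth."""
    return Identity(
        user_id=headers["X-Authenticated-User"],
        scopes=frozenset(headers.get("X-Scopes", "").split()),
    )

def can_read_trip(identity: Identity, trip_owner_id: str) -> bool:
    """Fine-grained check: drivers may read only their own trips."""
    return "trips:read" in identity.scopes and identity.user_id == trip_owner_id

headers = {"X-Authenticated-User": "driver-42", "X-Scopes": "trips:read"}
ident = parse_identity(headers)
print(can_read_trip(ident, "driver-42"))  # allowed: own trip
print(can_read_trip(ident, "driver-99"))  # denied: someone else's trip
```

Because authentication happens exactly once at the edge, each service's authorization logic stays small and auditable, which is the cognitive-load reduction the paragraph describes.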
As AI assisted coding becomes routine, the question shifts from whether to use it to where it delivers compounding benefits, especially in learning and debugging. Recent productivity research shows that around 68% of developers report saving more than ten hours each week when they work with generative AI, often by turning search, boilerplate and documentation questions into faster conversations. Those reclaimed hours can move into code quality, refactoring and new features instead of routine lookups.
At the same time, the same study notes that roughly half of developers still lose more than ten hours each week to inefficiencies such as searching for information across fragmented tools and knowledge bases. The gap between time saved and time lost suggests that simply adding AI to the editor is not enough; organizations also need to make history, policies and prior fixes easier to surface. When engineers can bring logs, documentation and examples into the same conversation, AI assisted coding becomes a way to shorten feedback loops rather than just write code faster.
Maru approached that problem directly when he built a public developer platform for a connected car product. As principal server engineer, he designed REST APIs that exposed driving, location and diagnostic data in a way external developers could understand, then implemented a new OAuth2 provider so drivers could grant granular access to their data through “login with” style flows. He created a live documentation site that let developers try the APIs directly in the browser while reading. He also wrote a webhook server to deliver trip events, built a real-time message bus for high-frequency updates such as live location, and added an applications portal where partners could register apps, configure scopes and view debug logs. To make the platform concrete, he also developed ten sample applications that covered use cases from automatic mileage expensing to live location sharing. “Good engineers still read the docs and watch the logs; AI just helps them reach the right example faster,” notes Maru, an editorial board member at SARC Journals.
Those time savings only matter if systems stay safe. As AI assisted coding tools become more capable, they also introduce new ways for vulnerable code to slip into production. Security research on AI generated code has found that approximately 45% of analyzed samples contained security flaws, even when the snippets looked production-ready. At the same time, broader security analysis indicates that as many as 90% of vulnerabilities sit in the application layer, which is exactly where AI tools now propose the most changes. When teams rely on assistants without combining them with strong reviews, they risk amplifying weaknesses in the most exposed parts of their systems.
This risk changes how engineers need to interact with AI tools. Instead of accepting suggestions at face value, they must prompt with explicit security requirements, examine how data flows across boundaries and apply the same scrutiny they would use for human-authored code. That includes checking input validation, access control, logging and error handling, especially in services that touch external traffic, sensitive data or safety-critical flows.
Maru’s own systems underline why that caution matters. At the connected vehicle company, he designed a split authentication and authorization framework where a hardened reverse proxy authenticated every incoming request before it entered the rest of the system. After that gateway, microservices enforced authorization so that drivers could see only their own trips and diagnostic information. On top of this, he built a crash alert service that monitored high-frequency telemetry, identified likely collisions in real time and coordinated with emergency call centers and messaging providers to notify both responders and loved ones when help was needed. In that environment, any mistake in request handling or data access could have direct safety and privacy consequences. “If AI writes code for you, treat every line as untrusted until it survives your security instincts,” observes Maru.
Security sensitive systems also show how quickly data volume becomes the real constraint. Connected vehicles are a clear example. Research on telematics indicates that a modern connected car can generate nearly 25 GB of data per hour from more than one hundred signals covering location, speed, and component health. At the same time, market analysis of the global connected car segment notes that nearly 75% of passenger vehicles sold in 2024 shipped with embedded cellular connectivity, up from the prior year. That combination means fleets are now rolling sensor networks, streaming continuous telemetry into backends that must remain responsive as data grows.
In that context, AI assisted coding is as much about data as it is about syntax. Engineers can use assistants to explore schema options, experiment with partitioning strategies and refine query patterns, but they still need a firm understanding of performance, retention and cost. Poor storage and indexing decisions can make even correct code slow or brittle under load, and assistants cannot see operational consequences that have not happened yet.
Maru had to solve those concerns directly when he designed a time-series data system on top of PostgreSQL for the same connected vehicle platform. Off-the-shelf databases struggled with the mix of high write throughput, low-latency queries and evolving schemas required to store and serve real-time drive data from more than two million miles of trips every day. He built a framework that used JSONB to store flexible event payloads while keeping key attributes indexable, tuned the database for time-series workloads and partitioned tables so recent data stayed fast while older records rolled off to archive storage. That foundation supported features ranging from safe-driving scores to usage-based insurance pricing. “The more data a system carries, the clearer its shape needs to be; AI can help explore options, but people choose the design,” notes Maru.
As connected systems, safety expectations and data volumes grow together, the shape of engineering careers becomes clearer. Forecasts for the next decade show that adoption of AI coding tools will continue to climb; one recent projection expects 75% of enterprise software engineers to be using AI code assistants by 2028, up from less than ten percent in early 2023. Market research on AI-focused development tooling estimates that the global AI code tools segment could reach about $26.03 billion in annual revenue by 2030, growing at roughly 27.1% per year from 2024 through 2030. In parallel, workforce studies suggest that the global population of software developers may rise toward 45 million by 2030, creating tens of millions of roles where AI assisted coding will be part of the default toolset.
Maru sees those numbers not as a threat, but as a planning horizon. Alongside his infrastructure work he has reviewed research papers for the Indam 2026 conference hosted by a leading management institute, giving him a view into how AI and data-driven thinking are spreading across industries. For him, the priority is to help engineers build strong instincts around architecture, security and data first, then layer AI assisted coding on top so that they can move faster without surrendering control. The organizations that do that will not only ship features quickly, they will cultivate engineers who can adapt as tools change.
“Engineers who treat AI as part of their craft stay relevant; those who outsource judgment to it fall behind,” says Maru.