A 2025 survey of over 1,000 enterprises conducted by S&P Global Market Intelligence revealed that 42% of companies abandoned most of their AI initiatives that year, a dramatic increase from just 17% in 2024. The root cause preventing AI initiatives from reaching production and delivering real business value remains infrastructure. For the global tech ecosystem, where Big Tech companies alone plan to spend over $300 billion on AI infrastructure in 2025, according to Bank of America Global Research, this infrastructure represents both a challenge and an opportunity. The difference between a machine learning model that works in a lab and one that serves millions of users in real time is the infrastructure that deploys, serves and maintains those models at scale. As companies grow to serve users worldwide, understanding how to build infrastructure that works across industries has become critical.
Artem Korkhov has amassed nearly a decade of experience solving precisely this infrastructure challenge across three dramatically different industries. As the architect behind Spotify's Salem ML Serving Platform, which reduced model deployment time from six months to fifteen minutes, he demonstrated that infrastructure built on universal principles can scale across any domain. His work spans from fraud detection systems to AI infrastructure for vaccine design at Inceptive, showing that the same engineering discipline translates across industries, from entertainment to healthcare.
In essence, ML infrastructure is the operational layer that enables teams to deploy, serve and maintain machine learning models without spending weeks or even months creating a custom environment for each model. The absence of such infrastructure is often the reason why engineers who excel at building models struggle once those models have to run in production environments.
"The core challenges remain universal regardless of industry," explains Artem Korkhov. "They should be able to process billions of requests, handle them reliably even during peak load and operate in real time when milliseconds often matter. The infrastructure required to achieve this is fundamentally different from the one required to develop the prototype."
In addition, the infrastructure should provide an environment for efficient development, allowing teams to deploy models and introduce new features without conflicts or service interruptions. The business impact of solving this challenge is equally universal, whether for a fintech company processing thousands of transactions or a biotech startup running AI-driven drug discovery pipelines: organisations that master this infrastructure drastically reduce time-to-market, iterate faster and gain a competitive advantage.
An illustrative example of the tangible benefits of improved infrastructure comes from Artem Korkhov's work at Spotify, where he authored the system design and founded the infrastructure team behind the Salem ML Serving platform. The goal was to create an environment that enables dozens of teams to deploy ML models for various tasks, such as music recommendations or playlist generation. Salem eventually became one of the most traffic-intensive systems at Spotify, serving ML predictions to over 400 million users worldwide, with up to 95% of Spotify's ML products served through it.
"The change became possible thanks to several key architectural decisions," comments Artem Korkhov. "For instance, a unified API allowed to handle diverse use-cases in a standardised way, and integrating data collection directly into the serving layer allowed teams to collect data more efficiently."
One of the most impressive practical consequences of the infrastructure change was the drastic reduction in deployment time, which collapsed from six to nine months to just 15–30 minutes, translating directly into cost savings. Notably, Artem was one of the original authors of the platform's system design and the founder of the team responsible for its development and long-term operation – an assignment typically reserved for a very small number of senior engineers within large technology companies. The platform developed by Artem Korkhov's team was presented at multiple conferences and featured on Spotify's engineering blog as an example of an approach directly applicable to companies worldwide that aim to scale their ML operations. That work did not go unnoticed: it later formed the basis for Artem Korkhov's recognition at the American Business Expo Award 2025 in the Solution of the Year category for Machine Learning.
The versatility of the approach was proven in the following years, as Artem Korkhov applied it across different industries. In 2022, he joined Feedzai, a Portuguese fraud detection provider, as a Staff Software Engineer. The stakes for reliable infrastructure became even higher: where an incorrect music recommendation might annoy a user, errors in fraud detection can cost millions of dollars or enable criminal activity. Here, Artem Korkhov founded the ML infrastructure team and designed a cloud-native Model Serving infrastructure, one of the first systems of its kind at the company. Again, his innovations shortened deployment cycles from days or weeks to minutes, meaning faster response to emerging fraud patterns and more agile protection of customer assets. That end-to-end implementation – combining technical innovation, operational efficiency and team leadership – later led to Artem Korkhov's recognition at the Best Business Awards 2025 for Successful Implementation of New Technologies.
Korkhov's work at Inceptive represents the furthest evolution of ML infrastructure – from consumer convenience to mission-critical healthcare. The biotech startup, founded in 2021 by Jakob Uszkoreit and backed by $100 million in investment, aims to build an AI platform for designing mRNA molecules used in vaccines and therapeutics.
As a Member of Technical Staff, Korkhov designed and developed infrastructure for both biologists and AI engineers, managing projects and products while building systems for laboratory experiments, data engineering and AI-driven RNA design. In particular, he developed a search system for genetic data and architected a system for planning internal laboratory experiments, significantly improving research efficiency.
This application area is particularly significant, as the convergence of AI and biotechnology represents one of the fastest-growing sectors in technology. As pharmaceutical companies explore AI-driven drug discovery and vaccine development, the infrastructure patterns Korkhov has developed become increasingly relevant. Similar approaches would allow major manufacturers to accelerate development cycles while maintaining the safety standards the healthcare industry requires.
"The fundamental lesson is that successful AI deployment requires shifting from thinking about individual models to thinking about platforms that serve many models," reflects Korkhov. "The platform thinking, similar to what we applied at Spotify and Feedzai, is what enables teams to move faster while optimising resources, thus gaining a key competitive advantage."
Across industries, professionals who simultaneously design foundational ML platforms, influence company-wide technical strategy, and contribute to the broader professional community through publications or technical evaluation remain a clear minority within the field.
For the global tech ecosystem, this shift from models to platforms represents a strategic imperative, as scaling infrastructure can become the constraint that limits all other progress. The engineers who master these skills will shape the next decade of AI deployment. The question is not whether infrastructure engineering matters, but whether companies will invest proactively in developing this expertise or continue addressing it as an afterthought.