AI-Grade Data Centers for India’s GPU Future: Exclusive Interview with C R Srinivasan, CEO, Digital Connexion
The data center industry is grappling with unprecedented challenges as AI workloads, GPU-intensive operations, and sustainability demands converge at scale. Traditional facilities built for general IT loads can no longer keep up with the skyrocketing power densities, cooling needs, and compliance pressures shaping India’s digital economy.
In this exclusive interview, C R Srinivasan, CEO of Digital Connexion, shares how the company is pioneering AI-grade data centers purpose-built for India’s rising enterprise AI adoption, dense GPU clusters, and the sustainability imperative driving the next era of infrastructure.
How is Digital Connexion rethinking data center architecture to meet the specific needs of AI workloads and GPU-intensive operations in India?
AI adoption among Indian enterprises and the rise of GPU-intensive workloads have transformed how we design data centers; traditional scaling and retrofitting cannot match the growing need for computing power. Facilities built for general IT loads can no longer keep pace with the power and cooling demands of AI training clusters. In the last few years alone, average rack power densities have more than doubled – from about 8 kilowatts (kW) per rack to 17kW – and projections reach 30kW or more by 2027.
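To put those density figures in perspective, here is a minimal back-of-the-envelope sketch in Python; the 200-rack hall is a hypothetical assumption, while the per-rack densities are the ones cited above.

```python
# Hypothetical 200-rack hall: how the cited density trend translates
# into the IT load the facility must deliver and cool.
RACKS = 200  # assumed hall size, for illustration only

for era, kw_per_rack in [("a few years ago", 8), ("today", 17), ("2027 projection", 30)]:
    it_load_mw = RACKS * kw_per_rack / 1000  # total IT load in megawatts
    print(f"{era:>15}: {kw_per_rack:>2} kW/rack -> {it_load_mw:.1f} MW of IT load")
```

At the projected 2027 density, the same hall draws nearly four times the power it did a few years ago, before any cooling overhead is counted.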
The changing needs of our customers require designing data centers with high-density deployments and specialized infrastructure tailored for AI. For example, our Chennai AI data center adopts a modular design that lets customers scale from small to multi-megawatt footprints, with individual racks supporting up to 150kW of IT load to accommodate dense GPU clusters.
We integrate cutting-edge cooling technologies to maintain efficiency and reliability at scale. Our next-gen cooling systems allow us to dissipate the heat generated by GPU clusters more effectively than legacy air cooling. Our world-class operational processes enable us to collaborate closely with our customers, ensuring their AI infrastructure is optimized for both performance and energy efficiency.
Equally important is a data center’s electrical backbone. High-capacity power distribution with redundant feeds, onsite substations, and robust backup systems ensures that racks drawing tens of kilowatts are continuously powered. In parallel, we recognize that AI workloads are data-hungry and latency-sensitive, so our architecture emphasizes fast connectivity within and beyond the facility.
With sustainability becoming a key differentiator, how is Digital Connexion balancing performance, scalability, and green operations across your current and upcoming facilities?
Sustainability is a core principle in data center design. Performance and green operations are complementary goals that can be approached with innovation. MAA10, our Chennai data center, has achieved IGBC Green Data Center Platinum certification, reflecting design and operational practices that maximize energy and water efficiency. We use state-of-the-art cooling architecture with intelligent controls, so cooling systems run optimally rather than at full tilt 24/7. We have installed on-site solar photovoltaic panels to directly offset a portion of the facility’s power needs. These measures allow us to scale up capacity without a linear increase in carbon footprint.
One key strategy is deploying advanced cooling and power technologies that inherently save resources while boosting performance. We utilize frictionless chillers at the Chennai facility, which cut energy consumption compared to conventional chillers.
The frictionless chillers, combined with hot-aisle containment and in-row cooling near the server racks, ensure that even high-density GPU zones remain efficient to cool. At the same time, we’ve implemented a closed-loop chilled water system that drastically reduces water usage, avoiding the excessive evaporation losses of traditional cooling towers.
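The water stakes of that choice are easy to quantify from first principles. A rough sketch, assuming a hypothetical 5 MW heat load and that an open evaporative tower rejects all heat by evaporation (real towers also lose water to drift and blowdown, so this is a lower bound):

```python
# How much water an open evaporative cooling tower boils off, versus
# essentially zero for a sealed closed-loop system.
LATENT_HEAT_MJ_PER_KG = 2.4  # approx. latent heat of vaporization of water
KWH_TO_MJ = 3.6

def tower_water_litres(heat_kwh: float) -> float:
    """Litres (~kg) of water evaporated if all heat leaves via evaporation."""
    return heat_kwh * KWH_TO_MJ / LATENT_HEAT_MJ_PER_KG

annual_heat_kwh = 5_000 * 24 * 365  # hypothetical 5 MW rejected year-round
print(f"~{tower_water_litres(annual_heat_kwh) / 1e6:.0f} million litres evaporated per year")
```

A closed loop rejects the same heat through sealed heat exchangers, evaporating essentially none of it.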
In the context of India’s rising enterprise AI adoption, what defines an “AI-grade” data center and how does that differ from traditional models?
AI adoption in the Indian market has organically evolved over time. Traditional data centers are optimized for lower-density CPU servers and would be quickly overwhelmed by high-density loads, both electrically and thermally. As enterprise AI adoption continues to rise in India, the industry is drawing a clear line between conventional data centers and AI-grade data centers: facilities purpose-built to handle power-hungry, computationally intensive workloads.
The differences start with power and cooling density: a standard enterprise data center might provide 5-10kW per rack for typical servers, whereas an AI-focused site like ours is engineered for massive 30-150kW per rack loads to accommodate GPU pods and machine learning accelerators.
In an AI-grade model, liquid cooling loops, cold-plate or immersion-cooled racks, and rear-door heat exchangers replace conventional air conditioning. Digital Connexion’s facilities deploy technologies – such as direct-to-chip liquid cooling and hot aisle containment – to maintain safe temperature levels without a loss of efficiency.
Another hallmark of an AI-grade data center is the network and connectivity infrastructure. AI workloads involve moving petabytes (PB) of data in and out for model training, requiring high bandwidth and low latency that go beyond traditional data center needs.
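A quick calculation shows why bandwidth becomes a first-order constraint; the link speeds below are illustrative assumptions, not a description of any specific deployment.

```python
# Time to move one petabyte of training data at various link speeds.
PB_IN_BITS = 8 * 10**15  # 1 petabyte (decimal) expressed in bits

for gbps in (10, 100, 400):
    hours = PB_IN_BITS / (gbps * 10**9) / 3600
    print(f"1 PB over {gbps:>3} Gb/s link ~= {hours:,.1f} hours")
```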
What are the biggest infrastructure challenges Indian data centers face today when preparing for GPU-as-a-service, quantum-readiness, and dense compute loads?
Preparing for emerging demands like GPU-as-a-service (GPUaaS), quantum computing, and ultra-dense compute loads presents a host of infrastructure challenges. First and foremost is the sheer power and cooling challenge of these technologies. Providing GPUs on demand requires installing hundreds or even thousands of them in a single facility, placing strain on the data center’s power infrastructure.
Data centers must secure sufficient power capacity and redundancy from the grid, as well as enough backup generation to sustain these workloads. For example, at our Chennai campus, we are expanding available power capacity to over 100 MW to meet the growing demand from GPU clusters and other compute-intensive workloads.
On the cooling side, dense GPU deployments create hotspots. In response, operators are adopting liquid cooling techniques on a broad scale – from direct-to-chip coolers to full immersion tanks – because air cooling alone is inefficient beyond a certain density.
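The physics behind that density ceiling is straightforward. A minimal sketch using the standard sensible-heat relation Q = ṁ·cp·ΔT, with an assumed 12°C air temperature rise across the rack:

```python
# Airflow needed to remove a rack's heat with air alone, from
# Q = m_dot * c_p * delta_T (sensible heat carried by the airstream).
AIR_DENSITY = 1.2     # kg/m^3, near sea level
AIR_CP = 1005.0       # J/(kg*K), specific heat of air
M3S_TO_CFM = 2118.88  # cubic metres per second -> cubic feet per minute

def required_cfm(rack_kw: float, delta_t_c: float = 12.0) -> float:
    """Airflow (CFM) to carry rack_kw of heat at a delta_t_c temperature rise."""
    m3_per_s = rack_kw * 1000 / (AIR_DENSITY * AIR_CP * delta_t_c)
    return m3_per_s * M3S_TO_CFM

for kw in (8, 30, 150):
    print(f"{kw:>3} kW rack -> ~{required_cfm(kw):,.0f} CFM of airflow")
```

Pushing roughly 22,000 CFM through a single 150kW rack is impractical, which is why liquid cooling takes over at these densities.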
GPUaaS isn’t just about hosting hardware; customers expect cloud-like flexibility, with high-speed provisioning, software-defined networking, and vast, fast storage to feed the GPUs with data.
How can operators drive better power usage effectiveness (PUE) while addressing growing pressure around water consumption and power shortages?
Achieving low PUE is a top priority, and in India, it plays a key part in addressing water scarcity and grid power constraints. One effective approach is modernizing cooling techniques to reduce the power overhead without increasing water use. Traditional data centers rely on water-intensive cooling towers to lower PUE, effectively exchanging electricity savings for high water consumption.
Today, we’re shifting that paradigm. Technologies like direct-to-chip liquid cooling and closed-loop cooling systems can dramatically reduce power draw and water use in tandem. By bringing cooling directly to the heat source – the server chips – and using sealed loops, we eliminate the need to evaporate millions of liters of water in cooling towers while also using far less electricity for air conditioning.
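For reference, PUE is simply total facility energy divided by the energy delivered to IT equipment, with 1.0 as the theoretical ideal. A minimal sketch with hypothetical annual figures:

```python
# PUE = total facility energy / IT equipment energy (1.0 is the ideal).
def pue(it_kwh: float, cooling_kwh: float, other_overhead_kwh: float) -> float:
    """Lower overhead (cooling, power losses, lighting) means lower PUE."""
    return (it_kwh + cooling_kwh + other_overhead_kwh) / it_kwh

# Same hypothetical 10 GWh/year of IT load, before and after modernizing:
print(f"legacy hall:     PUE {pue(10_000_000, 6_000_000, 2_000_000):.2f}")
print(f"modernized hall: PUE {pue(10_000_000, 2_500_000, 1_000_000):.2f}")
```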
Our Chennai facility has been a frontrunner in adopting such measures, implementing state-of-the-art cooling technologies that reduce the water consumed for cooling.
Beyond cooling, power management strategies play a vital role in efficiency and reliability under constrained conditions. In many parts of India, data center operators face the dual pressure of rising demand and potential grid instability or power caps, especially during peak load periods. To drive better PUE, there must be a focus on energy-efficient power delivery, for example, using high-efficiency UPS systems and transformers that minimize losses, so more of the input power goes to IT equipment rather than being wasted as heat. This directly lowers the PUE by trimming overhead.
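The effect of power-delivery losses on PUE can be sketched the same way, assuming a hypothetical 10 MW IT load and 3 MW of fixed non-power overhead:

```python
# Every watt lost in UPS conversion is facility overhead that never
# reaches IT equipment, so trimming those losses lowers PUE directly.
def pue_from_ups(it_mw: float, ups_efficiency: float, overhead_mw: float) -> float:
    """Total draw = IT load / UPS efficiency, plus fixed non-power overhead."""
    return (it_mw / ups_efficiency + overhead_mw) / it_mw

for eff in (0.92, 0.97):
    print(f"UPS at {eff:.0%} efficiency -> PUE {pue_from_ups(10, eff, 3):.3f}")
```

Raising UPS efficiency from 92% to 97% trims PUE by roughly 0.06 in this example, before any cooling improvement is counted.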
Finally, water conservation measures ensure we remain good stewards of local resources while improving efficiency. At our Chennai facility, we implement rainwater harvesting and on-site water treatment to recycle greywater as cooling tower make-up for the limited cooling systems that do use water. Additionally, achieving and sustaining a low PUE is not just about infrastructure; it also depends on skilled and proactive data center operations that ensure systems are continuously optimized for energy performance.
This comprehensive approach not only yields a low PUE but also addresses the growing pressure on environmental resources. The result is a data center that runs cooler, cheaper, and greener, even as it delivers the performance our customers expect.
With increasing focus on data localization and AI regulation, how are data center operators ensuring long-term compliance and infrastructure sovereignty?
As India sharpens its focus on data localization and crafts AI regulations, data center operators need to step up to ensure long-term compliance and uphold infrastructure sovereignty. Under recent laws like the Digital Personal Data Protection Act (DPDPA 2023) and sectoral guidelines, many types of data – from sensitive personal information to payments and financial records – must now be stored and processed within India. Operators must therefore run facilities in demonstrable compliance with these requirements and make that compliance clear to clients.
When it comes to emerging AI regulations and infrastructure sovereignty, the conversation extends beyond just where data resides; it also encompasses who manages the computing power and the algorithms. We anticipate frameworks that will demand transparency in AI processing, possible restrictions on exporting training data or AI models, and requirements for auditability of AI systems. Data center operators are proactively gearing up for these changes. One aspect is ensuring that India has the necessary AI compute infrastructure domestically so that businesses aren’t forced to rely on overseas data centers for AI development.
In essence, as guardians of the country’s critical data infrastructure, data center operators need to double down on compliance and sovereignty by design. Since we provide the physical stronghold where India’s data and AI innovation can reside safely under national laws, we must ensure that in the digital era, India’s “data wealth” remains within its own borders and under its control.