Vertiv Holdings Co.


Can liquid cooling and power mix safely in the same rack?

AI has redrawn the thermal and electrical map of data center infrastructure. But when liquid and power mix, safety is not assured; it's engineered.

AI acceleration has outpaced Moore's Law, pushing past incremental gains to deliver exponential increases in power, heat, and density, all within a smaller footprint. Rack densities now exceed 140 kilowatts (kW), and modern AI processors blast past 1,000 watts (W) of thermal design power (TDP). These demands are forcing a shift in high-performance computing (HPC) design, where liquid cooling and medium-voltage power must share the same rack.
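To put those figures in perspective, here is a rough sizing sketch, assuming a water-based coolant and a 10°C temperature rise across the loop. These are illustrative values, not figures from any specific Vertiv design; they simply show the order of flow a single 140 kW rack demands.

```python
# Rough sizing sketch: coolant flow needed to remove 140 kW from one rack.
# Assumes a water-based coolant (cp ≈ 4186 J/(kg·K)) and a 10 K temperature
# rise across the loop -- illustrative values, not a vendor specification.

rack_heat_w = 140_000        # rack thermal load, W
cp_j_per_kg_k = 4186         # specific heat of water, J/(kg·K)
delta_t_k = 10               # coolant temperature rise across the rack, K

mass_flow_kg_s = rack_heat_w / (cp_j_per_kg_k * delta_t_k)
volume_flow_l_min = mass_flow_kg_s * 60   # ~1 kg of water ≈ 1 L

print(f"Required flow: {mass_flow_kg_s:.1f} kg/s (~{volume_flow_l_min:.0f} L/min)")
# -> roughly 3.3 kg/s, on the order of 200 L/min for a single rack
```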

Integrating liquid and power at the rack isn't the challenge. Scaling integration safely across many racks and sites is. As AI density climbs, risks grow. So do the consequences of failure. Reliability at scale depends on how well teams manage four critical areas: power safety, fluid quality, leak prevention, and operational readiness.

HPC racks often operate at medium-voltage levels, increasing arc flash risks during maintenance. The more racks you deploy, the more chances for error. Safe operation depends on strict procedures, trained staff, and close coordination between IT and facilities.

More heat requires better fluid management

Fluid quality in high-density cooling systems is an ongoing responsibility, not a one-time check. Even trace contaminants such as particles or microbial growth can reduce heat transfer capacity and damage cold plates over time. Air bubbles in the cooling loop create a frothing effect, disrupting coolant circulation, reducing efficiency, and straining pumps. These are real, costly issues that can grow silently and hit hard without proactive management. Reliability depends on precise installation and daily discipline across the entire cooling loop.
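A minimal sketch of what that ongoing responsibility can look like in practice: periodic fluid samples compared against acceptance limits. The parameter names and thresholds below are hypothetical placeholders for illustration, not Vertiv specifications.

```python
# Illustrative fluid-quality gate: compare a coolant sample against
# acceptance limits. Parameters and thresholds are hypothetical examples.

SAMPLE_LIMITS = {
    "particle_count_per_ml": 100,   # max allowed particulates
    "conductivity_us_cm": 5.0,      # max electrical conductivity, µS/cm
    "bacteria_cfu_per_ml": 10,      # max microbial count
    "dissolved_gas_percent": 1.0,   # max entrained air/gas by volume
}

def check_sample(sample: dict) -> list[str]:
    """Return a list of out-of-spec findings for one coolant sample."""
    findings = []
    for parameter, limit in SAMPLE_LIMITS.items():
        value = sample.get(parameter)
        if value is None:
            findings.append(f"{parameter}: not measured")
        elif value > limit:
            findings.append(f"{parameter}: {value} exceeds limit {limit}")
    return findings

issues = check_sample({"particle_count_per_ml": 180, "conductivity_us_cm": 2.1,
                       "bacteria_cfu_per_ml": 3, "dissolved_gas_percent": 0.4})
print(issues or "Sample within limits")
```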

We're no longer just adding liquid cooling near the racks. We're cooling on top of live chips. A pinhole leak, a loose fitting, or the slightest thermal expansion miscalculation can lead to complete rack shutdowns, lost processing, and expensive downtime. At these densities, small failures carry outsized consequences.

More complexity demands new skills

Conventional data center teams are highly skilled, but the rise of liquid cooling in HPC data centers has introduced maintenance tasks that require expertise not commonly found in traditional IT deployments. Tasks such as pressure management, fluid sampling, leak detection, and coolant replacement necessitate specialized training and processes. The learning curve is significant: teams must adapt to new applications on a large scale while maintaining reliability and uptime. Bridging this readiness gap is essential to safely scaling liquid cooling across live environments.

Commissioning: The first line of safety

Every safe deployment begins before the power switches on. During construction, liquid cooling servicing staff follow an ironclad rule: mechanical systems must undergo thorough evaluation before energizing power.

Commissioning pushes high-density cooling systems to their limits under controlled conditions, revealing faults before real workloads touch them. Systems are run in failure modes and stress-tested in tandem, precisely because exposed flaws are what show whether the design can be hardened and tested again. Every pipe is pressure-tested, every joint inspected, and cooling loops are flushed until the fluid meets exact purity specifications. When something fails, it isn't logged and left; it's fixed and signed off. Teams make real-time adjustments, repair weak points, and retest until the system performs as intended under full operational stress.

Servicing teams begin electrical testing only after verifying the complete integrity of the liquid systems. This strict sequencing, with mechanical commissioning completed before electrical, prevents catastrophic overlaps, such as testing a 700-volt (V) system while a hidden leak goes undetected. Partners that commission power and cooling in parallel leave themselves very little margin for error.
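One way to picture that sequencing, as a sketch only: electrical energization is gated on every mechanical commissioning item passing first. The check names below are hypothetical, not a Vertiv procedure.

```python
# Sketch of strict commissioning sequencing: electrical testing is allowed
# only after every mechanical check has passed. Check names are illustrative.

MECHANICAL_CHECKS = {
    "pipe_pressure_decay_test": True,
    "joint_visual_inspection": True,
    "loop_flush_purity_in_spec": False,   # e.g., flush not yet within spec
    "leak_sensor_dry_run": True,
}

def ready_to_energize(checks: dict[str, bool]) -> bool:
    """Block electrical testing while any mechanical item remains open."""
    failed = [name for name, passed in checks.items() if not passed]
    if failed:
        print("Hold electrical testing; open mechanical items:", ", ".join(failed))
        return False
    print("Mechanical commissioning complete; electrical testing may begin.")
    return True

ready_to_energize(MECHANICAL_CHECKS)
```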

Operational maturity: From design to lifecycle management

Once power and liquid cooling systems are operational, reliability hinges on more than installation; it depends on how well internal customer engineers (CE) have designed high-density cooling systems to respond under pressure.

That starts with built-in safeguards. Every interface between liquid and power is engineered with layered protection: separation zones, monitored leak trays, and automated shutdown protocols. These aren't afterthoughts. They define how the system mitigates and eliminates inherent risks.
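As a sketch of how such layered protection can be expressed in control logic (the sensor names and thresholds are assumptions for illustration, not Vertiv's implementation): a wet leak tray or a sustained pressure drop isolates the affected rack's coolant loop and requests a controlled shutdown before power equipment is put at risk.

```python
# Illustrative interlock logic: a leak-tray sensor or abnormal pressure drop
# isolates the affected rack's coolant supply and requests a controlled
# shutdown. Signal names and thresholds are hypothetical.

def evaluate_rack(leak_tray_wet: bool, loop_pressure_kpa: float,
                  nominal_pressure_kpa: float = 300.0) -> str:
    pressure_drop = nominal_pressure_kpa - loop_pressure_kpa
    if leak_tray_wet or pressure_drop > 50.0:
        # Close the rack's supply/return valves, then shed IT load gracefully.
        return "ISOLATE_COOLANT_AND_SHUTDOWN"
    if pressure_drop > 20.0:
        return "RAISE_ALARM"   # operator investigates before it escalates
    return "NORMAL"

print(evaluate_rack(leak_tray_wet=False, loop_pressure_kpa=296.0))  # NORMAL
print(evaluate_rack(leak_tray_wet=True,  loop_pressure_kpa=298.0))  # ISOLATE...
```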

Engineers sit with clients from the get-go, mapping out commissioning plans, failure protocols, and long-term maintenance. That partnership doesn't end at go-live. It continues through daily support: fluid sampling, operator training, system reviews, and performance tuning as computing demands evolve.

Advanced preventive maintenance enables targeted servicing, with teams focusing on high-impact items on top of the usual checklist. In addition, real-time monitoring detects potential anomalies, allowing teams and consulting engineers to intervene immediately, preventing failures and enhancing reliability.
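A minimal sketch of what anomaly detection on coolant telemetry can mean in practice; the chosen metric (rack outlet temperature), window size, and limits are assumptions for illustration only.

```python
# Simple rolling-baseline anomaly check on a coolant telemetry stream.
# The metric (rack outlet temperature) and thresholds are illustrative.

from statistics import mean, stdev

def is_anomalous(history: list[float], latest: float, z_limit: float = 3.0) -> bool:
    """Flag 'latest' if it sits more than z_limit standard deviations
    away from the recent baseline."""
    if len(history) < 10:
        return False                      # not enough data for a baseline yet
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest != baseline
    return abs(latest - baseline) / spread > z_limit

outlet_temps_c = [45.1, 45.3, 44.9, 45.0, 45.2, 45.1, 44.8, 45.0, 45.2, 45.1]
print(is_anomalous(outlet_temps_c, 45.2))  # False: within the normal band
print(is_anomalous(outlet_temps_c, 48.9))  # True: worth an immediate look
```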

Design for safety, operate with confidence

There are risks, from fluid contamination and leaks to arc flash. But these aren't reasons to avoid liquid cooling; they're reasons to approach the solution rigorously. With the right design, commissioning protocols, and operational discipline, liquid cooling is both safe and ideal for maintaining performance and uptime at scale.

Explore Vertiv liquid cooling services today.
