12/16/2025 | Press release
Standardizing cooling strategies for the age of AI
Data centers are the backbone of essential computing services, from securely processing and storing data in the cloud to providing infrastructure for artificial intelligence. Servers and other critical hardware generate heat while processing this data, and without proper, efficient cooling, this waste heat can compromise a data center's performance. Liquid cooling transfers heat away from components like CPUs and GPUs more efficiently than air cooling, and as data centers adapt to the evolving demands of AI, new liquid cooling technologies with even better efficiency will be needed to support this growing infrastructure.
Berkeley Lab has partnered with industry to develop specifications for liquid cooling of data centers down to the chip level, and it continues to collaborate with industry to address the dramatically greater power and cooling demands of AI-focused systems.
In collaboration with the Energy Efficiency High-Performance Computing Working Group, led by Lawrence Livermore National Laboratory, Berkeley Lab developed specifications for liquid-cooled server racks and cabinets, facilitating broader adoption of efficient liquid cooling solutions. This included industry-standard specifications for the transfer fluid, covering system materials and operation. The liquid cooling transfer-fluid specification was further refined with the Open Compute Project and issued as a guideline.
Reducing waste with best practices and online assessment tools
Technology and expertise developed by Berkeley Lab are helping public and private data center operators invest saved resources into other needs that can drive competitiveness. Through the Center of Expertise for Data Center Energy (CoE), the Lab has developed a suite of diagnostic tools, best practices, and technical guidance that enables more effective and competitive data center operations.
These tools allow operators to pinpoint inefficiencies, test retrofits, and model performance impacts before investing in upgrades. Targeted improvements, such as refined airflow management, upgraded computer room air handler controls, and thermosyphon-based cooling, achieved an estimated 8% reduction in cooling energy use and saved over 1 million gallons of water annually. These system-level enhancements also improved fault tolerance and operational flexibility, supporting the scalability required by next-generation high-performance computing.
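Percentage reductions like the 8% figure above translate directly into avoided kilowatt-hours. As an illustration only (not part of the CoE toolset), the minimal sketch below computes power usage effectiveness (PUE), a standard data center efficiency metric, and the annual cooling energy saved by a fractional reduction; all input numbers are hypothetical.

```python
# Minimal sketch of standard data center efficiency arithmetic.
# Not CoE software; all numeric inputs below are hypothetical.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy (ideal value is 1.0)."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

def cooling_savings_kwh(cooling_kwh: float, reduction_fraction: float) -> float:
    """Annual cooling energy avoided by a fractional reduction (0.08 for 8%)."""
    return cooling_kwh * reduction_fraction

baseline_pue = pue(total_facility_kwh=15_000_000, it_equipment_kwh=10_000_000)
saved = cooling_savings_kwh(cooling_kwh=4_000_000, reduction_fraction=0.08)
print(f"PUE: {baseline_pue:.2f}, cooling energy saved: {saved:,.0f} kWh/yr")
# prints: PUE: 1.50, cooling energy saved: 320,000 kWh/yr
```

Tracking PUE before and after a retrofit is a common way to verify that a cooling improvement actually reduced overhead rather than shifting load elsewhere.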
Optimizing data center energy consumption with simulations
Building simulation tools developed by Berkeley Lab are helping solve key problems in data centers.
Meta uses the Lab's Modelica Buildings Library, a free open-source library with dynamic simulation models for building energy management systems, to optimize energy and water use. Carrier, the world's largest HVAC manufacturer, uses the library to operate co-located data centers and to develop cooling systems and aftermarket services for hyperscale data centers.
Berkeley Lab researchers are also partnering with a team led by the University of Maryland to develop a data center modeling tool, MOSTCOOL (Multi-Objective Simulation Tool for Cooling Optimization and Operational Longevity), under the ARPA-E COOLERCHIPS program. MOSTCOOL is a simulation toolset for optimizing the design of data centers, including power and thermal management systems, to lower cooling energy demand and cost while maintaining high reliability and availability. The Berkeley Lab team is responsible for developing and integrating energy modeling capability, covering cooling systems and waste heat recovery, using the EnergyPlus engine.
Planning for the future of data center efficiency
Berkeley Lab has supported both the data center and electric industries in planning for a future where computing power has a stable foundation to grow. Best practices and tools from the Lab have been adopted in facilities ranging from small server rooms to hyperscale cloud data centers. To share this knowledge, the Data Center Energy Practitioner Training Program educates the workforce needed to implement upgrades. Consulting with industry, the program regularly updates its curriculum to reflect the state of the art in key areas such as IT equipment, air management, cooling systems, and electrical systems.
In October, close to 150 attendees from industry participated in a listening session jointly hosted by the Lab and partner BP Castrol at the 2025 OCP Global Summit. The event solicited input on prioritizing the most challenging technical barriers, such as powering and cooling high-density compute equipment and microchips, and on industry-wide practices and trends in the selection of IT equipment across data center types and workloads. This feedback will be used to calibrate industry-wide models of U.S. data center energy use.
###
Lawrence Berkeley National Laboratory (Berkeley Lab) is committed to groundbreaking research focused on discovery science and solutions for abundant and reliable energy supplies. The lab's expertise spans materials, chemistry, physics, biology, earth and environmental science, mathematics, and computing. Researchers from around the world rely on the lab's world-class scientific facilities for their own pioneering research. Founded in 1931 on the belief that the biggest problems are best addressed by teams, Berkeley Lab and its scientists have been recognized with 17 Nobel Prizes. Berkeley Lab is a multiprogram national laboratory managed by the University of California for the U.S. Department of Energy's Office of Science.
DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the U.S., and is working to address some of the most pressing challenges of our time. For more information, please visit energy.gov/science.