Global Liquid Cooling Information - Mar 22nd

Panasonic’s Tecnair launches two CDUs for liquid cooling

Cooling firm Tecnair has launched a new range of coolant distribution units (CDUs) for liquid-cooled data center environments.

The Panasonic-owned firm this month unveiled its new range of CDUs, which it said were engineered to meet the escalating thermal demands of AI and high-performance computing.

Available in 400kW and 800kW capacities, the range provides modular scalability, BMS connectivity with real-time monitoring, and easy access for maintenance, Tecnair said. The units can be integrated with Panasonic’s ECOi-W free-cooling chillers using the low-GWP refrigerant R1234ze.

Tecnair said the new CDUs operate with a PUE of 1.02.

Launched in 1994, Tecnair offers air cooling solutions to data centers and healthcare environments. Its Techline range includes perimeter and in-row air cooling units, as well as a fan wall offering and outdoor chillers.

Italy-based Tecnair was acquired by Panasonic in 2023. In its own announcement, Panasonic said it was developing CDUs with capacities of 1.2MW and above, with order acceptance scheduled to begin in March 2026.

DCX launches three CDUs, offering 600kW-2.6MW cooling capacities

DCX Liquid Cooling Systems has launched three new coolant distribution units (CDUs).

The company this week announced three new CDU systems built on its proprietary DCX ECDU (Enterprise Coolant Distribution Unit) platform, with cooling capacities ranging from 600kW to 2.6MW. DCX said the three units were designed to cater to a variety of environments, from enterprise data halls and colocation sites to cloud infrastructure and hyperscale facilities.

The Enterprise ECDU 1380/2600 V1 offers 1.38MW to 2.6MW of cooling capacity. The Mission Critical ECDU 1380/2600 V1H2 includes redundant heat exchangers and additional high-availability features. The Entry ECDU 600/1200 V1 model, dedicated to affordable and rapid liquid cooling deployments, delivers up to 1.2MW of cooling capacity in a compact platform. All three are designed for 45°C (113°F) cooling operations.

Poland-based DCX is a manufacturer of liquid cooling solutions for both direct liquid cooling and immersion cooling systems. The company provides in-rack, in-row, and hall-wide CDUs, cold plates, immersion cooling enclosures, dry coolers, modular containerized data centers, and enclosures for crypto mining.

The firm launched an 8MW CDU in January.

NVIDIA GTC 2026

March 16–19, 2026  |  San Jose, CA and Virtual

NVIDIA GTC is the premier global AI conference, taking place this week throughout San Jose. Join thousands of developers, researchers, and business leaders, live and online, to explore the AI breakthroughs shaping every industry—from physical AI and AI factories to agentic AI and inference.

NVIDIA GTC 2026 Keynote

https://www.nvidia.com/gtc/keynote/


Nvidia Vera CPU enters full production, pitched at agentic AI workloads

Nvidia's new data center CPU, Vera, has entered full production and is expected to be available in the second half of this year.

The company has traditionally paired its Arm CPUs with a GPU, but has shifted to also offer the hardware standalone due to the rise of CPU-intensive agentic AI workloads.

Nvidia first announced plans for a standalone CPU business earlier this year, signing a deal with Meta for deliveries of both the current-gen Grace and the upcoming Vera. CoreWeave plans to be the first cloud provider to support Vera as a standalone offering.

Other cloud providers set to offer Vera include Oracle, Alibaba, ByteDance, Crusoe, Lambda, Nebius, Nscale, Together.AI, and Vultr.

National laboratories also plan to deploy the CPU, including Leibniz Supercomputing Centre, Los Alamos National Laboratory, Lawrence Berkeley National Laboratory's National Energy Research Scientific Computing Center, and the Texas Advanced Computing Center (TACC).

Server makers set to support the chip include Aivres, ASRock Rack, ASUS, Compal, Cisco, Dell, Foxconn, Gigabyte, HPE, Hyve, Inventec, Lenovo, MiTAC, MSI, Pegatron, Quanta Cloud Technology (QCT), Supermicro, Wistron, and Wiwynn.

The chip contains 88 custom Nvidia-designed Olympus cores. Each core can run two tasks using Nvidia Spatial Multithreading, which physically partitions core resources. Vera features LPDDR5X memory and delivers up to 1.2 TB/s of bandwidth.

It will be available in both single- and dual-socket CPU server configurations.


Nvidia announces Groq 3 LPU AI inference chip, plans 256-LPU rack

Nvidia has announced the much-anticipated Language Processing Unit (LPU) chip, which came out of its semi-acquisition of chip designer Groq.

The Nvidia Groq 3 LPU will be available in liquid-cooled LPX racks, which feature 256 LPUs with 128GB of on-chip SRAM and 640 TB/s of scale-up bandwidth. The rack is focused on low-latency AI inference workloads.

On Christmas Eve, Nvidia spent $20bn to license Groq's tech and hired its leadership team, including CEO Jonathan Ross, as well as other staff.

The company aims to combine both chip approaches "to get to that multi-agent future," said Ian Buck, Nvidia's VP of hyperscale and HPC.

"These two processors will combine the extreme flops of GPUs and the bandwidth of LPUs into one. Let's contrast them: A GPU with its 288 gigabytes of memory, compared to only 500 megabytes of stacked SRAM. The LPU is only one 500th of the amount of capacity per chip, but the bandwidth is exceptional, [with] 22 terabytes to 150 terabytes per second bandwidth."

The LPX rack will be available in the second half of this year, "coinciding with Vera Rubin."


Nvidia unveils Vera Rubin DSX AI Factory Reference Design and Omniverse DSX digital twin

Nvidia has launched the Vera Rubin DSX AI Factory reference design, a guide to building data centers that support its latest CPU-GPU racks and Spectrum-X Ethernet networking and storage.

The Nvidia Omniverse DSX Blueprint, which is a digital twin for large-scale design, buildout, and operations, is also generally available today and supports the new reference design.

The Vera Rubin DSX design was developed in partnership with Cadence, Dassault Systèmes, Eaton, Jacobs, Nscale, Phaidra, Procore, PTC, Schneider Electric, Siemens, Switch, Trane Technologies, and Vertiv.

Rubin DSX includes a number of software libraries. DSX Max-Q is focused on maximizing computing output and token performance per watt on Nvidia systems within a fixed power budget. DSX Flex connects AI factories to power-grid services, enabling them to dynamically adjust power use and orchestrate demand with hybrid onsite generation to save energy and maintain grid stability.

DSX Exchange enables scalable and secure integration of compute, network, energy, power, and cooling plant signals between IT, operational technology, and operations agents.

DSX Sim models and validates AI factories as high-fidelity digital twins, using the DSX Air platform to model GPUs, networking, and partner infrastructure, while DSX SimReady connects detailed 3D geometry, logistics, and system behavior.


Meta unveils next four generations of its MTIA chip

Meta has announced the next four generations of its Meta Training and Inference Accelerator (MTIA) chip.

Dubbed the MTIA 300, 400, 450, and 500, the new chips have either already been deployed or are scheduled for deployment in the next 18 months, Meta said, and will primarily be used to support generative AI inferencing workloads.

Each new chip will offer improvements in compute, memory bandwidth, and efficiency, Meta said. Furthermore, in a blog post detailing the chips, the company said that “given the rapid pace of AI innovation,” it has built the capability to ship a new chip roughly every six months.

The MTIA 400 offers six petaflops of FP8 compute performance, with a TDP of 1,200W. Its HBM bandwidth is 9.2Tbps, a 51 percent increase over its predecessor, and its HBM capacity totals 288GB.

For scale-up and scale-out networking, the MTIA 400 provides 1.2Tbps and 100Gbps, respectively. A rack with 72 MTIA 400 devices, connected via a switched backplane, forms a single scale-up domain and can support both air-assisted liquid cooling and liquid cooling technologies already in deployment in data centers, Meta said.

The MTIA 400 has already been tested in Meta’s labs, and the company is “on track” to deploy the chip in its data centers. The MTIA 450 and MTIA 500 are scheduled for mass deployment in early 2027, with both also providing 1.2Tbps of scale-up and 100Gbps scale-out networking.

At the system level, Meta said the MTIA 400, 450, and 500 all utilize the same chassis, rack, and network infrastructure, meaning each new chip generation can be dropped into existing data centers with ease.


Switch integrates Nvidia Omniverse DSX Blueprint into EVO AI data center design

Data center operator Switch has integrated the Nvidia Omniverse DSX Blueprint into its EVO AI Factory architecture and LDC EVO operating system.

The company announced EVO last year; the new data center design, Switch claims, can support up to 2MW per rack.

LDC EVO is Switch's answer to DCIM, with the company claiming it allows every system in the data center to be automated in "near real-time," alongside an updated 3D digital twin of the facility.

Last year, Nvidia announced an Omniverse Blueprint for AI factory digital twins, allowing customers to aggregate detailed 3D and simulation data representing all aspects of the data center into a single, unified model, enabling them to design and simulate high-density hardware.

In September, Nvidia announced that it would offer the Omniverse Blueprint, a digital twin that could scale to gigawatt-class AI data centers.

The system is aimed at speeding up data center deployments, while making sure they support Nvidia's DGX designs.


Musk's xAI gets go-ahead for 41 natural gas turbines in Mississippi to power Colossus data centers

xAI has received permission to install 41 natural gas turbines at a site in Mississippi that will generate 1.2GW to power its data centers in the area.

Elon Musk’s AI company, maker of the Grok chatbot, was granted a Clean Air Act permit by the Mississippi Department of Environmental Quality (MDEQ) at a meeting on Tuesday.

The turbines, which will be installed at a former Duke Energy power plant site in Southaven, will be used to power xAI’s Colossus 2 data center, located a few hundred meters from Tulane Road, across the state boundary in the Whitehaven district of Memphis. They will also supply power to the firm’s upcoming Colossus 3 data center in Mississippi.

xAI purchased the Southaven land in July 2025.

In a statement posted on X, the Musk-owned social media platform, which is also part of xAI, the company said it was “thrilled that MDEQ approved our permanent construction permit unlocking 1.2GW of self-generating power capacity.”

xAI uses its data centers in Tennessee to power its Grok AI chatbot. The company came to Memphis in 2024, launching its Colossus supercomputer in a new data center housed in a former Electrolux factory in Memphis’s Boxtown district.

It purchased the site for Colossus 2 last March, and the data center came online in January. Despite Musk's claim that it offered 1GW of capacity at launch, satellite imagery taken in January reportedly showed it had cooling equipment installed capable of managing 350MW.


Microsoft breaks ground on data center in Bergheim, Germany

Microsoft has broken ground on a data center project in Bergheim, Germany.

WDR reports that groundworks have begun, with excavators removing loose soil in preparation for the data center development.

A symbolic groundbreaking ceremony was held on March 12, with North Rhine-Westphalia's Minister of Economic Affairs, Mona Neubaur, in attendance. According to Neubaur, it is hoped that the project will act as a "starting signal" for the Rhenish mining area and encourage other companies to establish a presence there in the future.

Microsoft first revealed plans to build in Bergheim in November 2024. At the time, it was said that the company would develop a 270,650 sq ft (25,144 sqm) data center on a 20-hectare plot of land at the INKA: terra nova industrial park. Construction was originally hoped to commence in 2024, with a launch date of 2026.

The data center will use a closed-loop cooling system, and electricity will be generated exclusively from green wind power.

The Bergheim data center project is part of Microsoft's plans to invest €3.2 billion ($3.44bn) to double its AI infrastructure and cloud computing capacity in Germany. The funding would go towards the expansion of Microsoft's cloud region in Frankfurt, as well as planned infrastructure in North Rhine-Westphalia.

In addition to Bergheim, Microsoft simultaneously announced plans to develop an 18-hectare site at the new BEB61 industrial estate in Bedburg. This was followed almost a year later, in September 2025, by plans to develop a third site in Elsdorf.


Hive expanding Buzz cloud footprint in Canada

Hive Digital Technologies this week announced a fourfold expansion of its liquid-cooled data center campus in Canada through its wholly owned subsidiary, Buzz HPC.

The company has expanded from its existing 4MW in Manitoba to 16.6MW of critical IT load across two Canadian provinces, in partnership with Bell Canada.

The expansion adds a new colocation facility in British Columbia, providing an immediate 5MW of capacity with an option to scale an additional 7.6MW. The company said the first tranche of capacity will support more than 2,000 GPUs, while the additional 7.6MW could support up to 3,000 GPUs.

In Manitoba, Buzz has deployed 504 next-generation AI-optimized GPUs, consuming approximately 1MW. The remaining 3MW will support approximately 1,500 additional GPUs.

Executive chairman Frank Holmes added that Hive owns and operates other data centers in Canada that are “prime for conversion” to serve hyperscaler colocation and government or military contracts.

Hive was founded in 2017 and has operations across Canada, Sweden, and Paraguay. Its data centers serve both Bitcoin and HPC clients. The company is, however, pivoting to largely focus on its HPC customers.

Aydin Kilic, president and CEO of Hive, added: "This expansion gives us committed liquid-cooled data center capacity across two provinces, and a clear path to over 6,000 next-generation AI-optimized GPUs in Canada. As demand for AI compute ramps, we can move quickly to deploy additional clusters of AI-optimized GPUs online to realize our ARR targets for 2026, while scaling EBITDA in a capex light strategy.”

