A few weeks ago I had the pleasure of visiting Bologna, Italy, and seeing one of the fastest high performance computing (HPC) clusters in Europe, but curiously it wasn't the fastest cluster that was the most impressive bit of kit on show.

The Cineca computing centre in Bologna is where Italy's universities send their HPC kit to be housed. The site is much like any multi-tenant datacenter you'll find in London: a nondescript warehouse with loads of power and cooling equipment, and inside you'll find technicians and some researchers.

So far so mundane, however Eurotech decided this was the time to take the wraps off its Aurora Tigon cluster featuring NVIDIA's Tesla K20 GPGPU accelerator. The firm partnered with the Eurora research project to deploy a half-rack Aurora Tigon cluster, but the real news was that, thanks to NVIDIA's accelerators, the cluster is set to top the next Green 500 list.

The Green 500 is the sister list to the prestigious Top 500, which ranks HPC clusters on their Linpack performance; the Green 500 ranks them on performance per watt instead. The Top 500 has long been used by companies and countries to portray their technological prowess through raw compute power, but that list looks set to go the same way as the space race as firms are far more keen to tout their energy efficiency.

These days such is the lure of the Green 500 that Cineca's representatives barely mentioned the 10 IBM racks that house Fermi, a BlueGene Q cluster that is currently ninth in the Top 500 list. For Cineca and its customers, which in this case are universities, the physical limitations of power delivery into the premises mean accelerators are not so much an exotic option as a necessity.

Cheque please

And like many things in Europe these days, even HPC clusters have to be cost effective. As Giampietro Tecchiolli, CTO of Eurotech, was extolling the virtues of accelerator-based HPC, almost going as far as to say NVIDIA's Tesla and Intel's Xeon Phi were saviours of the industry by offering more performance for less power, Carlo Cavazzoni of SCAI Cineca cut in, adding that it also meant cheaper HPC clusters.

Ultimately Cray, Eurotech, IBM, Supermicro and the many other HPC vendors know that to win business they have to lower the lifetime cost of their clusters. Multi-tenant datacenters are not the clean, shiny and, in the case of Google, fashionable computer warehouses that some would have you believe; rather they have limited power and cooling resources that often cannot be expanded, and that poses a new engineering problem: getting more GFLOPS from the same power budget.
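To put that power budget in concrete terms, here is a minimal sketch of the arithmetic behind the Green 500-style performance-per-watt comparison. The node figures are purely illustrative assumptions, not Eurora's measured numbers:

```python
# Illustrative sketch: comparing compute efficiency under a fixed power budget.
# The node figures below are made-up examples, not Eurora's measured numbers.

def gflops_per_watt(linpack_gflops, power_watts):
    """Green 500-style efficiency metric: sustained GFLOPS per watt."""
    return linpack_gflops / power_watts

def gflops_in_budget(efficiency, budget_watts):
    """How much sustained compute a fixed power budget buys at a given efficiency."""
    return efficiency * budget_watts

# Hypothetical nodes: CPU-only versus CPU plus GPU accelerator.
cpu_only = gflops_per_watt(linpack_gflops=300, power_watts=400)      # 0.75 GFLOPS/W
accelerated = gflops_per_watt(linpack_gflops=1200, power_watts=700)  # ~1.7 GFLOPS/W

budget = 100_000  # watts available to the data room
print(f"CPU-only cluster:    {gflops_in_budget(cpu_only, budget):,.0f} GFLOPS")
print(f"Accelerated cluster: {gflops_in_budget(accelerated, budget):,.0f} GFLOPS")
```

With the same power feed into the building, the more efficient configuration simply delivers more compute, which is why the accelerator pitch lands so well with sites that cannot add capacity.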

Cool runnings

The Aurora cluster uses water cooling, which in and of itself is nothing new; Cray used Freon with its legendary Cray-1 computer. However, what Eurotech has managed to do is pass standard water with an anti-fungal additive through the servers, while using grey water to carry the heat away from the data room. Grey water can be collected from rivers or rainfall, making it very cheap.

All of this points to two important changes when it comes to cooling large clusters. Datacenter operators and those that deploy large clusters should be waking up to the fact that air cooling is not going to cut it in the future.

Eurotech isn't the only company talking about liquid cooling; Iceotope recently hit the headlines, and I've also seen systems from Boston, among others, employing variations on this technology. The problem with air is that, volume for volume, it takes far less energy to raise its temperature than water or some exotic coolant, so a given volume of air carries away far less heat. Plus, pumping liquid through pipes is considerably easier than pushing vast volumes of air through datacenter aisles.
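As a rough sketch of why that matters, here is the flow rate you would need to move the same heat load with air versus water. The property values are textbook approximations and the heat load is an assumed example, not a measurement from Cineca:

```python
# Rough sketch: coolant flow needed to remove a given heat load with air versus water.
# Property values are textbook approximations; the heat load is an assumed example.

HEAT_LOAD_KW = 100   # heat to remove from a row of racks, in kilowatts (assumed)
DELTA_T = 10         # allowed coolant temperature rise, in kelvin

# Approximate volumetric heat capacities, in joules per litre per kelvin.
AIR_J_PER_L_K = 1.2      # ~1.2 kJ/m^3/K at room conditions
WATER_J_PER_L_K = 4180   # ~4.18 MJ/m^3/K

def litres_per_second(heat_kw, delta_t, heat_capacity_j_per_l_k):
    """Coolant flow (L/s) needed to absorb heat_kw with a delta_t temperature rise."""
    return heat_kw * 1000 / (delta_t * heat_capacity_j_per_l_k)

air_flow = litres_per_second(HEAT_LOAD_KW, DELTA_T, AIR_J_PER_L_K)
water_flow = litres_per_second(HEAT_LOAD_KW, DELTA_T, WATER_J_PER_L_K)

print(f"Air:   {air_flow:,.0f} L/s")    # thousands of litres of air per second
print(f"Water: {water_flow:,.1f} L/s")  # a few litres of water per second
```

The roughly 3,500-fold difference in volumetric heat capacity is why a couple of modest pipes can do the work of an entire hot aisle's worth of fans.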

The cost of cooling is often cited as one of the biggest challenges in the datacenter, and this generally refers to direct costs such as the price of electricity. Of course electricity prices are a major concern, but indirect costs such as maintenance and even data room floor space should be considered too, and here liquid cooling offers considerable benefits.

Air cooling requires complex air filtration, conditioning and humidity controls, whereas pumping liquid around a closed system is not only far simpler but also cheaper to maintain. After all, central heating, albeit working in reverse, is deployed in tens of millions of homes around the world, and it doesn't require the homeowner to spend tens of thousands every year.

Eurotech proceeded to show off the shiny pipework and pumps installed underneath the rack. That should be a big selling point for datacenter operators, with water-cooling pipework placed under floor tiles rather than on the data floor, where it would eat up precious space for racks.

Cineca is an impressive facility and one that very nicely presents the challenges of cooling high performance servers. Eurotech's Aurora cluster, with its water cooling, showed the future of server cooling, while IBM's BlueGene, despite being faster, seemed something of a relic in these cost-conscious times.