Ways to Strike Back Against Data Center Power Inefficiency
Annie Paquette
January 23, 2020
In our recent blog about data center power efficiency, we talked a lot about the new role of artificial intelligence (AI) in the data center. More specifically, we explored how it has been applied to solving the PUE challenge. As mentioned, Google and others have leveraged AI to study performance data from their data center environmental monitoring systems, relative to the power and cooling cycles within their facilities, to produce a profile of their energy usage.
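The PUE challenge mentioned above boils down to a simple ratio: total facility power divided by the power delivered to IT equipment. A minimal sketch of that calculation, with hypothetical load figures chosen for illustration:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.

    A PUE of 1.0 would mean every watt entering the facility reaches
    the computing load; everything above 1.0 is cooling, lighting,
    distribution losses, and other overhead.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: a facility drawing 1,500 kW overall to run a 1,000 kW IT load
print(round(pue(1500, 1000), 2))  # 1.5
```

Every measure discussed in this post works by shrinking the numerator (facility overhead) without touching the denominator (useful IT work).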
Through artificial intelligence, Google's energy profiles were then turned into algorithms that allowed its facility managers to apply well-timed instructions to the building's mechanical and electrical plants. While this is all pretty easy to understand, it glosses over the fact that this hyperscale provider was already hyper-efficient.
Google's AI journey began at a level of efficiency that most of us have to work hard to attain. For those of us whose focus is on getting to a sub-1.5 PUE, here are five thoughts on current practices for designing efficiency into new data center facilities.
- New LED lighting: continued advances in lighting technology have not only driven better visibility in the rack row but also allowed operators to eke out more energy savings.
- Higher operating temperatures: once controversial, the idea of your cold aisles running warmer has become possible through broader operating ranges of IT equipment, and advances in remote monitoring technologies.
- Free air cooling versus CRAC or CRAH: rethinking cooling has shifted the geographic position of data centers around the globe to more northern latitudes, optimizing the number of free cooling days available to the facility. When evaluating sites, keep in mind a change of venue can have a tremendous impact on the bottom line.
- Distributing at higher voltages: Three-phase distribution is more efficient, and implementing it at higher voltages makes it even more so. Manufacturers of IT gear and electrical equipment have spent the past decade making more products available that support higher voltages within the data center space, including Server Technology's 415V power distribution units.
- Workload consolidation through containerization: containers have allowed for computing in an even smaller footprint, reducing the need to provide cooling for large volumes of space, while virtualization has reduced the number of computing devices needed in the first place.
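The higher-voltage point above can be illustrated with simple arithmetic: in a balanced three-phase circuit, deliverable power is P = √3 × V × I × PF, so a 415V feed carries roughly twice the power of a 208V feed at the same current, meaning lower current (and lower I²R conduction loss) for the same load. A sketch, using hypothetical current and power-factor values:

```python
import math

def three_phase_kw(line_voltage_v: float, current_a: float,
                   power_factor: float = 1.0) -> float:
    """Real power of a balanced three-phase feed: P = sqrt(3) * V * I * PF."""
    return math.sqrt(3) * line_voltage_v * current_a * power_factor / 1000.0

# Hypothetical 30 A circuit at unity power factor
for volts in (208, 415):
    print(f"{volts} V: {three_phase_kw(volts, 30):.1f} kW")
# 208 V: 10.8 kW
# 415 V: 21.6 kW
```

The same rack-level power budget can therefore be met with smaller conductors and fewer circuits, which is where the distribution-efficiency gains come from.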
While the promise of better data center efficiency through artificial intelligence is a reality, there are still plenty of measures that can be taken to reduce the PUE of many sub-hyperscale facilities. At Server Technology, we are focused on helping our customers leverage their resources to get our industry to an all-time low. Low PUE, that is.
Reference: https://deepmind.com/blog/deepmind-ai-reduces-google-data-centre-cooling-bill-40/