In many parts of the world, including Asia, we are experiencing some of the hottest temperatures on record. Intense heat waves have swept through China, prompting the government to issue red alerts advising people to limit their time outdoors. In Singapore, temperatures neared the island’s record high in April this year. On a warming planet, the heat is set to get worse: some climate simulations predict that 600 million to a billion people in Asia will be living in areas with deadly heat waves by 2050.
As the world heats up, the challenge of keeping data centers cool becomes more complex, costly, and power-intensive. Data centers are already known as heavy consumers of electricity – globally, they consumed 200-250 TWh in 2020, around 1% of global final electricity demand. As data volumes increase, this need will only grow. The region’s sustainability rules requiring data centers to be more energy efficient further compound the cooling challenge.
Staying cool is not a new challenge for data storage and processing players. Any data center manager will know the need to balance efficient power consumption and consistent temperatures while meeting the needs of a business. While there are many high-end technologies that can help cool components, these can be difficult to implement or retrofit in existing data centers. Fortunately, there are pragmatic and sustainable strategies to explore as part of a holistic solution.
Keep fresh air circulating
It goes without saying that good air conditioning should be a mainstay of every data center. This is especially important for data centers operating in tropical climates like Southeast Asia, where the urban heat island effect is driving temperatures to new heights. In fact, cooling accounts for 35-40% of total data center energy consumption in Southeast Asia.
Ensuring that heating, ventilation, and air conditioning (HVAC) systems have a stable power supply is a basic requirement. For business continuity and emergency planning, backup generators are a necessary precaution for cooling technologies as well as for compute and storage resources. Business continuity and disaster recovery plans should already include provisions for power outages and backup power.
If temperatures rise, it pays to use more durable and reliable hardware. Flash storage, for example, generally tolerates temperature increases far better than mechanical disk solutions, so data remains secure and performance stays consistent even at high temperatures.
Power Reduction Suggestions
Here are three strategies IT organizations should consider. When combined, they can help reduce data center power and cooling requirements:
More efficient hardware:
It’s obvious: every piece of hardware consumes energy and generates heat. Businesses need hardware that can do more for them in a smaller footprint. In Singapore, this is even more critical: the government requires all new data centers to achieve a power usage effectiveness (PUE) of 1.3 or better. Increasingly, IT organizations weigh energy efficiency when selecting data center equipment. In data storage and processing, for example, key benchmarks now include capacity per watt and performance per watt. Since storage makes up a large share of the hardware in data centers, upgrading to more efficient systems can significantly reduce the power and cooling footprint of the entire facility.
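To make the 1.3 threshold concrete, here is a minimal sketch of how PUE is computed: total facility energy divided by the energy consumed by IT equipment alone, with 1.0 as the theoretical ideal. The facility figures below are invented sample numbers, not data from any real site.

```python
# Illustrative sketch: power usage effectiveness (PUE).
# PUE = total facility energy / IT equipment energy.
# A PUE of 1.3 means 30% overhead (cooling, lighting, power conversion).

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Return PUE; values closer to 1.0 mean less non-IT overhead."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical facility: 1,300 MWh total, of which 1,000 MWh went to IT gear.
sample = pue(1_300_000, 1_000_000)
print(f"PUE = {sample:.2f}")  # PUE = 1.30 -- exactly at the threshold
```

Note that lowering either cooling overhead (the numerator) or raising useful work per watt of IT load both improve the ratio, which is why efficient storage hardware helps a facility’s PUE as well as its raw power bill.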
Disaggregate compute and storage:
Many vendors tout the efficiency of combining compute and storage in a hyperconverged infrastructure (HCI). That’s true, but the efficiency is mainly about rapid deployment and reducing the number of teams involved; it does not necessarily mean energy efficiency. In fact, direct-attached storage and hyperconverged systems waste a lot of energy. For one thing, compute and storage needs rarely grow at the same rate. Some organizations end up over-provisioning compute just to meet their growing storage needs; the reverse also happens, and in both cases a lot of energy is wasted. If compute and storage are separated, it is easier to reduce the total number of infrastructure components and, with them, power and cooling requirements. Additionally, direct-attached and hyperconverged solutions tend to create infrastructure silos: unused capacity in one cluster is very difficult to make available to other clusters, leading to even more over-provisioning and wasted resources.
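The over-provisioning effect can be sketched with a toy model. All node capacities, demands, and wattages below are hypothetical numbers chosen for illustration; real hardware will differ.

```python
# Hypothetical sketch: coupled (HCI) vs. disaggregated scaling.
# In HCI, compute and storage ship in fixed ratios per node, so the
# scarcer resource dictates how many nodes you must power.
import math

def hci_node_count(compute_need, storage_need,
                   compute_per_node=10, storage_per_node=10):
    """One node type: buy enough nodes to satisfy BOTH demands."""
    return max(math.ceil(compute_need / compute_per_node),
               math.ceil(storage_need / storage_per_node))

def disaggregated_counts(compute_need, storage_need,
                         compute_per_node=10, storage_per_node=10):
    """Separate tiers: each scales independently with its own demand."""
    return (math.ceil(compute_need / compute_per_node),
            math.ceil(storage_need / storage_per_node))

# A storage-heavy workload: 40 compute units needed, 100 storage units.
hci = hci_node_count(40, 100)                                # 10 nodes
compute_nodes, storage_nodes = disaggregated_counts(40, 100)  # 4 and 10

# Assumed (invented) power draw: 500 W per HCI node,
# 400 W per compute node, 300 W per storage shelf.
hci_watts = hci * 500                                        # 5000 W
disagg_watts = compute_nodes * 400 + storage_nodes * 300     # 4600 W
print(hci_watts, disagg_watts)
```

Under these made-up figures the HCI deployment powers 60 compute units it never uses; the disaggregated design draws less because each tier is sized to its own demand.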
Provision just-in-time:
The legacy approach of provisioning for the next 3-5 years is no longer fit for purpose: organizations end up managing far more infrastructure than they immediately need. Instead, modern on-demand consumption models and automated deployment tools allow enterprises to easily scale their data center infrastructure over time. Infrastructure is provisioned just-in-time rather than just-in-case, eliminating the need to power and cool components that won’t be needed for months or even years.
Most of the time, data center cooling depends on reliable air conditioning and solid contingency planning. But in every facility, each fraction of a degree of temperature increase adds a fraction more stress on the equipment. Cooling systems relieve that stress on racks and stacks, but no data center manager wants to put those systems under additional strain – which is exactly what rising temperatures have done.
With global warming expected to reach 1.5 degrees Celsius by the early 2030s, organizations must reduce operating costs, simplify and cool their data centers, and cut energy consumption, all at the same time. For that to happen, it’s high time to make big strides in reducing equipment volumes – and the heat they generate – in the first place.