The world’s data centres are using more and more power as we build larger and larger facilities. The question is: will the growth in data centres, with their ever-increasing power demands, soon outstrip the global capacity to supply their electricity? And at what point do we reach the crux, the point where we can no longer continue in this direction and have to change?
The Data Centre Power Crux will become a focus for the industry as we continue to build bigger facilities, consuming more power, to feed the global appetite for data.
Our hunger for all aspects of data and the need for data centres is growing exponentially.
This will be further fuelled as those in the world who aren’t yet connected come online. In western society we use data every day: gaming, streaming, the finance sector and industry are all driving this development with new data streams, services and infrastructure.
Streaming, gaming, cloud computing and IoT have caused both access to data and the sheer quantity of it to explode. Our dependence on this data has never been deeper.
In America alone, data centre electricity consumption is projected to increase to roughly 140 billion kilowatt-hours annually by 2020, the equivalent annual output of 50 power plants, costing businesses $13 billion annually in electricity bills and emitting nearly 100 million metric tons of carbon pollution per year.
Image source: Nature: International Journal of Science, Sept 2018
The IT factories of the technology era
Data centres have become the IT factories of the current technology era. From the early days of dedicated computer and tape storage facilities, to the development of global IT and internet with associated connected services, there has been ongoing growth in the worldwide demand for data.
Early data centres focused on maintaining the correct environment for the IT equipment, but as systems became more critical, the reliability and resilience of the power and cooling infrastructure grew in importance.
In the past 10 years, data centre power density has gone from 1kW/m² to 4kW/m², server rack power densities now reach 10kW each and are still growing, and the facilities themselves are getting bigger as we install more and more racks. Ten years ago, a 10MW data centre was considered large; now, with the emergence of cloud computing, data centres are measured in hundreds of MW.
The drive has become to supply MWs of IT capacity at increasing density. The higher power densities and total power capacities have led to a need to focus on efficiency of operation, with most emphasis on the data centre PUE (power usage effectiveness): the ratio of the total power delivered to the data centre to the ‘useful’ power delivered to the servers.
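As a rough illustration of the metric, PUE is total facility power divided by IT power, with 1.0 as the theoretical ideal. The figures below are hypothetical, chosen only to show the arithmetic:

```python
# Hypothetical facility figures, for illustration only.
it_load_kw = 8000.0          # 'useful' power delivered to servers, storage and network
total_facility_kw = 10400.0  # IT load plus cooling, UPS losses, lighting, etc.

# PUE = total facility power / IT equipment power (1.0 is the theoretical ideal)
pue = total_facility_kw / it_load_kw
print(f"PUE = {pue:.2f}")  # → PUE = 1.30
```

A facility drawing 10.4MW to run 8MW of IT load is therefore spending 2.4MW, almost a quarter of its intake, on overhead rather than computing.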
The Power Crux
This increasing data demand and power consumption is putting pressure on power systems. If this growth is extrapolated, we will reach a point where the pressure on the global ability to supply and distribute power is no longer sustainable in terms of the cost of utility connections, the cost of power, and the impact on resources and on the planet.
Current growth is unsustainable
Current data centre design is focused on bigger and bigger facilities to support the growth in cloud services, data and global connection. With finite resources, growing population and increasing understanding of our impact on the planet, this growth is unsustainable.
We are faced with increasing numbers of data users and of processes that consume data.
Access to the global pool of data, processing and storage is accepted by the United Nations as a basic human right, and bringing the internet to the rest of the unconnected population is a driving focus. Coupled with growing data traffic, storage, processes and services, the current global standard is that our lives, the facilities we use and the services we rely on all demand access to data, and grind to a halt without it.
For some time, we have been focused on improvements in PUE, which has been driven from 2.0+ a few years ago to as low as 1.15 today in the right circumstances. This reduction has been achieved by designing energy-efficient cooling and electrical systems: separating hot and cold air streams through aisle containment, using variations of free cooling, allowing data halls to run hotter, and specifying energy-efficient equipment such as UPSs and transformers.
All of this has reduced PUE - saving millions of dollars in electricity costs for data centre operators - but we are still seeing upward pressure on power consumption due to the scale of modern data centres. Reducing PUE is no longer enough to counteract the growth in data centre power consumption.
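To see why PUE improvements are worth millions at modern scale, and yet cannot offset scale itself, consider a hypothetical facility. The IT load and electricity price below are illustrative assumptions, not figures from this article:

```python
# Illustrative assumptions: 50 MW of IT load, $0.10 per kWh electricity.
IT_LOAD_MW = 50.0
PRICE_PER_KWH = 0.10
HOURS_PER_YEAR = 8760

def annual_cost(pue):
    """Annual electricity bill in dollars for the whole facility at a given PUE."""
    total_kw = IT_LOAD_MW * pue * 1000          # total facility draw in kW
    return total_kw * HOURS_PER_YEAR * PRICE_PER_KWH

saving = annual_cost(2.0) - annual_cost(1.15)
print(f"Annual saving from PUE 2.0 -> 1.15: ${saving:,.0f}")
# → Annual saving from PUE 2.0 -> 1.15: $37,230,000
```

The saving is real and large, but note that it is proportional to the IT load: double the IT load and the bill at PUE 1.15 still doubles, which is exactly the scale pressure described above.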
This is just not sustainable in terms of infrastructure cost, environmental impact and the drain on the global resource pool.
Cutting-edge data centres have focused on power efficiency and cost effectiveness. We are driving designs to be more cost effective in terms of installed capital as well as ROI by getting the most out of our data centres, but we still do not have the right mindset. In the main, we sell data centre space based on power usage, which provides little or no incentive to make things better.
On the one hand, the future has to have a renewable focus, and the industry is certainly rising to that challenge. In 2018, Apple and Google both reported that they are meeting their electricity demands through renewable sources. Facebook committed to being 100% renewable by 2020. AWS is building wind and solar farms just to offset its data centre power usage. Google this year announced the largest corporate purchase of renewables, bringing its total renewable energy portfolio to 5,500MW.
But renewable energy sources aren’t enough. We need to shift our focus to the servers themselves, to the computer technology that’s using so much power. Rather than providing more power to meet the demands of the servers, we need to develop servers that use less power.
We need to look at the computer technology being used in the servers. We have been using the same silicon chip-based technology for 50 or 60 years. Gordon Moore, co-founder of Intel, predicted in 1965 that the performance of computers would more or less double every couple of years, and he was correct. But the efficiencies that have been gained are coming to an end. Moore’s Law has all but run its course; we have hit the physical limits of transistor scaling.
What are the next technologies?
IBM, Google and Intel are in a race to develop the quantum computer. Quantum computing can process exponentially more data than a classical computer through the quantum state known as superposition, while at the same time using significantly less energy.
Quantum computers use cryogenic refrigerators to operate at extremely low temperatures. At these temperatures, superconductivity takes place: electricity is conducted with virtually no resistance, and therefore virtually no power consumption or heat emission.
Other technologies are also being developed and considered as replacements for classical computers, such as photonic computing, where light is used instead of electricity, and neuromorphic computing, which takes a completely different and more energy-efficient approach to building and operating a computer.
While these technologies are possibly decades away from being commercially and practically able to replace classical computers, we need to continue to develop them to offset the power-hungry path we are on.
If we buy into the science of global warming, that alone should be reason enough. We have a small planet with limited resources. If we do nothing, the chance to decide our own future will slip away and we may be left with a situation we did not plan for.