Earlier this month, social media giant Facebook unveiled details of its data center facility designs and server hardware plans in what it is calling the Open Compute Project, posting technical specifications and CAD drawings of its custom server hardware and data center MEP components.
The process is groundbreaking, and Facebook should be lauded for its openness and generosity. In the Web-scale data center space, secrecy has been the norm. Facebook’s move will have significant ramifications for the entire data center community – especially if it inspires other highly-efficient data center operators to follow suit.
Facebook’s data center operations have faced unprecedented public scrutiny, with environmental organizations protesting the company’s choice of coal-powered electricity. Facebook’s IT operations touch so many of our lives that it is not surprising its data centers are of major public interest.
Jonathan Heiliger, VP Technical Operations at Facebook, asked the data center community to weigh in on the designs in a recent video. “Give us feedback, tell us where we screwed up, tell us where we made a bad decision, and help us make it better.”
In the spirit of that request, Uptime Institute Professional Services engineers offer the following feedback:
Facebook’s cooling method water-wasteful in a desert community
“As an engineer from the water-starved west, this is near and dear to me,” said Keith Klesner, consultant with Uptime Institute Professional Services. “The climate in the area is high desert with average annual precipitation of less than 10 inches. In a region where water is scarce, Facebook has designed the data center with 100% evaporative free cooling. The local municipality sources all of its water from a shallow aquifer, most likely the same one into which Facebook has sunk its wells.”
From Facebook: The direct evaporative system is supplied primarily by an on-site well and secondarily by the normal city water distribution system. Both sources feed into a storage tank. The storage tank provides 48 hours of water in the event that both well water and city water sources are unavailable.
“For a site considering sustainability and overall corporate social responsibility, my grade for the cooling choice is a D,” Klesner said. “The new Bend Broadband data center down the road in Bend, Oregon is a more sustainable model (using indirect air side economization) given the local climatology. This thread on Facebook’s own pages hits on my exact point. The City of Prineville is small and running out of water. Facebook is working with the City, but aquifers do not often recharge at the rate of extraction.”
“Phase 1 of the data center is 30 MW, and Phase 2 is TBD. I think a starting consumption estimate could be 10,000 gallons per MW per day, putting total water consumption at 300,000 gallons per day. That’s about 10% of the total city water, which will rise significantly with Phase 2 of the project. The designer has the exact volume calculations, but the sourcing issue is the heart of the matter. The City of Prineville will run out of water from current sources sometime between 2015 and 2017. Their solution will be to drill to a deeper aquifer, which will likely be subject to overuse in the future,” Klesner said.
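Klesner’s numbers are a back-of-the-envelope estimate, and they can be sketched in a few lines. The 10,000-gallons-per-MW-per-day figure and the 10% share of city supply are his stated assumptions, not measured values:

```python
# Rough check of Klesner's water-use estimate for Phase 1.
IT_LOAD_MW = 30                # Phase 1 capacity cited in the article
GAL_PER_MW_PER_DAY = 10_000    # Klesner's starting consumption assumption

daily_use_gal = IT_LOAD_MW * GAL_PER_MW_PER_DAY
print(f"Estimated consumption: {daily_use_gal:,} gallons/day")

# The article pegs this at roughly 10% of total city water use,
# which implies a city supply on the order of 3 million gallons/day.
implied_city_supply_gal = daily_use_gal / 0.10
print(f"Implied city supply:   {implied_city_supply_gal:,.0f} gallons/day")
```

Any Phase 2 build-out would scale the first figure up while the aquifer-fed city supply stays fixed, which is why the sourcing question dominates the exact consumption number.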
Facebook data centers vulnerable to downtime
“Wildfires, dust and volcano ash happen,” Klesner said. “In the case of extreme outdoor contaminants the data center will shut down.”
From Facebook: We acknowledge that this is a condition that can cause potential shutdown. We already have filtration installed and will run evaporative cooling at full capacity to reduce smoke and particulates in the event of a fire or contamination. Then, depending on intensity, we can utilize time for orderly shutdown, or else run for a prolonged period of time at minimum OA. We have a provision for a closed-loop system that uses indirect cooling.
The high desert east of the Cascade Mountains burns every summer. It is only a matter of time before Facebook has to deal with this issue.
Facebook has said that the Uptime Tier Classification System does not apply to its Prineville data center. But you would think the organization might be less cavalier about potentially disruptive vulnerabilities at a facility that supports its primary line of business.
In fact, the details of the Facebook data center design emphasize just how effective Tiers are at rating data center investment in terms of performance potential. Some of the facilities details reveal a fairly typical cost-focused, rather than performance-minded, data center design.
For example, Facebook’s backup generators are a potential vulnerability. “The document states the engine-generators are Standby rated,” Uptime Institute Professional Services consultant Christopher Brown pointed out. “This will impact the ability of the units to support the facility through long-term power outages, as the Standby rating carries yearly runtime limitations. The engine-generators are also typically relied on for power when performing UPS maintenance. Regular testing of the units and maintenance of other critical equipment may further erode the units’ ability to support a long-term power outage or a long-term failure of a UPS system.”
Lastly, much of the electrical infrastructure does not lend itself to Concurrent Maintainability. “The large bus ducts (1,000 amps and above) are generally constructed with bolt-together sections and thus allow for maintenance of the bus sections. But the smaller bus ducts that deliver power to the servers do not typically use bolt-together sections, instead relying on press-fit connections. These connections are not maintainable and thus create a long-term operational problem,” Brown said.
On the facilities side, inconsistent maintenance opportunities in select systems and the constrained performance potential of the engine-generators yield an overall Tier II rating. These are fundamental constraints that will impact long-term operations. It is important to go to the heart of the Tiers: the business case.
The key takeaways from this analysis:
-Working backwards from the facilities design, Facebook’s IT operations at its Prineville, OR data center may be core to its business, but the company is evidently willing to tolerate downtime.
-While Facebook’s Prineville data center is energy efficient, it has a long way to go to call itself green.
“The term ‘green’ cannot just be about reducing electrical power consumption. It has to account for the natural resource limitations of the local area. Green must be centered on designing data centers that minimize the consumption of all natural resources, not just one,” Brown said. “Any green approach should be designed to minimize energy consumption while not increasing strain on other vital resources. Otherwise we trade one problem for another.”
Continue the dialogue at Uptime Symposium
Facebook’s data center operations team will give the keynote address Wednesday, May 11th, at Uptime Institute Symposium with a presentation, Facebook’s Latest Innovations in Data Center Design, featuring Facebook’s Jay Park, Director, Data Center Design Engineering and Facilities Operations; Thomas Furlong, Director of Site Operations; and Daniel Lee, Data Center Mechanical Engineer.