Posted by mstansberry | Posted in Data center design, Data center energy efficiency, Data center media, Uptime Institute Symposium | Posted on 06-05-2011
In this series of Q&As with SearchDataCenter.com’s Steve Bigelow, Uptime Institute weighs in on the top issues facing data center owners and operators today. These interviews also preview next week’s Uptime Institute Symposium content.
Data center construction alternatives: New data center facilities are incredibly expensive to design, build, manage and maintain, but a new build isn’t necessarily the answer in every circumstance. While some organizations can certainly justify the investment in new facilities, there are many others that want options and alternatives that can give them the facilities they need to run their business without breaking the bank or taking years to deploy.
Efficient energy use and energy security in the data center: Efficient energy use has become a central issue in data center design and management. Energy costs are always increasing, and even the availability of power can be a gating issue for data center construction or facility expansion.
Posted by mstansberry | Posted in Data center design, Data center energy efficiency | Posted on 18-04-2011
Earlier this month, social media giant Facebook unveiled details of its data center facilities designs and server hardware plans in what it is calling the Open Compute Project, posting technical specifications and CAD drawings of its custom server hardware and data center MEP components.
The process is groundbreaking, and Facebook should be lauded for its openness and generosity. In the Web-scale data center space, secrecy has been the norm. Facebook’s move will have significant ramifications for the entire data center community – especially if it inspires other highly-efficient data center operators to follow suit.
Facebook’s data center operations have faced unprecedented public scrutiny, with environmental organizations protesting the company’s choice of coal-powered electricity. Facebook’s IT operations touch so many of our lives that it is not surprising its data centers are of major public interest.
Jonathan Heiliger, VP Technical Operations at Facebook, asked the data center community to weigh in on the designs in a recent video. “Give us feedback, tell us where we screwed up, tell us where we made a bad decision, and help us make it better.”
In the spirit of that request, Uptime Institute Professional Services engineers offer the following feedback:
Facebook’s cooling method water-wasteful in a desert community
“As an engineer from the water-starved west, this is near and dear to me,” said Keith Klesner, consultant with Uptime Institute Professional Services. “The climate in the area is a high desert with average annual precipitation of less than 10 inches. In a region where water is scarce, Facebook has designed the data center with 100% evaporative free cooling. The local municipality sources all of its water from a shallow aquifer, most likely the same one into which Facebook has sunk its wells.”
From Facebook: The direct evaporative system is supplied primarily by an on-site well and secondarily by the normal city water distribution system. Both sources feed into a storage tank. The storage tank provides 48 hours of water in the event well water and city water sources are unavailable.
“For a site considering sustainability and overall corporate social responsibility, my grade for the cooling choice is a D,” Klesner said. “The new Bend Broadband data center down the road in Bend, Oregon is a more sustainable model (using indirect air side economization) given the local climatology. This thread on Facebook’s own pages hits on my exact point. The City of Prineville is small and running out of water. Facebook is working with the City, but aquifers do not often recharge at the rate of extraction.”
“Phase 1 of the data center is 30 MW and Phase 2 is TBD. I think a starting consumption estimate could be 10,000 gallons per MW per day, putting total water consumption at 300,000 gallons per day. That’s about 10% of the total city water supply, and it will rise significantly with Phase 2 of the project. The designer has the exact volume calculations, but the sourcing issue is the heart of the matter. The City of Prineville will run out of water from current sources sometime between 2015 and 2017. Their solution will be to drill into a deeper aquifer, which will likely be subject to overuse in the future,” Klesner said.
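Klesner’s back-of-envelope estimate is easy to reproduce. The sketch below uses his stated assumptions (10,000 gallons per MW per day is his starting figure, not a measured value) and Facebook’s stated 48-hour tank reserve:

```python
# Reproducing Klesner's water-consumption estimate for Phase 1.
# Both input figures are assumptions from the post, not measurements.

PHASE1_CAPACITY_MW = 30            # stated Phase 1 capacity
GALLONS_PER_MW_PER_DAY = 10_000    # Klesner's assumed starting consumption

daily_gallons = PHASE1_CAPACITY_MW * GALLONS_PER_MW_PER_DAY   # 300,000 gal/day

# Facebook says the storage tank holds 48 hours of supply; at this
# consumption rate, that implies a reserve of roughly:
reserve_gallons = daily_gallons * (48 / 24)                   # 600,000 gal

print(f"{daily_gallons:,} gallons/day; {reserve_gallons:,.0f} gallon reserve")
```

The implied 600,000-gallon reserve is an inference from these two assumptions, not a figure from Facebook’s specifications.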
Facebook data centers vulnerable to downtime
“Wildfires, dust and volcano ash happen,” Klesner said. “In the case of extreme outdoor contaminants the data center will shut down.”
From Facebook: We acknowledge that this is a condition that can cause potential shutdown. We already have filtration installed and will run evaporative cooling at full capacity to reduce smoke and particulates in the event of a fire or contamination. Then, depending on intensity, we can utilize time for an orderly shutdown, or else run for a prolonged period of time at minimum outside air (OA). We have a provision for a closed-loop system that uses indirect cooling.
The high desert east of the Cascade Mountains burns every summer. It’s only a matter of time before Facebook has to deal with this issue.
Facebook has said that the Uptime Tier Classification System does not apply to its Prineville data center. But you would think the organization might be less cavalier about potentially disruptive vulnerabilities at the facility that supports its primary line of business.
In fact, the details of the Facebook data center design emphasize just how effective Tiers are at rating data center investment in terms of performance potential. Some of the facilities details reveal a fairly typical cost-focused rather than performance-minded data center design.
For example, Facebook’s backup generators are a potential vulnerability. “The document states the engine-generators are Standby rated,” Uptime Institute Professional Services consultant Christopher Brown pointed out. “This will impact the ability of the units to support the facility through long-term power outages, as the Standby rating has yearly runtime limitations. The engine-generators are typically used as a reliable power supply when performing UPS maintenance. Regular testing of the units and maintenance of other critical equipment may impact the units’ ability to support a long-term power outage or long-term failure of a UPS system.”
Lastly, much of the mechanical infrastructure does not lend itself to Concurrent Maintainability. “Large bus ducts (1,000 amps and above) are generally constructed with bolt-together sections and thus allow for maintenance of the bus sections. But the smaller bus duct that delivers power to the servers does not typically use bolt-together sections, relying instead on press-fit connections. These connections are not maintainable and thus create a long-term operational problem,” Brown said.
On the facilities side, inconsistent maintenance opportunities on select systems and constrained performance potential in the engine generators yield an overall Tier II rating. These are fundamental constraints that will impact long-term operations. It is important to go to the heart of the Tiers: the business case.
The key takeaways from this analysis:
-Working backwards from the facilities design, Facebook’s IT operations at its Prineville, OR data center may be core to its business, but the company is willing to tolerate downtime.
-While Facebook’s Prineville data center is energy efficient, it has a long way to go to call itself green.
“The term ‘green’ cannot just be about reducing electrical power consumption. It has to involve the natural resource limitations of the local area. Green must be centered on designing data centers that minimize the consumption of all natural resources, not just one,” Brown said. “Any green approach should be designed to minimize energy consumption while not increasing strain on other vital resources. Otherwise we trade one problem for another.”
Continue the dialogue at Uptime Symposium
Facebook’s data center operations team will give the keynote address Wednesday May 11th at Uptime Institute Symposium, with a presentation: Facebook’s Latest Innovations in Data Center Design, featuring Facebook’s Jay Park, Director, Data Center Design Engineering and Facilities Operations, Thomas Furlong, Director of Site Operations, and Daniel Lee, Data Center Mechanical Engineer.
Posted by mstansberry | Posted in Data center design, Uptime Institute Symposium | Posted on 13-04-2011
If you haven’t heard, Facebook is open-sourcing its data center facilities designs and server hardware plans in what it’s calling the Open Compute Project, posting technical specifications and CAD drawings of its data center MEP components. Kevin Heslin at Mission Critical wrote a column on Facebook’s Open Compute Project, and Rich Miller has a great listing of the commentary from around the Web.
Don’t just read about it though! Come meet the Facebook data center team at Uptime Symposium. Facebook will be the keynote Wednesday May 11th at Symposium, with a presentation: Facebook’s Latest Innovations in Data Center Design, featuring Facebook’s Jay Park, Director, Data Center Design Engineering and Facilities Operations, Thomas Furlong, Director of Site Operations, and Daniel Lee, Data Center Mechanical Engineer.
Posted by mstansberry | Posted in Data center design | Posted on 04-04-2011
Last week, a colleague asked my opinion on data center trends spanning a decade: the top issues five years ago, the top issues today, and the top issues in five years. My responses, posted below, are largely based on the topics I saw trending since 2005 in my previous job at a data center publication.
What were the most pressing data center management issues 5 years ago?
-Managing physical requirements for high density server deployments.
-Understanding data center site selection criteria.
-Deploying server virtualization.
-Disaster recovery planning and testing.
What are they today?
-Measuring/managing energy efficiency
-Capacity planning in a budget constrained economy
-Managing virtual server sprawl and the resulting systems management nightmare
-Tracking a volatile provider landscape; avoiding lock-in or the acquisition of a trusted vendor
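The “measuring/managing energy efficiency” item above is most often quantified with PUE (Power Usage Effectiveness): total facility power divided by IT equipment power. The post doesn’t name the metric, so this is an illustrative sketch with hypothetical readings:

```python
# PUE = total facility power / IT equipment power; 1.0 is the theoretical
# ideal (every watt goes to IT, none to cooling or power distribution).
# The readings below are hypothetical, for illustration only.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A facility drawing 1,500 kW overall to power a 1,000 kW IT load:
print(pue(1500.0, 1000.0))  # 1.5
```

A PUE of 1.5 means half a watt of facility overhead for every watt delivered to IT equipment; tracking the trend over time matters more than any single reading.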
What will they be in 5 years?
-Carbon reporting and mitigation
-Managing/selecting cloud computing providers
-Enabling IT-based application resilience, moving away from physical redundancy.
-Implementing governance on a widely distributed and virtualized portfolio of business services.
Systems management tools that integrate IT and Facilities assets, automate manual processes, and manage IT services (i.e., applications) across internal and external IT resources will become more necessary (and finding the right ones more difficult) as the data center industry evolves.
Feel free to weigh in on the predictions in the comments, or on Twitter @UptimeInstitute.
Posted by mstansberry | Posted in Data center design, Data center energy efficiency, Data Center Metrics | Posted on 31-03-2011
Five years ago the data center industry faced a crisis: Data centers were running out of capacity, the mechanical infrastructure couldn’t handle the widespread and rapid shift to high-density hardware, and minimally utilized servers sprawled out of control.
And bigger challenges loomed on the horizon: Scarcity of cheap power, pending regulation, and increased public scrutiny of data center energy use.
The first responders to this crisis demonstrated that data center design and operations could evolve significantly to meet those challenges.
These first responders also developed best practices and metrics for measuring energy efficiency. Server virtualization has been widely implemented, improving IT utilization. Forward-thinking data center managers have become better stewards of their companies’ resources, and the planet’s.
Today, Facebook has 100,000 users demanding its data centers unfriend coal after news broke that the social media giant had chosen a utility provider with primarily coal-based power generation. Our discussions with one of the world’s top banks reveal heightened sensitivity in that industry to the public relations impacts, and business consequences, of energy use in their data centers.
For many companies, green is a competitive differentiator driving data center consolidation efforts, closer scrutiny of IT capacity management, and efficiency-minded engineering solutions. Other companies are running out of data center space while they’re still dragging themselves out of the economic crisis.
Ignoring data center efficiency is no longer an option.
The tools and best practices are available for data center owners and operators to wring every drop out of existing data center assets, and to design new data centers in the most cost- and energy-efficient manner possible.
Actionable advice, low-cost improvements, self-funding projects
For the past several years, Uptime Institute has developed a body of knowledge to help data center owners and operators improve data center efficiency. Many of these time-honored best practices haven’t changed, nor do they require significant investment.
Uptime Institute recommends that all data center owners and operators take a staged approach to energy efficiency. By starting with low-cost, low-risk efficiency improvements, data center managers can reap huge savings from existing facilities without any new or expensive techno-fixes.
The following documents provide actionable advice for data center managers to get started:
-IT and Facilities Initiatives for Improved Data Center Efficiency: Ten initiatives for data center operators to reduce energy-related capital expenses across facility and IT systems.
-How to Meet “24 by Forever” Cooling Demands of Your Data Center: 27 data center cooling best practices to improve reliability and efficiency.
-The Invisible Crisis in the Data Center — The Economic Meltdown of Moore’s Law: Provides economic argument for improving data center efficiency, and recommendations for achieving those goals.
The next step: Integrating IT and data center operations
Data center facilities managers and executives have led the first charge to improve data center energy efficiency. Future improvements in data center efficiency will depend on incentivizing IT practitioners to take the next steps.
IT operations staff can drive exponential improvements in data center efficiency and effectiveness. IT organizations that are willing to take a systematic approach, starting at the application and data layers – consolidating applications and servers, de-duplicating data, removing comatose but power-draining servers, building redundancy into the applications and IT architecture rather than physical systems — will drive the next wave of efficiency gains.
The following documents provide advice for integrating data center facilities and IT operations teams:
-Data Center Energy Efficiency and Productivity: An introduction to the concept of the Integrated Critical Environments Team, plus five self-funding short-term initiatives to improve data center efficiency.
-ITIL — How to Manage the Coming Convergence of IT and Facilities: Using the Information Technology Infrastructure Library (ITIL) framework to create an integrated IT and Facilities team.
Uptime Institute’s role and publication plan going forward
Over the past five years, the data center industry has coalesced around new standards, best practices, metrics and recognition programs. Uptime Institute has partnered with industry standards bodies, and will provide technical advice, an industry test bed and global perspective on these standards, best practices and metrics.
Drawing on the expertise of its Network, staff of distinguished engineers, body of intellectual property and the hundreds of Uptime Institute Accredited Tier Specialists and Accredited Tier Designers around the globe, Uptime will deliver publications to help data center owners and operators evaluate and implement data center efficiency metrics, best practices and recognition programs.
Look for new step-by-step data center efficiency guidance in the coming months for Uptime Institute Members. Uptime Institute will also serve as a test bed for industry standards in development, helping ensure that the owner’s perspective is incorporated into the development process and that the methodologies are proven in real-world operational data centers.