From Compromised to Optimized: ACE Performance Assessment Case Study


Download the paper here.

This paper, written for CxOs and senior managers in the data center owner-operator business, describes how Future Facilities’ ACE performance score and predictive modeling were used to save $10 million in one data center. It follows on from Five Reasons your Data Center’s Availability, Capacity and Efficiency are being Compromised and describes how Future Facilities achieved these savings in a three-stage process: assess, improve, maintain (AIM). The case study outlines the work conducted for a major financial institution. It involved assessing and improving a single 22,000 ft² Tier 4, mission critical data center.

The second paper introduces the concept of the ACE performance assessment service, a tiered consultancy service from Future Facilities that can be applied to both the design and operational phases of a data center’s lifecycle. This engineering service redresses the imbalance between the owner-operator’s aspirational goals and what their facility can achieve on a day-to-day basis.


Introduction

In the design and operational phases of data center management, there is a continuing need to meet business goals – from reducing costs to achieving optimal performance and operational flexibility.

How well a facility meets the performance demands of several stakeholder groups is ultimately decided by three intertwined variables: availability, physical capacity and cooling efficiency (ACE).
In our previous paper, Five Reasons your Data Center’s Availability, Capacity and Efficiency are being Compromised, we established the main causes of low capacity utilization, increased downtime and cooling inefficiencies, and the impact they have on your costs. The solution, as our customers have learned, is to manage ACE sustainably.
Future Facilities’ ACE performance score – a way of assessing how compromised your data center has become and how much operational flexibility it can offer you – allows you to do exactly that. To demonstrate this, we’ve written this paper to illustrate, through a real-life example, how the score is being used today to meet owner-operators’ aspirational goals.
Before reading on, it’s important to understand ACE: decisions that you make with regard to one aspect of ACE performance will impact the others. Crucially, they may do so with potentially unforeseen consequences. So, if your managers make a change to improve availability, they must be able to confidently plan for the impact that will have on physical capacity and cooling efficiency.
Despite this, the vast majority of owner-operators currently rely on fairly simple performance indicators such as PUE (Power Usage Effectiveness), which simply cannot capture the complex ACE relationship. By contrast, the ACE performance score approaches the performance challenge holistically. It quantifies, and allows you to visualize, your ACE performance gap: the difference between the performance you’re paying for and the performance you’re actually getting day to day.
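To make the “performance gap” idea concrete, here is a minimal sketch in Python. The real ACE scoring method is Future Facilities’ own and is not disclosed in this paper; the three percentage fields and the simple target-minus-achieved subtraction below are illustrative assumptions only.

```python
# Illustrative only: the ACE scoring method is proprietary to Future Facilities.
# This sketch assumes each dimension is expressed as a percentage of the design
# intent, and the "performance gap" is simply target minus achieved.

from dataclasses import dataclass

@dataclass
class AceScore:
    availability: float  # % of IT load protected against power/cooling failure
    capacity: float      # % of design capacity actually usable
    efficiency: float    # % of cooling delivered that reaches the IT intake

def performance_gap(target: AceScore, achieved: AceScore) -> AceScore:
    """Gap between what the facility was designed (and paid) for and
    what it delivers day to day."""
    return AceScore(
        availability=target.availability - achieved.availability,
        capacity=target.capacity - achieved.capacity,
        efficiency=target.efficiency - achieved.efficiency,
    )

design = AceScore(availability=100.0, capacity=100.0, efficiency=100.0)
today  = AceScore(availability=96.0,  capacity=70.0,  efficiency=62.0)

gap = performance_gap(design, today)
print(f"ACE gap -> A: {gap.availability}%  C: {gap.capacity}%  E: {gap.efficiency}%")
```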


Five Reasons your Data Center’s Availability, Capacity and Efficiency are being Compromised


Executive Summary
Data center owner-operators are increasingly looking for solutions to minimize total cost of ownership, cost per kW of IT load, and downtime. This paper explains the five main contributors to runaway data center costs, then introduces the ACE performance score and the continuous modeling process. It then briefly explains how, used together, they are helping owner-operators save millions of dollars annually per data hall.

Download and read this new white paper here

Introduction
Could ‘minimize’ be the verb that best sums up a data center owner-operator’s ultimate objective?

Think about it: whatever business you’re in, and whichever type of data center(s) you own, you almost certainly want to minimize one or more of the following:

• Cost overruns
• TCO (total cost of ownership)
• Cost per kilowatt ($/kW) of IT load
• Downtime

In an industry where the average TCO overspend is around $27m per MW, where $/kW can spiral out of control within just a few short years of entering operation, and where the average cost of downtime is $627k per incident, owner-operators want solutions.

Poor planning and inefficient use of power, cooling and space represent a significant threat to your efforts to minimize costs. Yet it is precisely this that so often forces you into a corner – build a new facility to take the strain, or invest in a major overhaul. Neither ‘solution’ is attractive, so why are owner-operators so frequently in a position where their aspirations are never realized?

In this paper, we set out not only to answer that question, but also to offer a solution going forward.

First, we identify the five major contributors to increased costs and downtime. Then we propose that the greatest opportunity to minimize these can be achieved by adopting a simple, inexpensive solution: the ACE performance score.

The ACE performance score is a unique way of assessing and visualizing the three critical indicators of data center performance, as described below. It works by mapping data from DCIM toolsets into a powerful 3D Virtual Facility model. With that automated process accomplished, it simulates the resulting distribution of airflow and temperature in the space. This confluence of predictive modeling and DCIM data is called Predictive Modeling for DCIM.
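As a rough illustration of that pipeline, the sketch below walks through the same three stages: import the DCIM inventory, predict the environment, score the result. The Virtual Facility and its DCIM connectors are commercial tools whose APIs are not shown here, so every function below is a simplified, hypothetical stand-in.

```python
# Conceptual sketch only: each function is a simplified stand-in for one stage
# of the DCIM -> Virtual Facility -> simulation -> score pipeline described above.

def load_dcim_inventory(export_path: str) -> list[dict]:
    """Stand-in for the automated DCIM import: rack positions, IT load, cooling units."""
    return [{"rack": "A01", "it_load_kw": 6.5}, {"rack": "A02", "it_load_kw": 4.0}]

def simulate_environment(inventory: list[dict]) -> dict:
    """Stand-in for the airflow/temperature simulation of the 3D model.
    Here we simply fake a predicted inlet temperature per rack."""
    return {item["rack"]: 18.0 + 1.2 * item["it_load_kw"] for item in inventory}

def score_environment(predicted_inlets: dict, limit_c: float = 27.0) -> float:
    """Stand-in for the scoring step: share of racks predicted to stay
    within the chosen inlet-temperature limit."""
    ok = sum(1 for t in predicted_inlets.values() if t <= limit_c)
    return 100.0 * ok / len(predicted_inlets)

inventory = load_dcim_inventory("dcim_export.csv")   # hypothetical export file
inlets = simulate_environment(inventory)
print(f"Share of racks within limits: {score_environment(inlets):.0f}%")
```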

The ACE performance score can be used from inception through operation, and it considers the dynamic interrelationship of the three variables – ACE – that ultimately dictate how well a data center performs and, by extension, how costly it is to run:

• Availability (A) of IT, including during power and cooling failures
• How much capacity (C) is available to install, power and cool additional IT
• How efficient (E) the cooling delivery is to the IT

With the ACE performance score introduced and explained, we conclude by introducing a simple business process through which ACE can be easily applied: continuous modeling.

Download and read this new white paper here


Continuous Modeling for Design and Operation of Datacenters


Recorded Webcast: Continuous Modeling for Design and Operation of Data Centers

Power, IT assets, PUE and temperatures are the focus of data center monitoring today. But Capacity (the ability of the facility to support IT equipment) is being overlooked. At any point in time, 30% or more of data center capacity is unobtainable. The cost of reclaiming this loss increases exponentially as the IT configuration is built out.
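A back-of-the-envelope calculation shows why catching this early matters. The 30% figure comes from the webcast description; the base remediation cost and the assumption that it doubles at each stage of build-out are invented purely to illustrate the shape of the curve.

```python
# Illustrative arithmetic only. The 30% figure is from the webcast blurb; the
# base cost and the doubling per build-out stage are made-up assumptions.

DESIGN_CAPACITY_KW = 1000          # nameplate IT capacity of the hall
UNOBTAINABLE_FRACTION = 0.30       # capacity lost to fragmentation/airflow issues
BASE_FIX_COST_PER_KW = 500         # cost to reclaim 1 kW if caught on day one (assumed)

stranded_kw = DESIGN_CAPACITY_KW * UNOBTAINABLE_FRACTION

for stage, utilization in enumerate([0.25, 0.50, 0.75, 1.00], start=1):
    cost_per_kw = BASE_FIX_COST_PER_KW * 2 ** (stage - 1)   # assumed doubling per stage
    print(f"Build-out {utilization:.0%}: reclaiming {stranded_kw:.0f} kW "
          f"costs ~${stranded_kw * cost_per_kw:,.0f}")
```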

At “Continuous Modeling for Design and Operation of Data Centers”, see examples of a new technique for monitoring Capacity. Also learn how Predictive Modeling is used to solve this problem before the data center is built out and corrective actions become limited and expensive.

Lessons learned:

• Capacity utilization problems make their appearance on day 1 of operations

• Without monitoring of capacity, they go undetected until IT availability becomes an issue

• The cost of addressing Availability issues is orders of magnitude more than fixing Capacity issues proactively.

To view the webcast click here


451 Recorded Webcast: Why an energy efficient datacenter may not be the most profitable


To view the recording click here

Many data center facilities focus on improving infrastructure efficiency, measured by PUE (Power Usage Effectiveness), but a single-minded focus on that metric will often yield perverse results, such as low equipment utilization, stranded capacity, and wasted capital. Even straightforward “energy saving” equipment replacements, like lower-power air handling units, can result in stranded capacity, hot spots, and equipment reliability problems, even though the throughput of the fans is ostensibly the same as that of those they replace.

It’s the business results that should matter to data center owners, not improvements in imperfect efficiency metrics like PUE or fan output. That means a focus on both the cost per computation and the total revenues from computation, detailed analysis of efficiency improvements in both computing and infrastructure, and sensible application of computer simulation tools to understand how changes in information technology deployment will affect the utilization of data center infrastructure over time.
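The sketch below illustrates the argument with toy numbers (none of them from the webcast): swapping in lower-power air handlers improves PUE, but if the change strands capacity so that fewer servers can be hosted, the annual cost per hosted server – energy plus amortized facility cost – actually goes up.

```python
# Toy numbers only - nothing here comes from the webcast. The point is that
# PUE and the business metric (cost per hosted server) can move in opposite
# directions once stranded capacity is taken into account.

def pue(it_kw: float, overhead_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    return (it_kw + overhead_kw) / it_kw

def annual_cost_per_server(it_kw, overhead_kw, servers,
                           facility_capex=20_000_000,  # assumed build cost, $
                           amort_years=15, energy_price=0.10, hours=8760):
    """Energy plus amortized facility cost, spread over the servers actually hosted."""
    energy = (it_kw + overhead_kw) * hours * energy_price
    capex = facility_capex / amort_years
    return (energy + capex) / servers

# Before: original air handlers, the full design population of servers fits.
before = dict(it_kw=800, overhead_kw=400, servers=4000)
# After: lower-power fans cut overhead (better PUE), but hot spots strand
# capacity, so fewer servers can actually be deployed (assumed numbers).
after = dict(it_kw=640, overhead_kw=250, servers=3200)

for label, s in [("before", before), ("after", after)]:
    print(f"{label}: PUE = {pue(s['it_kw'], s['overhead_kw']):.2f}, "
          f"annual cost per hosted server = ${annual_cost_per_server(**s):,.0f}")
```

In this toy scenario the PUE improves noticeably, yet the annual cost per hosted server rises, which is exactly the kind of perverse outcome the webcast discusses.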

This webinar will explore a more holistic, business-focused view of data center efficiency grounded in the key performance indicators that should be most important to the companies that own data centers.

Register to view the recording click here


The Speakers:

Andrew Donoghue is the European Research Manager at 451 Research. He leads the firm’s involvement in a number of European Commission-funded IT research projects, including CoolEmAll (www.coolemall.eu) and RenewIT (www.renewIT-project.eu), and works closely with the Uptime Institute, a division of The 451 Group.
Andrew is the author of several major reports covering eco-efficient IT; power management; policy, legislation and compliance; and datacenter management and energy-efficiency. He has represented 451 Research at the Green Grid and other major datacenter events.
Before joining 451 Research, Andrew worked with several leading technology and environmental publications as a writer and project consultant, specializing in green IT and related technologies. He has also held senior editorial roles at a number of business publishing companies, including CBS Interactive and Incisive Media.
Andrew has been closely involved in research and fundraising for technology reuse in developing countries, and the benefits for education, via organizations such as Computer Aid International. He has a degree in Zoology and Environmental Science from the University of Liverpool.

Jonathan Koomey is a Research Fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University, worked for more than two decades at Lawrence Berkeley National Laboratory, and has been a visiting professor at Stanford University (2003-4 and Fall 2008), Yale University (Fall 2009), and UC Berkeley’s Energy and Resources Group (Fall 2011). He was a lecturer in management at Stanford’s Graduate School of Business in Spring 2013. Dr. Koomey holds M.S. and Ph.D. degrees from the Energy and Resources Group at UC Berkeley, and an A.B. in History of Science from Harvard University. He is the author or coauthor of nine books and more than 200 articles and reports. He’s also one of the leading international experts on the economics of reducing greenhouse gas emissions, the effects of information technology on resource use, and the energy use and economics of data centers. http://www.koomey.com/

Sherman Ikemoto is a Director at Future Facilities Inc., a leading supplier of data center design and modeling software and services. In this role, Sherman is leading an effort in the US to educate the market about the need for Continuous Modeling in the design and operational management of data centers. Prior to joining Future Facilities, Sherman worked at Flomerics, Inc. as a sales, marketing and business development manager. Before that, he designed military electronics for the US Government. Sherman has over 20 years of experience in the field of thermal-fluids and electronics cooling design. He holds a Bachelor of Science degree from San Jose State University and a Master’s in Mechanical Engineering from Santa Clara University.

Register to view the recording click here


Continuous Modeling Case Study – Datacenter Dynamics New York March 11

Christian Pastrana will discuss a highly lucrative continuous-modeling-in-operation case study on March 11.

Date and Time: 2:10-2:40 pm, Hall 6

CONTINUOUS MODELING IN OPERATION: CASE STUDY – HOW A GLOBAL BANK’S DATA CENTER SAVED $10 MILLION

This presentation will outline a series of steps that enabled the bank’s data center to best analyze and enhance data center performance.


During his discussion, Mr. Pastrana will explore how a global financial institution used a predictive approach in its operations to increase efficiency and resilience and to maximize usable data center capacity in its facility. By building and calibrating a Virtual Facility for its data center, the bank was able to undertake a project that resulted in significant energy savings and an increase in usable capacity.
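Calibration here means checking the model’s predictions against what the facility’s sensors actually report, and refining the model until the two agree. The sketch below shows that comparison in its simplest form; the sensor names, readings and 1 °C tolerance are invented for illustration.

```python
# Minimal sketch of the calibration idea: compare the model's predicted sensor
# readings with measured ones and flag where the model still needs refinement.
# Tolerance and readings are invented for illustration.

measured  = {"CRAC-1 return": 24.1, "Rack A03 inlet": 22.8, "Rack B07 inlet": 27.5}
predicted = {"CRAC-1 return": 23.8, "Rack A03 inlet": 23.1, "Rack B07 inlet": 24.9}

TOLERANCE_C = 1.0  # assumed acceptance criterion

for sensor, meas in measured.items():
    error = abs(predicted[sensor] - meas)
    status = "OK" if error <= TOLERANCE_C else "refine model here"
    print(f"{sensor}: measured {meas} C, predicted {predicted[sensor]} C -> {status}")
```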

“This case study illustrates how data center operations were able to meet business objectives through continuous modeling,” said Christian Pastrana, PE, Regional Sales Manager for Future Facilities. “The Virtual Facility provides simulation techniques to predict and visualize the outcome of power and cooling solutions before critical IT equipment is installed. This foresight, as the study demonstrates, can lead to immense energy and financial savings.”

In addition to its featured discussion, Future Facilities will also be exhibiting at Data Center Dynamics, Booth #43.

For those not able to attend the event, you can visit www.futurefacilities.com/media/info.php?id=240 to download the case study.


The modern data center: integrated design, mass production + experience | #OCPSummit

You can watch the entire segment in this video.


During the fifth edition of Open Compute Summit, Jon Koomey (Research Fellow, Stanford University) moderated a Keynote Panel titled “Bringing Integrated Design, Mass Production, and Learning by Doing to the Datacenter Industry” where he invited on stage three illustrious speakers: Sherman Ikemoto (Director, Future Facilities), Kushagra Vaid (General Manager, Cloud Server Engineering, Microsoft) and Jim Stogdill (General Manager, O’Reilly Media).

Koomey began by acknowledging the irony that, despite information technology being a driver for efficiency improvements throughout the economy, by and large it hasn’t affected the way IT itself is provisioned. Referring specifically to enterprise IT, he mentioned the lack of transparency into the total cost of IT at the project or business-unit level, low utilization, and the fact that 10-20 percent of servers are comatose.

“What lessons are there from the open source software community for transforming IT to become more efficient inside the enterprise?” asked Koomey.

“It’s almost a misdirected question,” replied Stogdill. “We focus a lot on the openness, which is an enabler for a lot of things, but ultimately the question is about ‘How do we keep the hurdle low to adopt new things?’ In the open source software space it was really about a low hurdle for adoption, about low cost, try-before-you-buy, and low exit costs in terms of no proprietary lock-in. In the open hardware space, at least in the horizontal compute space, probably the most interesting thing right now is being able to enter your credit card and get an EC2 instance. It’s probably the thing that’s most parallel to what we are talking about here,” said Stogdill.


He continued: “From an Open Compute perspective, the question is ‘How do we lower the hurdles across the board, not just with open source and open culture, but with other models that make it easier to adopt these things?’”

“It sounds like it’s not just a hardware problem,” noted Koomey. “In Open Compute it’s also a software problem.”

Stogdill disagreed: “We have to be careful not to overstate the parallels with open source: with open software I can download and try it with zero friction; hardware, at the end of the day, still needs to show up in a crate. We should be asking ourselves what model could make that process as simple as possible.”

Scaling data centers

“Microsoft has been driving forward with really large integration,” said Koomey. He invited Vaid to “talk a little about the challenges and benefits.”

Vaid obliged: “In the early days, when Microsoft was scaling its data centers, we realized that, unless there was a process where we could have consistency across the different life stages of the design, supply chain and operations, it would really be difficult to ensure that we meet our time-to-market goals, our efficiency goals and also keep costs under control. At a high level, we break it down into those three areas.”

“On the design front, one of the key principles is modularity, because a facility typically has a 15-year life, technologies change, and it should be really easy to introduce new technologies over the life-cycle. This applies to mechanical design, power/electrical design, control software, EPMS, TCIM, etc.,” continued Vaid.

“On the supply chain side – how the servers get deployed from the dock to the time they go live – there needs to be a very streamlined process to take care of that.

“On the operations side, you can’t fix what you can’t monitor, so it’s very important to have all kinds of monitoring (power, performance) and utilization metrics, and to feed that into a machine learning system, which can detect patterns for you and find out when you’re operating below efficiency levels,” Vaid stated.
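Nothing about Microsoft’s actual tooling is described in the panel, but the idea Vaid outlines can be shown with a deliberately simple stand-in: a rolling baseline over recent PUE readings that flags hours where the facility drifts away from its normal efficiency.

```python
# Deliberately simple stand-in for the idea described above - it does not
# reflect any real production tooling. A rolling baseline of PUE readings
# flags hours where the facility drifts below its usual efficiency.

from statistics import mean, stdev

pue_readings = [1.32, 1.31, 1.33, 1.30, 1.32, 1.45, 1.47, 1.31]  # hourly samples (invented)

WINDOW = 5
for i in range(WINDOW, len(pue_readings)):
    window = pue_readings[i - WINDOW:i]
    baseline, spread = mean(window), stdev(window)
    if pue_readings[i] > baseline + 3 * spread:
        print(f"hour {i}: PUE {pue_readings[i]} well above baseline {baseline:.2f} -> investigate")
```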

Metrics and data center performance

After hearing so much about measurements, the panel moderator wanted to tackle the issue of metrics and link it back to business performance.

“There are different metrics at different stages of the life-cycle. During design, power-efficiency and agility metrics; during the deployment phase, how long it takes for a server to go live; during operations, how efficiently you are running,” said Vaid.

  • A metric for any kind of application

“Any measurements of cost per transaction? Energy efficiency per transaction, profit per transaction?” asked Koomey.

“We typically do that on a per-application basis. For running web search there’ll be a metric that takes into account the cost of running a web search based on the facility metrics. For running Windows Azure, there will be a metric about the cost of hosting a VM. There’s a metric for any kind of application,” stated Vaid.

“Sherman, you know a lot about existing facilities, and how apparently logical decisions can lead to deployment problems and to stranded capacity inside existing enterprise facilities. Can you talk about how software can mitigate those kinds of issues?” asked the moderator.

“My expertise is more on the physical side of the data center, and getting full utilization of the physical capacity of the room – whether you’re talking about space, or the power that’s being provided, or the cooling provisioned to the data center. There are emerging standards now that will help the various silos of the data center management teams manage their own portion of the data center. There are software-based standards and measurement-based standards, like PUE (Power Usage Effectiveness), but the challenge is in integrating all those various strands or silos of management into a single, overarching metric. There is software available to do that now. It’s one level higher than where the industry is today. It will involve new software technology, which is modeling – you need to know how the various sub-components of the data center interact with each other at a system level,” stated Ikemoto.

He went on to present the use case of a company in the UK: “On the facility side, for the 20,000 square feet they were saving about a million and a half a year in energy bills; on the IT side, they were able to simultaneously improve PUE and IT capacity of the data center in a synchronized way. They were able to achieve a much lower PUE, freeing up the equivalent of about 77 out of 300 cabinets of computing resources.”

“The only way to achieve that was to see how the two subsystems interacted with each other,” said Ikemoto.

“What kind of developments would make tracking rapid change of IT easier and more effective?” asked Koomey.

Stogdill found the question of defining the workload very interesting. “The way we talk about data centers, it often seems like data centers were the place where we disposed of excess electricity,” he said. “Especially in the enterprise space, where workloads are so diverse, it’s difficult to come up with a common definition of what a workload even is.”

This brings up an important point, Koomey observed: workloads relate to business value. “What does it do for the company to generate this much computing?”

“There’s a hierarchy from PUE, to utilization, to what’s happening in that utilized code, to the application layer. Because we can measure it, we focus on PUE long after each incremental dollar is probably not giving us much return, when we could spend it much better higher up the hierarchy,” said Stogdill.

  • Modularity in the facility

“From a physical standpoint, it reduces the number of degrees of freedom; the less freedom you have, the more able you are to meet your original goals. It’s easier to define the goals for a smaller module than it is for a completely open data center. From a physical standpoint it’s very helpful, but it doesn’t get rid of the problem, because you still have to build out a module and, unless you stick to the original plan, you are going to run into capacity issues and shortfalls or cost overruns,” said Ikemoto.

“The optimization depends on software,” Stogdill jumped in. “One of the provocative things that Open Compute has to decide is whether it’s an open-hardware-only thing or whether it embraces open software as well. Open hardware depends on open software to make any sense – at the layer of provisioning and dispatch and everything else that makes the stuff work.”

“The whole stack matters,” agreed Koomey. “From the perspective of cost per unit of compute, you need to worry about software and the way users are interacting.”

“What are we going to do in the future, when computing is cheaper, that we don’t do now? The things we do now that seem so big will seem small; in a digital-business future they will seem minor,” said Stogdill.

  • Business outcome versus operations

“Open Compute is moving towards making that connection; in the enterprise and government, in the traditional mixed-use data center, that connection is broken at the very beginning,” said Ikemoto. “The original plan is set, the money is committed by the company, but then the plan is forgotten, everything goes to operations, cost-per-compute skyrockets and nobody even knows about it.”

“Senior management attention on these issues is almost, if not entirely, absent,” said Koomey. “The level of waste here is so large that we will see massive shifts in the way enterprise IT is provisioned,” he predicted.

 


A proactive approach for a reactive system: Predictive modeling for DCIM


Can you manage what you can’t measure? Is monitoring your data center really the same as modeling it? Do you make IT changes, then cross your fingers that they’ll work?

There’s an old adage that you can’t manage what you can’t measure. As data center operators flock to DCIM to gather more and more information about their facilities, there’s a danger that some confuse monitoring with modeling.

DCIM is an incredibly valuable tool, providing a wealth of information vital to operation of a modern data center. But it’s not a panacea for the IT deployment process.

The specifics of the deployment process vary between organizations, but most data center operators are stuck in a reactive IT deployment cycle:

  1. Decide – Find a rack where environmental conditions are within acceptable thresholds and that has enough space, the right network connectivity and enough capacity on the breakers to support the equipment (see the sketch after this list).
  2. Deploy – Install the equipment and power it up.
  3. Monitor – Watch the live data from the facility for any problems.
  4. React – If issues appear, be ready to react before they become critical.
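The “Decide” step above is typically a filter over whatever the DCIM or monitoring data says right now. A minimal sketch, with invented rack data and thresholds:

```python
# Sketch of the "Decide" step as most teams do it today: filter on what the
# DCIM/monitoring data says right now. Thresholds and rack data are invented.

racks = [
    {"id": "C04", "inlet_c": 23.5, "free_u": 10, "breaker_headroom_kw": 3.2},
    {"id": "C09", "inlet_c": 26.8, "free_u": 4,  "breaker_headroom_kw": 1.0},
    {"id": "D01", "inlet_c": 22.1, "free_u": 12, "breaker_headroom_kw": 4.5},
]

NEW_SERVER = {"u": 2, "kw": 0.8}
MAX_INLET_C = 27.0   # assumed environmental threshold

candidates = [
    r for r in racks
    if r["inlet_c"] <= MAX_INLET_C
    and r["free_u"] >= NEW_SERVER["u"]
    and r["breaker_headroom_kw"] >= NEW_SERVER["kw"]
]
print("Racks that look fine today:", [r["id"] for r in candidates])
```

Note that every rack in this invented example passes the check, including one that, as the predictive sketch later in this post shows, would overheat once the new load is added.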


Whilst DCIM tools certainly help with steps 1 and 3, allowing operators to react much more quickly to any issues that arise, they are still left reacting to problems. In many instances, those problems indicate that the IT deployment should have been made elsewhere.

So, how do you implement new hardware with confidence? How do you avoid firefighting problems in the future that could have been prevented with better information in the present? How do you overcome the fundamental limitation that monitoring data from sensors can’t tell you what’s going to happen in the future?

The solution lies in computer modeling. A data center works by well-understood physical processes, and its behavior can be mimicked by a computer model. But get this: once a model has been verified to produce correct results for the current configuration, future configurations can be fed in and their behavior analyzed.

This is called predictive modeling. It allows the impact of a proposed deployment to be understood before a single work order is printed.
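Sketched in the same terms as the reactive filter above, the predict step asks a different question: not “is the rack fine now?” but “will it still be fine after the install?”. No real modeling API is shown in this post, so `predicted_inlet_after_install` below is a crude, hypothetical proxy for a run of the verified model.

```python
# The predict step, sketched. predicted_inlet_after_install stands in for a
# run of the verified simulation model; its body is just a crude proxy here.

def predicted_inlet_after_install(rack: dict, new_kw: float) -> float:
    """Crude stand-in: assume each added kW raises that rack's inlet by 1.5 C."""
    return rack["inlet_c"] + 1.5 * new_kw

racks = [
    {"id": "C04", "inlet_c": 23.5},
    {"id": "D01", "inlet_c": 22.1},
    {"id": "C09", "inlet_c": 26.8},
]
NEW_KW = 0.8
MAX_INLET_C = 27.0

safe_after_change = [
    r["id"] for r in racks
    if predicted_inlet_after_install(r, NEW_KW) <= MAX_INLET_C
]
print("Racks still within limits *after* the proposed install:", safe_after_change)
```

With the same invented data, rack C09 – which passed the reactive check on today’s readings – is now rejected, because its predicted post-install inlet temperature exceeds the limit.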

Predictive modeling paves the way for a proactive deployment process that exploits all of the good data from DCIM, but puts the operator on the front foot. It doesn’t even require a significant departure from current working practices:

  1. Predict – Predict the impact of the proposed change using the verified computer model.
  2. Decide – Use the results to choose the best deployment location based on operational considerations.
  3. Deploy – Install the equipment and power it up.
  4. Monitor – Watch the live data from the facility for any problems.

Monitoring is still an integral part of the process, but it now works together with predictive modeling to give data center operators complete visibility into the state of their data center, now and in the future.


Supporting case study from CBRE: 

http://dcimnews.wordpress.com/2013/11/21/cbre-white-paper-at-the-end-of-the-day-its-lost-capacity/

 

Contact Robert to learn more: http://www.linkedin.com/in/rfschmidt
