Future Facilities Partners with DCIM Solutions to Provide Predictive Simulation and Modeling Services

ACE Jumpstart will optimize data center performance while reducing operational costs

Virtual Facility: The Data Center Simulator

SAN JOSE, Calif.–(BUSINESS WIRE)–Future Facilities North America (Future Facilities NA), a leading provider of data center design and operational management software, today announced that it has partnered with Glassboro, NJ-based DCIM Solutions, LLC to offer ACE Jumpstart, a predictive modeling and simulation service that assesses three critical indicators for optimal data center performance: Availability, Capacity and Efficiency.

ACE Jumpstart scores the data center on how compromised its availability, physical capacity and cooling efficiency have become by analyzing and mapping the interrelationships among the three variables. The score reflects how well a data center is performing and, in turn, how costly the facility is to build and operate.
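
The release does not spell out how the three sub-scores combine, so the following is a minimal illustrative sketch only: the 0–100 scale, the geometric-mean combination, and the ace_score name are assumptions, not the actual proprietary ACE methodology.

```python
# Illustrative sketch only: the real ACE scoring methodology is not
# published. Here each dimension is normalized to a 0-100 sub-score,
# then combined so that weakness in any one dimension drags the
# overall score down.

def ace_score(availability: float, capacity: float, efficiency: float) -> float:
    """Combine three 0-100 sub-scores into a single performance score."""
    scores = [availability, capacity, efficiency]
    if any(not 0 <= s <= 100 for s in scores):
        raise ValueError("sub-scores must be in the range 0-100")
    product = 1.0
    for s in scores:
        product *= s
    # Geometric mean penalizes imbalance, reflecting that the three
    # variables compromise one another rather than varying independently.
    return product ** (1 / 3)

# A facility with strong availability but constrained capacity:
print(round(ace_score(availability=95, capacity=60, efficiency=80), 1))  # 77.0
```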

The data inputs can also be synchronized with any DCIM suite or other system-monitoring toolkit and are mapped to a powerful 3D Computational Fluid Dynamics (CFD) model to create a Virtual Facility (VF), which allows precise simulations in support of critical operational decisions, covering airflow distribution, temperature, physical resource collision, hardware performance, failure scenarios and electrical systems. DCIM Solutions has more than a decade of experience perfecting the calibration process, which is integral to establishing ACE Goals and maximizing predictability. Through VF simulations, corrective measures are identified to bridge the gap between the data center’s current state and its ACE Goals.

“Future Facilities is excited to partner with a data center infrastructure leader like DCIM Solutions,” said Sherman Ikemoto, Director, Future Facilities NA. “Through this partnership, ACE Jumpstart will be further optimized for the data center owner/operator. It’s gratifying to see the ACE Assessment becoming adopted as an important metric for data center efficiency.”

The calibrated VF produced by ACE Jumpstart will be imported into Future Facilities’ 6SigmaDC software and be available for immediate use, with a 90-day software license and formal training and support included. This will allow data center owner-operators to utilize simulation and predictive modeling throughout the life of their data centers to stay on track to reach their ACE Goals.

“This partnership will provide immediate benefits for data center owners and operators that are looking to treat their data centers as a business unit,” said Dan McDougal, Managing Partner, DCIM Solutions LLC. “Using the ACE methodology, DCIM Solutions will be well-equipped to help data centers of all sizes plan for capacity changes and prevent negative trends before they begin.”

ACE Jumpstart benefits include a fully calibrated CFD model, establishment of ACE Goals and measurement of the current ACE score, identification of areas for remediation to improve the ACE score, a 90-day license of Future Facilities’ 6SigmaDC software, and expert training and support. For those looking for a limited introduction to ACE, DCIM Solutions offers a scaled-down version of Jumpstart.

Future Facilities will be exhibiting ACE Jumpstart with DCIM Solutions at the Fall Data Center World Conference, which will be held between October 19 and 22, 2014 at the Hyatt Regency in Orlando, Florida. Representatives from Future Facilities and DCIM Solutions will be at booth 616 to provide information on ACE Jumpstart, the importance of modeling and predictive simulation, and the benefits of increasing data center efficiency through analysis within the Virtual Facility.

About Future Facilities

For nearly a decade, Future Facilities has provided software and consultancy to the world’s largest data center owner-operators and to leading electronics designers. The company, which is privately funded, optimizes data center utilization through simulation, and in doing so has saved its customers millions of dollars. Innovative and progressive, Future Facilities is unique in the marketplace: it is the only company providing scientifically sound answers to the “what ifs” that have for so long been impossible to answer with real confidence.

About DCIM Solutions, LLC

DCIM Solutions, LLC is an innovative leader in data center infrastructure optimization solutions. With a focus on power, cooling and space utilization, our products and services provide unparalleled optimization and efficiency, resulting in cost avoidance, lower operating costs and better utilization of assets.

IS PREDICTIVE SIMULATION A CRYSTAL BALL FOR DATA CENTERS?

This blog was posted on behalf of Jeff Brickley, Director, Data Center Services at DCIM Solutions LLC, and is part 1 of a series of blog posts about predictive simulation for the data center.

What is Predictive Modeling?

By definition, predictive modeling is the process by which a model is created or chosen to best predict the probability of an outcome. Most often the event one wants to predict is in the future, but predictive modeling can be applied to any type of unknown event, regardless of when it occurred.

Predictive modeling is used in many industries. For example, the healthcare industry uses historical statistics to create models that predict the likelihood that a patient will be readmitted to the hospital. Other industries take this even further through the use of graphical simulations, often referred to as Predictive Simulation or simulation-based performance analytics. Gartner defines simulation-based performance analytics as: “Optimization and simulation using analytical tools and models to maximize business process and decision effectiveness by examining alternative outcomes and scenarios, before, during and after process implementation and execution.”
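
As a toy illustration of the readmission example, a predictive model is simply a function fit to historical outcomes and then applied to new cases. Everything below is invented for the sketch: the features, the synthetic data, and the choice of scikit-learn’s logistic regression are assumptions, not a real clinical model.

```python
# Toy illustration of predictive modeling on synthetic data; this is
# not a real clinical readmission model. Requires scikit-learn.
from sklearn.linear_model import LogisticRegression

# Historical records: [age, prior admissions, length of stay in days]
X = [[65, 2, 7], [40, 0, 2], [78, 4, 10], [55, 1, 3],
     [81, 3, 9], [33, 0, 1], [70, 2, 6], [47, 1, 4]]
y = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = readmitted within 30 days

model = LogisticRegression().fit(X, y)

# The fitted model estimates the probability of the outcome for new cases.
new_patient = [[72, 3, 8]]
print(model.predict_proba(new_patient)[0][1])  # estimated readmission risk
```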

The aerospace industry utilizes Predictive Simulation to test new materials for aircraft construction that could potentially improve fuel efficiency. Automotive manufacturers use Predictive Simulation to test new parts that will help with turn radius and reduce the likelihood of a rollover. Predictive Simulation is very mature, and the physics behind the models has been used for centuries to characterize the behavior of complex systems.

Take the automotive example above: manufacturers are essentially using Predictive Simulation to improve performance and reduce risk. That sounds like something we all think about in data centers, and in IT in general. Wouldn’t it be nice to be able to accurately predict the outcome of any proposed change in your data center? That would be like having a crystal ball that can predict the future. Those don’t exist, though; or do they? Predictive Simulation is that crystal ball, and it is already being used to predict the outcome of proposed changes.

Many of you may have heard of Computational Fluid Dynamics (CFD) models of data centers; CFD is the physics behind the simulation. Many data centers in the country today utilize CFD tools for Predictive Simulation. I personally know of data centers in operation where you can view the simulation model in an office, obtain an attribute of the data center such as the airflow from a floor tile or the temperature at a certain point on a rack, then walk to that exact spot on the floor and find reality reflecting exactly what the model predicted. These models are kept up to date so that simulations can be run throughout operations, ensuring that any proposed change will not adversely affect the data center as a whole. Predictive Simulation is also being used to justify capital expenses while reducing operational costs through efficiency gains.
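
That check-before-you-change workflow can be sketched as a simple approval gate: simulate the proposed change, then compare predicted rack-inlet temperatures against a limit before any hardware is touched. The run_cfd_simulation() function below is a hypothetical stand-in with canned results, not any real 6SigmaDC interface; the 27 °C limit is the upper end of the ASHRAE recommended inlet range.

```python
# Sketch of a simulation-driven change-approval gate. run_cfd_simulation()
# is a hypothetical stand-in for a real CFD solve; the canned temperatures
# keep the example self-contained.

ASHRAE_MAX_INLET_C = 27.0  # upper end of the ASHRAE recommended inlet range

def run_cfd_simulation(proposed_change: str) -> dict:
    """Pretend to solve the facility model with the change applied,
    returning predicted rack-inlet temperatures in deg C."""
    return {"rack-A01": 23.5, "rack-A02": 25.1, "rack-B07": 28.4}

def approve_change(proposed_change: str) -> bool:
    predicted = run_cfd_simulation(proposed_change)
    hot = {rack: t for rack, t in predicted.items()
           if t > ASHRAE_MAX_INLET_C}
    if hot:
        print(f"Rejected '{proposed_change}': predicted hot racks {hot}")
        return False
    print(f"Approved '{proposed_change}': all predicted inlets within limits")
    return True

approve_change("add two 4 kW servers to rack B07")
```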

I have started this blog series in an effort to spark conversation around Predictive Simulation for data centers. I will discuss the need for data center optimization and then dive into a specific methodology for establishing data center performance goals, using predictive simulation to measure existing performance, and illustrating how predictive simulation can be used on an ongoing basis to stay on course to reach those goals. It just so happens that through this process there is significant money to be saved while the risk of outage is reduced.

I encourage you to research Predictive Modeling and, more importantly, Predictive Simulation. You should find that these are not new concepts, even to the data center industry. However, there may be some rumors out there about how accurate these simulations truly are and whether or not they bring value. My first response to such skepticism is that it all depends on how the models are constructed: garbage in equals garbage out. But make no mistake, a properly calibrated CFD model can be used for Predictive Simulation, and thus to accurately predict the effect of ANY change in the data center.

About the author: 

Jeff Brickley is a certified project manager and has been in the information technology sector for over 17 years. He has a unique and diverse background spanning programming, infrastructure and project management. He has led large, complex efforts and teams, with the common theme throughout his career being the optimization of operations for business. His passion for continuous improvement and a background in mathematics made him an ideal candidate to study, utilize and promote the use of Computational Fluid Dynamics modeling for data centers.

Video: The Virtual Facility – Data Center Sustainability without Compromise

Data centers are critical operations. The business impact of changes to the data center – like any other business critical operation – needs to be known before making the change. The Virtual Facility is a simulator that predicts the business impact of changes to the data center. This greatly reduces risk and improves IT and capacity planning.

Project Spotlight: Continuous Modeling in Operations – Scoring Data Center Performance with ACE

October 1, 2014, 2:30 PM – 3:30 PM, Room 207C
CEU: 0.1, Audience: Advanced

http://www.criticalfacilitiessummit.com/facilities_education/sessiondetails/Project-Spotlight-Continuous-Modeling-in-Operations-Scoring-Data-Center-Performance-with-ACE–2032 

 

Business requirements change continuously, which drives change in the data center. The data center is meant to be a flexible blank slate upon which IT services are quickly built, dismantled and reconfigured to enable business agility. But how agile is your data center? Can you quantify the cost and risk of changing the data center roadmap to accommodate business needs?

Are you able to factor the risks from change into business decisions? This spotlight will explore how a global financial institution and a global distribution company used a predictive approach in their operations to increase efficiency and resilience and to maximize usable data center capacity in their facilities. By building and calibrating a Virtual Facility for their data centers, they were able to undertake projects that resulted in significant energy savings and increases in usable capacity.

These case studies illustrate how data center operations were able to meet business objectives through continuous modeling, and highlight a new data center performance/risk score called “ACE” (Availability, Capacity and Efficiency).

Learning Objectives:
1) Quantify the risk and cost of change in the data center
2) Consolidate tracking of three interrelated performance metrics that together capture the very purpose of the data center
3) Learn how ACE calculations are made practical by computer modeling of the physical data center
4) Recognize the limitations of popular best practices and why these alone cannot address the underlying causes of availability and capacity utilization problems

Presented by:


Sherman Ikemoto
Director
Future Facilities Inc.

Predictive Modeling and Simulation for the Data Center Lifecycle

Watch on September 17th at 9am PST / 12pm EST: https://www.brighttalk.com/r/sVJ

Speaker: Sherman Ikemoto, Director, Future Facilities

Data centers are a key component of modern companies. Senior management in these enterprises assumes that future IT-related changes demanded by the business can be accommodated within their data center infrastructures.

Alas, this demand for operational flexibility introduces risks (and costs) into the data center itself and hence to the business as a whole. Unfortunately, most companies don’t systematically assess such risks inside their data centers, nor can best practices and rules of thumb adequately address them.

This presentation is about computer modeling: simulation of the data center to analyze and quantify the risks that operational flexibility introduces, with the goal of moving IT operations from being a cost center to becoming a cost-reducing profit center. The approach will be illustrated with a two-year case study from a global financial institution.

Watch on September 17th at 9am PST / 12pm EST: https://www.brighttalk.com/r/sVJ

Empowering the Data Center Operator – ACE Performance Assessment

Download Here: http://www.futurefacilities.com/media/info.php?id=292

Executive Summary by Dave King and Steve Davies

In this third and final white paper in our ACE series, we demonstrate to data center operators how they can use the Virtual Facility to make the decisions that affect them: can a planned change be made without adversely affecting uptime or resilience? And if not, to what degree will it do so? Recalling our own experiences advising data center operators over the last decade, this paper will show you how predictive modeling using the VF will empower you to make decisions with confidence.

Introduction

At a high level, the data center is a trade-off between three intertwined variables: availability of IT, physical capacity and cooling efficiency (ACE).

In our previous papers, “Five Reasons your Data Center’s Availability, Capacity and Efficiency are being Compromised” and “From Compromised to Optimized: An ACE Performance Assessment Case Study”, we established that mismanaging these variables causes costs to escalate. We also proposed predictive modeling as a method for sustainably managing ACE in order to reach the goals of the business.

This focus on the operational flexibility and high-level goals of the business is all very nice for the most senior levels of an organization, but what does it mean for you, the operations team? After all, you are the “boots on the ground”: the people tasked with the actual day-to-day running of the data center.

You will no doubt have first-hand experience of tools and methodologies that have been prescribed from above in the mistaken belief that they will improve efficiency, or prevent downtime, or help you manage capacity. In our experience, the majority of these actually make your job harder to do, so they eventually get left by the wayside.

Making Your Life Easier

So, how is our proposal any different? What we hope to show you in this paper is that predictive modeling using the 6SigmaDC suite’s Virtual Facility (VF) will not only fit seamlessly into your day-to-day process, but will also make your job easier and your life less stressful.

How many times have you approved a change that fit within the design rules, only to receive a call telling you that IT service has been interrupted and that you have to fix it, right now? Probably enough for you to think that it is just part of the job; it’s something that comes with the territory, right? It does not have to be. It is simply a knock-on effect of the fact that instead of managing data center availability, capacity and efficiency as three interconnected variables, your organization is treating them as three separate silos. In fact, they probably haven’t been looked at together since the original design was created at the start of the facility’s life.

Given the fast pace of change within any organization, the chances are fairly small that the IT plans put together by the design consultant bear any resemblance to the equipment that is actually installed in your facility today – there is a colossal disparity between the design and the reality. Add to this IT disparity the various energy efficiency drives which will have changed the infrastructure from the original design, and you are left trying to fit square pegs into round holes.

Adding more environmental monitoring will have helped you choose which holes to avoid and will have reduced the number of critical events, but firefighting is still a large part of your job. A large number of those fires could be avoided if only you were provided with the right information. This is precisely what predictive modeling provides.

What we intend to show in this paper, through the use of examples based on our decades’ worth of experience in the data center industry, is how predictive modeling is an essential tool in your fight against downtime. We will demonstrate how the data from a VF can provide you with crucial information that is simply not available using any other method. Finally, we will show you how our new ACE Data Center Performance Score provides a simple way to analyze, compare and communicate the effect different options have on a very complex system.

Download Here: http://www.futurefacilities.com/media/info.php?id=292

The Calibrated Data Center

It seems like a day doesn’t go by without my reading something about Software Defined Data Centers (SDDC). While nobody seems to have settled on an actual definition of what a true SDDC is supposed to do, the overall concept seems to have everybody excited. I don’t dispute that the SDDC seems a logical path for the industry to take, but I don’t see many articles quoting real sales figures, which leads me to believe that many data center operators are taking a “you go first” approach to adoption. This makes sense, since solutions advertised as “all-encompassing” tend to be somewhat confusing when a potential customer just wants to know which server is running the company’s email. While we all wait for the Rosetta Stone of SDDC, there are software applications available today that can provide real value in the areas of data center calibration and capacity planning.

Calibrating your data center is a proactive process that enables operators to fine-tune their facilities and identify potential operational issues at the component level. A service provider, for example, could use this process to maximize the sellable capacity of a facility or to provide actionable criteria within customer SLAs. The process requires both CFD and component-level modeling tools, and in recent years multiple vendors have arisen to provide this functionality. Here at Compass we use Future Facilities’ 6SigmaDC product for the CFD modeling component and Romonet’s system modeling tool for the TCO component and system-level analytics.

Calibrating a data center is necessary because no two data centers operate exactly alike (except, of course, in our case). The calibration process provides data center operators with benchmarks specific to their facility, which can then be used to determine the impact that operational actions, like moving or adding equipment on the raised floor, will have on overall site performance. The process begins during the design phase with an evaluation of the performance of multiple floor layout scenarios. The adoption of the final layout model then provides the initial benchmark standards that will be used in calibrating the facility. The calibration effort itself consists of comparing these initial benchmarks to the site’s actual performance during a series of progressive load tests conducted upon the completion of the facility’s Level 5 commissioning.

Completing the site’s commissioning first is important because it eliminates an assortment of extraneous variables that could affect the values reported during load testing. During the load tests, the site’s performance in a number of areas, including cooling-path behaviors such as the airflow from AHU fans to floor grills or from the grills to cabinets, is documented and compared to the initial modeled values to determine whether there are variances, and whether those deviations are acceptable or require corrective action. The process concludes with the establishment of the performance metrics that apply to that data center specifically.
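
In concrete terms, that comparison step is a tolerance check between modeled benchmarks and measured values at each test point. A minimal sketch follows; the sensor names, readings, and the 10% tolerance are all invented for illustration.

```python
# Illustrative calibration check: compare modeled benchmarks against
# values measured during progressive load testing, flagging deviations
# beyond a tolerance. Sensor names, readings, and the 10% tolerance
# are invented for this sketch.

TOLERANCE = 0.10  # flag deviations greater than 10% of the modeled value

modeled = {"tile A3 airflow (cfm)": 520.0,
           "rack B1 inlet (deg C)": 23.0,
           "AHU-2 supply (deg C)": 18.0}
measured = {"tile A3 airflow (cfm)": 498.0,
            "rack B1 inlet (deg C)": 26.1,
            "AHU-2 supply (deg C)": 18.2}

for point, expected in modeled.items():
    actual = measured[point]
    deviation = abs(actual - expected) / expected
    status = "OK" if deviation <= TOLERANCE else "INVESTIGATE"
    print(f"{point}: model {expected}, site {actual}, "
          f"deviation {deviation:.1%} -> {status}")
```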

Certainly the establishment of performance benchmarks for the data center is a valuable exercise from a knowledge perspective, but the real value of the calibration effort is the resulting ability for operators to continuously model the impact of future site modifications on performance. This continuous modeling capability manifests itself in more effective capacity planning. The ability to proactively analyze the impact of site modifications, such as cabinet layouts, increased power density or hot aisle/cold aisle configurations, enables important questions to be answered (and costs avoided) by determining the most effective mode of implementation before the first physical action is taken.
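
A minimal sketch of that capacity-planning loop: evaluate each candidate modification in the calibrated model, rule out any that would breach thermal limits, and rank the rest by usable capacity gained, all before any physical work begins. The scenarios, numbers, and evaluate_scenario() helper are invented for illustration.

```python
# Sketch of continuous modeling for capacity planning. evaluate_scenario()
# is a hypothetical stand-in for a run of the calibrated facility model;
# the scenarios and returned numbers are invented for illustration.

MAX_INLET_C = 27.0  # thermal limit a viable scenario must respect

def evaluate_scenario(scenario: str):
    """Return (usable kW gained, worst predicted inlet temp in deg C)."""
    canned = {
        "densify row A to 8 kW/cabinet": (96.0, 26.3),
        "add cold-aisle containment":    (40.0, 23.8),
        "add 6 cabinets in row D":       (54.0, 28.9),
    }
    return canned[scenario]

viable = []
for scenario in ["densify row A to 8 kW/cabinet",
                 "add cold-aisle containment",
                 "add 6 cabinets in row D"]:
    gained_kw, worst_inlet = evaluate_scenario(scenario)
    if worst_inlet <= MAX_INLET_C:
        viable.append((gained_kw, scenario))
    else:
        print(f"Ruled out before any physical work: {scenario}")

for gained_kw, scenario in sorted(viable, reverse=True):
    print(f"{scenario}: +{gained_kw} kW usable capacity")
```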

Aside from the practical value of using currently available software tools for calibration and continuous modeling, these tools can also help operators prepare for a software-defined future. Developing an ongoing understanding of operationally significant actions provides a foundation of knowledge that can pave the way for a more effective implementation of a “comprehensive software solution” down the road.
