The Calibrated Data Center


It seems like a day doesn't go by when I don't read something about Software Defined Data Centers (SDDC). While nobody seems to have settled on an actual definition of what a true SDDC is supposed to do, the overall concept seems to have everybody excited. While I don't dispute that the SDDC seems to be a logical path for the industry to take, I don't see many articles quoting any real sales figures, which leads me to believe that many data center operators are taking a "you go first" approach to adoption. This makes sense, since solutions advertised as "all encompassing" tend to be somewhat confusing when a potential customer just wants to know which server is running the company's email. While we are all waiting for the Rosetta Stone of SDDC, there are software applications available today that can provide real value in the areas of data center calibration and capacity planning.


Calibrating your data center is a proactive process that enables data center operators to fine tune their facilities and identify potential operational issues at the component level. A service provider, for example, could use this process to maximize the sellable capacity of their facility or to provide actionable criteria within the customer SLAs. This process requires both CFD and component level modeling tools. In recent years multiple vendors have arisen to provide this functionality. Here at Compass we use Future Facilities’ 6SigmaDC product for the CFD modeling component and Romonet’s system modeling tool for the TCO component and system level analytics.

Calibrating a data center is necessary because no two data centers operate exactly alike (except, of course, in our case). The calibration process provides data center operators with benchmarks specific to their facility, which can then be used to determine the impact that operational actions, such as moving or adding equipment on the raised floor, will have on overall site performance. Calibration begins during the design phase, with an evaluation of the performance of multiple floor layout scenarios. The adoption of the final layout model then provides the initial benchmark standards used to calibrate the facility. The calibration effort itself consists of comparing these initial benchmarks to the site's actual performance during a series of progressive load tests conducted upon the completion of the facility's Level 5 commissioning.

Completing the site's commissioning first is important because it eliminates an assortment of extraneous variables that could otherwise skew the values reported during load testing. During load testing, the site's performance in a number of areas, including cooling path considerations such as the airflow from AHU fans to floor grills and from the grills to the cabinets, is documented and compared to the initially modeled values to determine whether there are any variances and whether those deviations are acceptable or require corrective action. The process concludes with the establishment of performance metrics that apply to that data center specifically.
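As a rough illustration of that comparison step, the sketch below checks measured load-test results against modeled benchmarks and flags deviations outside a tolerance band. The metric names, values and tolerances are hypothetical, not taken from any particular facility or toolset.

```python
# Illustrative calibration check: compare modeled benchmarks against measured
# load-test results and flag deviations that exceed an acceptable tolerance.
# Metric names, values and tolerances are hypothetical examples.

MODELED   = {"ahu_supply_airflow_cfm": 86000, "grill_airflow_cfm": 68000, "max_rack_inlet_f": 85.0}
MEASURED  = {"ahu_supply_airflow_cfm": 83500, "grill_airflow_cfm": 66200, "max_rack_inlet_f": 88.5}
TOLERANCE = {"ahu_supply_airflow_cfm": 0.05,  "grill_airflow_cfm": 0.05,  "max_rack_inlet_f": 0.03}

def calibration_report(modeled, measured, tolerance):
    """Return (metric, relative deviation, within tolerance?) for each benchmark."""
    report = []
    for metric, model_value in modeled.items():
        deviation = abs(measured[metric] - model_value) / model_value
        report.append((metric, deviation, deviation <= tolerance[metric]))
    return report

for metric, deviation, ok in calibration_report(MODELED, MEASURED, TOLERANCE):
    print(f"{metric}: {deviation:.1%} deviation -> {'OK' if ok else 'INVESTIGATE'}")
```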

Certainly the establishment of performance benchmarks for the data center is a valuable exercise from a knowledge perspective, but the real value of the calibration effort is the resulting ability for operators to continuously model the impact of future site modifications on performance. That continuous modeling capability manifests itself in more effective capacity planning. The ability to proactively analyze the impact of site modifications, such as changing cabinet layouts, increasing power density or altering hot aisle/cold aisle configurations, enables important questions to be answered (and costs avoided) by determining the most effective mode of implementation before the first physical action is taken.

Aside from their practical value, currently available software tools for calibration and continuous modeling also give operators a way to prepare for a software-defined future. Developing an ongoing understanding of the actions that affect operations provides a foundation of knowledge that can pave the way for a more effective implementation of a "comprehensive software solution" down the road.


How Predictive Modeling can work with DCIM to reduce risk in the Datacenter


by Dave King

The human race has acquired an insatiable demand for IT services (or rather, the 35% of it with access to the internet has), services that have to be available 24 hours a day, seven days a week.

As this demand has grown, data centers have evolved to become either the place where all revenue is generated, or the place that enables all revenue generation for a business.  Just as I am writing this, an advert has popped up on LinkedIn for a Network Infrastructure Engineer for Greggs the Baker (for our international readers, Greggs are a high street baker; they sell cakes, pasties and various other tasty things). That's right, the baker needs to employ someone well versed in Linux, Cisco and Juniper!

Back in the old days, operators could fly by the seat of their pants, using gut instincts and experience to keep things running.  A little bit of downtime here and there wasn’t the catastrophic, career-ending event it is today. But, as the data center has undergone its transformation into the beating heart of the digital business, the pressure on those poor souls looking after the data center environment to keep it up 24/7 has gone through the roof.

In response to this, managers have invested heavily in monitoring systems to understand just what the heck is going on inside these rooms.  Now armed with a vast amount of data about their data center (interesting question: how many data centers’ worth of data does data center monitoring generate?), and some way to digest it, people are starting to breathe a little easier.

But there's still a nervous air hanging over many operations rooms. Like the bomb disposal expert who is fairly sure it's the green wire, but who is still going to need a new pair of underwear, people are left watching those monitor traces after any change in the data center, hoping they don't go north.


Meet Bob. Bob works in data center operations for MegaEnterpriseCorp Ltd. It's his job to approve server MACs (moves, adds, changes), and he is judged on two criteria:

  1. No increase in the PUE value for the facility
  2. No loss of availability under any conditions, barring complete power failure.

The boss also dictates that, as long as there is capacity in the facility, a MAC must be approved unless Bob can prove that it will fail either criterion.

If a MAC fails on criterion 1 or 2, or if Bob says no to his boss, he risks a pink slip.  Bob has at his disposal the most comprehensive DCIM monitoring solution you can imagine.  What would you do in this situation?

Let's think about this for a minute. Say the equipment to be installed has a fan that sounds more like a jet engine than a server; Bob has a gut feeling that it's going to cause all sorts of problems. How could he prove that it would fail either criterion? Thanks to his all-singing, all-dancing DCIM stack, he has all the information he could want about the environment inside the data center right now. It's saying that all looks fine, mostly because the horrible jet server hasn't been installed yet.

The only way to find out what kind of carnage that server may wreak on the environment is to install it, switch it on and watch the monitor traces in trepidation to see what happens.  If the PUE doesn’t change then great, but how much headroom have you lost in terms of resilience? The only way to find out? Fail the cooling units and see what happens…


The more astute among you will have noticed that this is a lose-lose situation for poor old Bob.  He can't stop any deployments unless he can prove they will reduce availability or have an impact on PUE, but he can't prove they will cause problems without making the change and seeing what happens! Catch-22!

The problem is that all the changes are being made to the production environment; there is no testing-ground data center to make mistakes in – it's all happening live!  And that's why everyone is on the edge of their seat, all the time.  In many other industries, simulation is used in situations like this – where physical testing is impossible or impractical – to allow people to see and analyze design changes and what-if scenarios.  There is no reason the data center industry should be different.


Let’s go back to Bob, but this time we’ll give him a simulation tool in addition to his DCIM suite. For each proposed MAC, he sets up a model in the simulation tool using the data from the DCIM system and then looks at the simulated PUE and availability. He can fail cooling units in the simulation without any risk to either the IT or himself.  If PUE goes up, or availability goes down, Bob can print out the simulation results as proof, say no to his boss and keep his job.
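As a minimal sketch of that workflow (the simulation function below is a hypothetical stand-in rather than a real 6SigmaDC or DCIM API, and inlet temperature is used as a crude proxy for availability risk), Bob's decision reduces to a simple check over the simulated scenarios:

```python
# Illustrative sketch of Bob's MAC-approval check. run_simulation() is a
# hypothetical stand-in for a CFD/simulation run, not a real vendor API.

BASELINE_PUE = 1.6   # hypothetical current PUE
MAX_INLET_F = 90.0   # hypothetical allowable server inlet temperature

def run_simulation(mac, failed_cooling_units=0):
    """Pretend CFD run: returns simulated PUE and worst-case inlet temp (F)."""
    # Hard-coded hypothetical results, just for the example.
    table = {
        ("quiet-server", 0): (1.58, 82.0), ("quiet-server", 1): (1.59, 88.0),
        ("jet-server", 0): (1.60, 87.0),   ("jet-server", 1): (1.64, 95.0),
    }
    return table[(mac, failed_cooling_units)]

def approve_mac(mac, redundant_units=1):
    """Approve only if PUE does not rise and no inlet exceeds its limit,
    including under cooling-unit failure scenarios (criterion 2)."""
    for failed in range(redundant_units + 1):
        pue, worst_inlet = run_simulation(mac, failed_cooling_units=failed)
        if pue > BASELINE_PUE or worst_inlet > MAX_INLET_F:
            return False
    return True

print(approve_mac("quiet-server"))  # True  -> safe to deploy
print(approve_mac("jet-server"))    # False -> Bob has his proof
```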

As a senior consultant engineer who has been parachuted into troubled data centers the world over, and who has had the opportunity to advise lots of Bobs over the years, I am still amazed that the uptake of the obvious solution is not more widespread. The case for simulation is compelling, so why has the adoption of simulation in the data centre industry been so slow? A lack of awareness is certainly a factor, but simulation has also been seen by many as unnecessary, too complicated and inaccurate.  Let's address these points…

While the benefits of simulation have always been there to be had, it is certainly true that in the past there was an argument for placing it on the “nice to have” pile. Thermal densities were much lower and over-engineering more acceptable. But, as data centre operations have been forced by business to become leaner, the operational envelope is being squeezed as tightly as possible. The margin for error is all but disappearing, and having the ability to test a variety of scenarios without risking the production environment places organizations at a big advantage.

Simulation tools can be complicated, and it would be wrong to say otherwise. But this complexity is an unfortunate consequence of the deliberate intention to make general-purpose tools versatile. Here at Future Facilities, we've spent 10 years doing the exact opposite: building a simulation tool focused on a single application, the data center. This tool is aimed at busy data centre professionals, not PhD students who have hours to spend fiddling with a million different settings.  This means that modelling a data center is now as simple as dragging and dropping cooling units, racks and servers from a library of ready-to-use items. Take a free trial and have a go yourself!

That just leaves us with the question of accuracy.  The accuracy of CFD technology has already been proven – the real problem comes down to the quality of the models themselves. Make a rubbish model and you’ll get meaningless results.  Many in the data centre industry have been burned in the past by simulation used badly, but this is a ‘people problem’ – operator error – not an issue with the technology!  If you’re going to use simulation, the model has to represent the reality and must be proven to do so before it’s used to make operational changes. This process of calibrating the model ensures that agreement between simulation results and physical measurements is reached (read this paper to find out how the calibration process works).  If someone is selling you simulation and isn’t willing to put their money where their mouth is, be very, very wary.

There's just room here for me to say a few words on "real-time CFD" or 'CFD-like' capabilities – the latest strap-lines for a number of DCIM providers. We'll blog about this separately in the future, but let us be very clear: there is, at present, no such thing for data centers. It is marketing hype.  When people talk about real-time CFD they really mean one of two things: 1) they use monitoring data to draw a picture that looks like the output of a simulation, but with zero predictive capability, or 2) they use a type of CFD known as potential flow, which trades accuracy for speed by making a lot of simplifying assumptions.  Renowned physicist, bongo player and all-round good guy Richard Feynman considered potential flow so unphysical that the only fluid to obey its assumptions was "dry water".

So the questions you have to ask yourself are: do I want a tool that can actually predict, and do I want a tool that can predict accurately? A full CFD simulation (typically RANS) may not be real-time, but it is the only way to get the real answer!


DOES THE DATACENTER INDUSTRY NEED A CAPACITY GOD?

Published on 18th June 2014 by Penny Jones

The divide between facilities and IT teams within the data center created some lively debate this week at DatacenterDynamics Converged Santa Clara. This time the conversation was around unused capacity, cost and risk. Judging by the thoughts of those working here on the US West Coast, the overall responsibility for managing these areas is a real ‘hot potato’ that is turning efforts to drive efficiency and reduce costs to mash.

But it appears to be the fault of no single team or technology. What it really boils down to (not intending to put another potato pun out there!) is a lack of education, and the absence of any obvious candidate to assume such a role. It seems IT teams have enough on their plate without having to learn facilities, and facilities the same regarding IT. And finance, well, they often have other parts of the business to think about, despite paying the power bill. But when things go wrong, this hot potato can cause havoc for all teams involved.

On the evening before the event, a roundtable organized by predictive modeling vendor Future Facilities, hosted by industry advisor Bruce Taylor and attended by a number of industry stalwarts and a handful of newer industry members, discussed hindrances to capacity planning. Most agreed that the main reason we have stranded capacity in the data center is that the industry has created so many silos – teams working on individual projects inside the facility – that there is rarely someone tasked with taking on the bigger picture, looking at the farm from the top of the mountain.


Air flow is complicated, and Future Facilities argues that predictive modeling is the only science that can really help in deploying, and then maintaining, levels of efficiency as data center demands – and equipment – change.


Dr. Jon Koomey, research fellow at the Steyer-Taylor Center for Energy Policy and Finance at Stanford University, said that only when you know the physics of the situation inside the data center, and the effect of changes you are likely to make in future, can you remove the problem of stranded capacity and, in turn, drive better levels of efficiency through reduced power use.

“The goal, ultimately, is to match energy services demanded with those supplied to deliver information services at the total lowest cost. The only way to do that is to manage stranded capacity that comes from IT deployments that do not match the original design of the facility,” Koomey said.

He likened the situation today to Tetris, drawing on the analogy of the different shaped blocks in the game.

“IT loads come in to the facility in all different shapes, leaving spaces. Those spaces are capacity, so that 5MW IT facility you think you have bought will typically have 30% to 40% unused.”

Despite the obvious draw of making maximum use of your data center, many attendees agreed that predictive modeling, and even data center infrastructure management (DCIM) tools that offer more clarity on the individual situation in real time, can be a difficult sell. Once again, the hot potato (of no one being tasked with complete responsibility) often gets in the way.


Mark Thiele, EVP of data center technology at Switch, who has also worked for ServiceMesh, VMware and Brocade, said in most cases there is not a single person in the data center with a vision or understanding of the facility’s entire operations – from design and build to IT, facilities and even economics.

“Today 75 to 80% of all data centers don’t have a holistic person that knows and understands everything about the data center, so the target opportunity for [sale of] these tools is often someone that has no responsibility for managing this in their job description,” Thiele said.

“We also find that a majority of facilities today are still bespoke – they are designed to be repaired after they are created. These are serious thresholds that have to be overcome in the market on the whole.”

But this is a situation the industry has created for itself, according to dinner host and Future Facilities CEO Hassan Moezzi.

“If you go back to IBM, 40 years ago it dominated the mainframe market. At the time, the concept of IBM having blank cheque for customers was a really painful thing but everyone accepted that because it was the only way. IBM built the power, cooled the data center and provided the hardware and software and if anything went wrong with the data center it was all put back on to IBM,” Moezzi said.

Today we have the silos and distributed systems we have asked for. Anyone can buy a computer and plug it into a wall. The shackles have gone, and so too has that one throat to choke – or to sell capacity planning systems to.

Continue Reading this article here: http://www.datacenterdynamics.com/blogs/penny-jones/does-data-center-industry-need-capacity-god 


Future Facilities Hosts Executive Dinner with Jonathan Koomey and Bruce Taylor

An Evening with Dr. Jonathan Koomey & Bruce Taylor

On June 16, 2014, in Santa Clara, CA, Future Facilities hosted top executives from IT companies in the Bay Area for a conversation facilitated by Dr. Jonathan Koomey of Stanford University and Bruce Taylor of Data Center Dynamics. A unique and intimate evening of networking, dinner and drinks, the event featured a lively discussion of some controversial views on risk in the data center and its impact on the business. Attending industry experts and analysts discussed ways to use computer modeling to analyze and quantify the risks and costs of operational flexibility within data centers, with the goal of moving enterprise IT operations from being a cost center to becoming a cost-reducing profit center.


451 Research Reports on the Importance of Future Facilities’ ACE Metric for Data Centers


Leading analyst firm explores the impact of Future Facilities’ measurement for the Availability, Capacity, and Efficiency of data center operations

SAN JOSE, Calif.–(BUSINESS WIRE)–Future Facilities, a leading provider of data center design and operations management software, today announced the release of a report by 451 Research on the ACE (Availability, Capacity, Efficiency) score and how this metric can be used to assess the performance and quantify the risk of design and operational decisions.

“We are delighted to see ACE featured by 451 Research”

In the report titled “Future Facilities says ‘ACE it’ to assess the real impact of datacenter changes,”  Andy Lawrence, Research Vice President for Datacenter Technologies (DCT) & Eco-Efficient IT at 451 Research, discusses the ACE score, developed by Future Facilities. ACE was created to assess and visualize the three critical indicators of a data center’s performance. The report by 451 Research explores the place of ACE among other industry metrics, like PUE, examines the merit of ACE at the business level for facilities, and discusses how the score allows both technical and business managers to understand the effect of different decisions.

According to the report, “The key to ACE is that it enables managers to see how their data centers are scored against three important parameters (which are often competing) in terms of design goals or impact: availability, capacity and efficiency… its adoption may encourage wider use of models to simulate and improve datacenter performance.”
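The report does not spell out how the three parameters are combined into a score, so the snippet below is a purely illustrative sketch of tracking availability, capacity and efficiency as competing indicators for a proposed change; it is not Future Facilities' actual ACE methodology.

```python
# Purely illustrative: tracking availability, capacity and efficiency as three
# competing sub-scores for a proposed change. This is NOT the actual ACE
# methodology, whose scoring details are not given in the press release.

from dataclasses import dataclass

@dataclass
class PerformanceSnapshot:
    availability: float  # e.g. fraction of IT load resilient to a cooling failure
    capacity: float      # fraction of design IT capacity actually usable
    efficiency: float    # e.g. design PUE divided by achieved PUE (<= 1.0)

def compare(before: PerformanceSnapshot, after: PerformanceSnapshot) -> None:
    """Show how a proposed change moves each of the three indicators."""
    for name in ("availability", "capacity", "efficiency"):
        b, a = getattr(before, name), getattr(after, name)
        print(f"{name:>12}: {b:.2f} -> {a:.2f} ({a - b:+.2f})")

# Hypothetical example: a dense IT deployment gains capacity but erodes
# availability and efficiency, which a single metric like PUE would miss.
compare(PerformanceSnapshot(0.98, 0.70, 0.90),
        PerformanceSnapshot(0.93, 0.80, 0.86))
```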

“ACE can help managers gain relatively quick and accurate insight into the status and performance of their datacenters,” said Andy Lawrence, Analyst at 451 Research. “ACE, as now outlined by Future Facilities, could be used in a very practical way to make data center decisions.”

“We are delighted to see ACE featured by 451 Research,” said Sherman Ikemoto, Director, Future Facilities North America. “The data center is meant to be a flexible IT platform to support the business. But, operational flexibility comes at a cost. For the first time, operators can quantify this cost with ACE, and do so predictively. ACE helps operators avoid mistakes and enable better business level decision making. Before ACE, operators were in the dark about the risks being incurred within their data centers.”

To read the full 451 Research report, please visit:
https://451research.com/report-short?entityId=81230

To learn more about ACE, please visit:
http://www.futurefacilities.com/solutions/ace/ace_assessment.php

About Future Facilities

For nearly a decade, Future Facilities has provided software and consultancy to the world's largest data center owner-operators and to leading electronics designers. The company, which is privately funded, optimizes data center utilization through continuous modeling. In doing so, it has saved its customers millions of dollars. Innovative and progressive, Future Facilities is today unique in the marketplace; it is the only company providing scientifically sound answers to the 'what ifs' that have for so long been impossible to answer with real confidence.

Additional information can be found at http://www.futurefacilities.com.

About 451 Research

451 Research, a division of The 451 Group, is focused on the business of enterprise IT innovation. The company's analysts provide critical and timely insight into the competitive dynamics of innovation in emerging technology segments. Business value is delivered via daily concise and insightful published research, periodic deeper-dive reports, data tools, market-sizing research, analyst advisory, and conferences and events. Clients of the company – at vendor, investor, service-provider and end-user organizations – rely on 451 Research's insight to support both strategic and tactical decision-making. The 451 Group is headquartered in New York, with offices in key locations including Boston, San Francisco, Washington DC, London, Seattle, Denver, Sao Paulo, Dubai, Singapore and Moscow.


Future Facilities Newsletter


In This Issue
451 Research Group Paper
Is Your Data Center Compromised?
ACE Case Study
What is a Valid Data Center Model?
Upcoming Webcasts
June 17, 2014: Mission Critical Magazine – Thermal Efficiency Results in Greener Data Center Containment
June 25, 2014: BrightTALK – ACE: A new approach to scoring data center performance
July 8, 2014: Battle of the Brothers – 6SigmaRoom vs 6SigmaRoomLite
July 10, 2014: ACE – A new approach to scoring data center performance
Upcoming Events
June 12, 2014: DCD Madrid
June 17, 2014: Data Center Dynamics San Francisco Bay Area – Santa Clara, CA
June 28 – July 2, 2014: ASHRAE Annual Conference, Seattle, WA
July 8, 2014: DC Transformation Conference, Manchester University
Quick Links
Future Facilities
Design / Troubleshooting
ACE Assessments
Operational Management
Hardware Design
What We Do Best
Optimize Data Center Utilization through Continuous Modelling

To have a comprehensive DCIM solution, you must integrate a continuous modeling process with your monitoring, asset management, etc.

Adopt a sustainable approach without compromise to operation and reliability.

Datacenter Case Study – Minimizing lost IT loading capacity through the virtual facility approach


by Sherman Ikemoto

The initial capital cost of a data center facility runs anywhere from $10 million to $30 million per megawatt of IT capacity. Despite these high costs, the average data center strands between 25% and 40% of its IT loading capacity through inefficient data-center equipment layout and management. As such, substantial financial losses are routinely incurred through lost IT loading capacity. Put another way: in practice, four typical data centers are needed to provide what could be provided by three data centers with a more optimal design.

This case study shows how a virtual facility approach that leverages proactive mathematical simulation of data-center thermodynamic properties can minimize lost capacity far more effectively than a traditional iterative, multi-stage data center deployment methodology. The paper's economic analysis shows the extent to which the virtual facility approach allows data center operators to realize the original operational intent and provide for the full intended lifespan of the data center.
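To make those headline figures concrete, here is a minimal worked sketch using the capital-cost and stranding ranges quoted above; the 1 MW facility size is chosen purely for illustration.

```python
# Worked example using the figures quoted above: $10M-$30M per MW of IT
# capacity and 25%-40% of that capacity stranded in a typical facility.
# The 1 MW facility size is purely illustrative.

capex_per_mw = (10e6, 30e6)       # capital cost range, $ per MW of IT capacity
stranded_fraction = (0.25, 0.40)  # typical stranded capacity range
it_capacity_mw = 1.0              # illustrative facility size

for capex, stranded in zip(capex_per_mw, stranded_fraction):
    wasted = capex * it_capacity_mw * stranded
    print(f"At ${capex/1e6:.0f}M/MW with {stranded:.0%} stranded: "
          f"${wasted/1e6:.1f}M of capital delivers no IT load")

# The 'four data centers to do the work of three' claim corresponds to the
# low end of the range: with 25% stranded, each facility delivers 0.75 of its
# design capacity, and 4 * 0.75 = 3 facilities' worth of usable IT load.
```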

Background

Full utilization of IT loading capacity requires an understanding of the interrelationships between a data-center’s physical layout and its power and cooling systems. For example, a lack of physical space might cause servers to be positioned in an arrangement that is non-optimal from a cooling perspective. Indeed, cooling is the most challenging element to control, as it is the least visible and least intuitive. Complicating matters, airflow management “best practices” are simply rules of thumb that can have unintended consequences if not applied holistically. The virtual facility approach provides the basis for this holistic methodology when modifying the data center.

The Virtual Facility approach is a Predictive Modeling solution that uses a three-dimensional computer model of the physical data center as the central database and communication platform. The Virtual Facility integrates space, power and cooling management by combining 3D modeling, computational fluid dynamics (CFD) analysis and power system modeling in a single platform. These systems simulate data center performance and provide 3-D visualizations of IT resilience and data center loading capacity. 6SigmaFM is the Predictive Modeling product from Future Facilities.

Methodology

In this case study, the data center of a large American insurance company is analyzed to compare the relative merits of using a straightforward CFD analysis system to isolate and correct thermal issues as they occur vs. using a comprehensive predictive DCIM solution to predict and avoid thermal problems before IT deployments and modifications are made.

Three successive IT deployment modifications are discussed in the context of the insurance provider’s data center. Methodologies for leveraging the predictive power of the 6SigmaDC suite of software to maximize IT loading capacity and optimize efficiency are discussed.

The Data Center

The data center considered here is an 8,400 sqft facility with a maximum electrical capacity of 700 kW available for IT equipment and 840 kW available for cooling with N+1 redundancy. 4kW cabinets are arranged in a hot-aisle/cold-aisle (HACA) configuration. Cooling comes from direct expansion (DX) units with a minimum supply temperature of 52°F. The data center’s IT equipment generally has a maximum inlet temperature of 90°F.
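As a back-of-the-envelope sketch of the design envelope these figures imply (the cabinet count and densities below are derived from the stated numbers, not taken from the original study):

```python
# Back-of-the-envelope figures derived from the facility specs above.
# The cabinet count is a derived illustration, not a number from the study.

it_capacity_kw = 700        # electrical capacity available for IT
cooling_capacity_kw = 840   # cooling capacity with N+1 redundancy
cabinet_design_kw = 4       # design load per cabinet (HACA layout)
floor_area_sqft = 8400

max_cabinets = it_capacity_kw // cabinet_design_kw
print(f"Cabinets at design load: {max_cabinets}")                                  # 175
print(f"IT power density: {it_capacity_kw * 1000 / floor_area_sqft:.0f} W/sqft")   # ~83 W/sqft
print(f"Cooling-to-IT capacity ratio: {cooling_capacity_kw / it_capacity_kw:.2f}") # 1.20
```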

Iterative Approach

This data center was rolled out in three distinct deployments. All three deployments were planned at once, and CFD analysis was used only for the initial planning. The intent of deployments was to maximize the data-center IT load and minimize energy consumption. According to initial plans, the data center lifespan was to be ten years – in other words, the data center facility would be sufficient to supply the anticipated growth in demand for computational services for a decade.

First deployment. Upon the first deployment of IT equipment in the facility, some parameters met expectations. The facility was at only 25% of maximum heat load, and no rack consumed more than 2.59 kW of electricity (far below the recommended 4kW limit). Despite this, equipment was at risk because certain network switches were ejecting waste heat sideways, increasing rack inlet temperature (RIT) to borderline levels.

To mitigate the problem, equipment placement within the racks was staggered to lower RIT. Keeping the equipment within the same racks reduced server and switch downtime, but the reduced equipment density caused by the non-optimal placement represented a deviation from operational intent, resulting in stranded capacity.

Second deployment. During the second wave of IT equipment additions, various servers and switches were introduced (again, in a staggered configuration) and the heat load of the data center increased to 45% of capacity — within the designers’ expectations. However, contrary to expectations, equipment at the top of many racks was overheating due to waste heat recirculation. The recirculation problems resulted from a failure to consider not only equipment placement, but also equipment power density. Moving certain devices to the bottom of each rack mitigated the problem, but as with the first deployment, more capacity was stranded.

Third deployment. During the final IT deployment, the IT load in kW reached 65% of capacity. Afterward, internal recirculation was observed among racks two rows apart, and blanking panels were suggested as a possible fix. Counter-intuitively, however, the blanking panels actually increased RIT, and they were removed. As a result, no additional equipment could be safely added, and roughly a third of the data center's capacity was left unusable.

Virtual Facility Approach

Some of the problems encountered in a trial-and-error approach to data-center design can be avoided through the virtual facility approach, which leverages a continuously maintained 3-D mathematical representation of the data center's thermal properties to anticipate problems before they happen.

As stated above, after the third deployment of IT equipment, the data center was operating at 65% of maximum heat load. 354 kW of heat was being removed by air conditioning units (ACUs) with a total capacity of 528 kW in an N+1 configuration. Of the 354 kW of heat load, 286 kW came specifically from IT equipment. The highest cabinet load was 7.145 kW, and the highest inlet temperature was 86.5°F. Cooling airflow stood at 86,615 cubic feet per minute (cfm), and airflow through the grills was 79.5% of maximum.

6SigmaRoom was used to calibrate the initial CFD model to reflect the actual airflow patterns. CFD analysis from 6SigmaRoom indicated that strategic placement of blanking plates would decrease inlet temperatures sufficiently to allow an increase of heat load to 92% of capacity without risk of equipment damage. The end result is a potential increase in the useful life of the data center.
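As a rough sketch of what that improvement means in capacity terms, the calculation below uses only the 65% and 92% utilization figures; the study does not state explicitly which capacity baseline those percentages refer to, so the kW figure at the end is an assumption based on the 700 kW of IT electrical capacity.

```python
# Rough sketch of the capacity recovered by the calibrated-model fix.
# Percentages are taken from the study; the baseline they are measured against
# is not stated explicitly, so the result is first expressed as a fraction.

before = 0.65   # usable fraction before the blanking-plate changes
after = 0.92    # usable fraction predicted by the calibrated CFD model

recovered = after - before
print(f"Stranded capacity falls from {1 - before:.0%} to {1 - after:.0%}")
print(f"Recovered headroom: {recovered:.0%} of design capacity")

# If the 700 kW IT electrical capacity is taken as the baseline (an assumption),
# that recovery is worth roughly 0.27 * 700 kW ≈ 189 kW of additional IT load.
```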

Conclusion

Because airflow patterns become less predictable as the IT configuration builds over time, it is recommended that CFD be used continuously and as a component of IT operations, and not just during the initial design phase of a data center.

The virtual facility approach bridges the gap between IT and facilities so that personnel who are experts with servers and switches can communicate with personnel who are experts in HVAC and electrical systems. Systems like 6SigmaRoom validate mechanical layout and IT configuration, while 6SigmaFM provides predictive DCIM, capacity protection, and workflow management. Finally, the data center predictive CFD modeling capabilities included in the 6SigmaDC suite have the power to detect potential problems resulting from poor airflow management, floor-tile layout, and equipment load distribution.
