Can Your Data Center Handle the Next Big Idea?

By: Dave Johnson, Executive Vice President-IT Division, Schneider Electric

Once upon a time, becoming a billion-dollar business was a unique feat. IBM, a global brand formed in 1911, took more than four decades to pass a billion dollars in revenue, reaching the mark in 1957. By then it had survived two world wars, pioneered computing and artificial intelligence, and was growing by leaps and bounds as it emerged as the world's leading technology company.

The internet changed that sort of timescale, setting a new level of expectation altogether. As business has transformed and globalized, technology has provided the platform on which imaginative entrepreneurs build the stuff of dreams.


Today, when market capitalization and valuation carry perhaps more prestige than revenue, a new generation of companies offers services ranging from ride sharing, online shopping and mobile messaging to fantasy sports and recipes with ingredients delivered to your door. Each had its genesis in the last six years and has already been valued at more than one billion dollars. In fact, the speed record for reaching a one-billion-dollar valuation now stands at just four months from founding.

I can list over 100 other companies that also reached the one-billion-dollar mark within six years of their big idea. Interestingly, behind the scenes, it is the power, capacity and reach of new digital services that not only make this all possible, but make it easier than ever before. Ideas can come in a flash of inspiration, but in this connected era they all depend on the data center.

Since the advent of the data center age, different approaches have emerged to the way we power, cool and protect IT and storage equipment. The dotcom crash taught that building for future growth had its risks, as the economies of scale on offer were quickly eroded by the operational expense of running at low load relative to capacity. In response, the modular approach was born, introducing a standardized, scalable, pay-as-you-grow solution for unpredictable data center capacity.

The modern data center has its roots in the 1950s, and for many years data centers were built using disparate building blocks and a great deal of custom engineering. But the big idea belonged to Schneider Electric, which transformed the market when it introduced InfraStruxure as the world's first fully integrated solution for data center physical infrastructure.

That was over 10 years ago. Today, Schneider has introduced another big idea: EcoStruxure for data centers, an IoT-enabled, open and interoperable system architecture and platform designed to enable data centers to scale faster and be more efficient, resilient and easier to manage.

With the fast pace of digital business and innovation that we are seeing today, it could be that we are actually underestimating where we could be in the next couple of years. Maybe flowers will be delivered by flying drones, and there’ll be dancing in the streets with VR.

Lessons Learned

Managing facility operations for large data centers certainly takes specialized skills in a range of disciplines, but the more you do it, the better you get at it. Most of the lessons we've learned fall into one of five general categories: competency; standardization; risk management; tracking and reporting; operation and maintenance costs.

In terms of competency, the main issue is that most companies have expertise that lies in areas other than managing data centers. That’s as it should be. If you’re in, say, retail, healthcare or manufacturing, your expertise lies in those areas; the data center is merely a supporting function.

But it’s an issue if you want to run the data center using internal employees, because you don’t have a large workforce to pull from. As a result, we routinely see companies with data center infrastructure management (DCIM) and other tools installed, but they’re not using them to their full extent – because they simply don’t have the appropriate expertise.

With respect to standardization, companies tend to run into trouble after mergers and acquisitions, or if they experience rapid growth. They wind up with a series of data centers, with no common set of standards in terms of how to operate them.

Whether you've got two data centers or 20, you need to share learnings among all of them. Schneider Electric's standards and procedures are best in class in part because we are diligent about sharing what we learn from operating each of the 100 or so data centers we run. We use those learnings to continually update our processes and procedures, so that when a problem occurs we have sound emergency procedures in place to follow. Those procedures should include back-out steps to follow if something unexpected happens after a data center change, to prevent the issue from getting worse.

Such procedures are closely related to the risk management topic. One of the big lessons here is to have a full-system approach to data center management. If you need to take a component out of service to perform maintenance, for example, you need to first understand the impact and dependencies of that component with respect to the rest of the data center.
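That full-system view can be captured as a simple dependency model. The sketch below is a minimal, hypothetical illustration (the component names and graph are invented, not from any real Schneider Electric tool): given a component slated for maintenance, it walks the dependency graph to list everything downstream that would be affected.

```python
# Hypothetical power/cooling dependency graph: each key feeds the items
# listed under it. Names are illustrative only.
DEPENDENTS = {
    "UPS-A": ["PDU-1", "PDU-2"],
    "PDU-1": ["Rack-01", "Rack-02"],
    "PDU-2": ["Rack-03"],
    "CRAC-1": ["Row-A"],
}

def impacted(component, graph=DEPENDENTS):
    """Return every downstream item affected if `component` goes offline."""
    seen, stack = set(), [component]
    while stack:
        node = stack.pop()
        for child in graph.get(node, []):
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return sorted(seen)

print(impacted("UPS-A"))  # ['PDU-1', 'PDU-2', 'Rack-01', 'Rack-02', 'Rack-03']
```

Running the check before a maintenance window makes the blast radius of the work explicit, so redundancy can be verified for every affected load first.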

Doing so requires a thorough understanding of the data center. For any data center we manage, Schneider Electric likes to get in on the construction phase, or as close to it as possible. That way we can gain a thorough understanding of the architectural drawings, piping, wiring and so forth – all of which is knowledge that helps mitigate the risk that goes into operating a data center.

Tracking and reporting is an area that gets overlooked far too often, leading to wasted operational costs. With proper tracking and reporting, you should be able to identify stranded IT capacity – that old rack of servers over in the corner, for example, that nobody is really sure still serves a purpose. Reclaiming that capacity can help you stave off a data center expansion by getting more out of the space you’ve already got.
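A basic stranded-capacity report can come straight from inventory and power data. The sketch below is illustrative, not a real DCIM query: the rack records, utilization threshold and idle window are all assumptions, chosen only to show the idea of flagging racks that draw little power and haven't changed in months.

```python
# Hypothetical inventory records: provisioned vs. measured power draw,
# plus days since the rack's configuration last changed.
racks = [
    {"id": "R-101", "provisioned_kw": 8.0, "measured_kw": 0.3, "idle_days": 400},
    {"id": "R-102", "provisioned_kw": 8.0, "measured_kw": 6.1, "idle_days": 12},
    {"id": "R-103", "provisioned_kw": 4.0, "measured_kw": 0.1, "idle_days": 720},
]

def stranded(racks, util_threshold=0.10, idle_threshold=180):
    """Flag racks under 10% of provisioned power with no recent changes."""
    return [r["id"] for r in racks
            if r["measured_kw"] / r["provisioned_kw"] < util_threshold
            and r["idle_days"] > idle_threshold]

print(stranded(racks))  # ['R-101', 'R-103']
```

Racks flagged this way are candidates for a reclamation review, freeing power, cooling and space before anyone signs off on an expansion.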

We've learned plenty of lessons in how to keep operation and maintenance costs down, like using condition-based and predictive maintenance to replace components only when they really need it, as opposed to when some schedule says they do. And if you effectively track your assets, you can start determining which ones require the most maintenance, and potentially save money by replacing them.
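As a concrete illustration of condition-based maintenance, consider trending a measured parameter against its baseline. The example below is a hypothetical sketch (the battery records, resistance values and 25% rise limit are assumptions for illustration): it flags a UPS battery string for service when its internal resistance climbs well above baseline, rather than on a fixed calendar schedule.

```python
# Hypothetical battery-string measurements: baseline internal resistance
# at commissioning vs. the latest reading, in milliohms.
assets = [
    {"id": "BATT-01", "baseline_mohm": 3.0, "latest_mohm": 3.2},
    {"id": "BATT-02", "baseline_mohm": 3.0, "latest_mohm": 4.1},
]

def needs_service(asset, rise_limit=0.25):
    """True if internal resistance rose more than 25% over baseline."""
    rise = (asset["latest_mohm"] - asset["baseline_mohm"]) / asset["baseline_mohm"]
    return rise > rise_limit

due = [a["id"] for a in assets if needs_service(a)]
print(due)  # ['BATT-02']
```

The healthy string is left alone; only the degrading one is scheduled, which is exactly the cost saving the condition-based approach is after.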
