Douglas Rademacher, CIO of American Power Conversion (APC), is presently
engaged with compliance and regulatory issues. But he still finds time to make
sure he can deliver benefits to the business at a lower cost by
standardizing applications, web servers and other physical infrastructure.
Rademacher, who heralds a new datacenter infrastructure product from APC called
InfraStruXure, speaks to Goutam Das of Dataquest.
Are IT organizations rethinking basic datacenter design? If so, what
has necessitated this?
Some of them are rethinking it, especially those dealing with problems of
scalability and availability. Dealing with heat densities and power
densities is also a big problem. You see a lot of CIOs moving towards blade
servers, but as they bring them into the datacenter, they can't take advantage
of the condensed form factor of blades. If they fill a rack with blades, there
is no way they can cool it.
Why are the physical and power infrastructures of datacenters in most
enterprises oversized?
In general, people don't know how to prevent over-sizing because they don't
know the solutions. They face the terrible problem of having to think 10
years into the future, which is an impossible task. To make sure they don't get
backed into a corner, people oversize and make sure they have enough power.
Typically, datacenters are either 10% utilized or 90% utilized. The people with
10% utilization rates are the ones who oversize, think of the future and put a
lot of padding in their estimates. People with 90% utilization are the ones with
little planning.
If the greatest challenge in datacenters today is to keep the rooms and
components within them cool, how do you check expensive overcooling?
It is definitely the biggest problem today because cooling today is done
using air. The biggest problem isn't generating the cold or moving the heat.
It is getting the cold air in the right place. And the right place in a
datacenter is right in front of the server. Today's datacenters typically have
raised floors. When the cold air comes out of the floor, it may bypass the
rack and circulate around the room, mixing with the hot exhaust air. So it is
a very inefficient way to actually deliver cold air to the servers.
On-demand cooling is the way to check expensive overcooling: as heat
loads appear, you apply cooling to them. Since distribution of air is a key
problem, you have to identify where the hot spots in your datacenter are. In a
typical datacenter, you don't have what is called a 'heat profile'. I might
have one rack that is very hot because it is filled with blades, while other
server racks may not be as hot. What you need is a way to deliver more
cooling, a greater volume of air, to the server.
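To give a rough sense of why a blade-filled rack is so hard to cool with room air, the standard sensible-heat formula for air can be used to estimate the airflow a given heat load demands. The sketch below is purely illustrative; the 20 kW blade-rack and 3 kW server-rack figures, and the 20°F temperature rise, are assumed values, not numbers from the interview.

```python
def required_airflow_cfm(heat_load_watts, delta_t_f=20.0):
    """Approximate the airflow (in CFM) needed to carry away a rack's heat.

    Uses the standard sensible-heat formula for air:
        CFM = BTU/hr / (1.08 * delta_T_F)
    where 1 W = 3.412 BTU/hr and delta_T_F is the air temperature rise
    across the equipment in degrees Fahrenheit.
    """
    btu_per_hr = heat_load_watts * 3.412
    return btu_per_hr / (1.08 * delta_t_f)

# Assumed, illustrative heat loads:
blade_rack_cfm = required_airflow_cfm(20_000)   # fully loaded blade rack, ~20 kW
server_rack_cfm = required_airflow_cfm(3_000)   # ordinary server rack, ~3 kW
```

Under these assumptions the blade rack needs several times the airflow of the ordinary rack, which is why cold air spilling loosely from a raised floor cannot keep up with condensed form factors.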
Are we seeing the consolidation of business processes, applications and
infrastructure only in large enterprises?
I think we are seeing consolidation all over. It is easier to manage when
you have things in one place. Physically, I can get to the systems, and the
fewer I have, the fewer I need to maintain as far as patching operating
systems or keeping applications updated is concerned.
What do CIOs need to focus on, particularly, on the application side of a
datacenter?
They need to make sure their datacenter infrastructure allows high
availability and lets them roll out applications when business needs change. A
CIO has to make sure no downtime is experienced because of power or cooling
problems. Also, the demand from the business is to roll out applications very
quickly. So if the business needs a new application within a few months, I
can't say the rollout is not possible because I don't have enough room in my
datacenter, or enough power or cooling capacity.