Dr Reza Rooholamini, Director of Enterprise Solutions, Dell Product Group, is
an authority in the field of high-performance cluster computing: processor
clusters built from standard hardware and software building blocks. The former
professor of computer systems at the University of Wisconsin currently manages
the operating systems, cluster, database, and custom solutions teams at Dell
Computer Corporation. Dr Rooholamini spoke with Ravi Menon about the growing
role of cluster computing in the enterprise and emerging prospects for cheaper,
high-density blade servers in the data center:
How useful is a supercomputing game like top500.org on the enterprise
side, where provisioning is more balanced and practical?
There are some applications where raw speed does matter, but enterprise
systems also need to support back-office applications and more balanced
integration. Today, on the enterprise side, speed matters as much as storage
capability or availability.
We have seen industry demand for high-availability, high-performance
computing clusters shoot up alongside the evolution of SAN applications, with
clusters ranging from 8 to 4,000 nodes and always-on storage and data
retrieval systems coming into the enterprise.
Specifically, of the systems on the top500.org list, 291 are HPCC-based.
HPCCs cost approximately 10% of the price of a typical supercomputer and are
becoming a strong proof of concept. Today, we are able to offer HPCCs in
packages as corporate solutions where supercomputing is available on tap, and
where it is possible to acquire and monitor HPCC systems easily.
How are systems and CPU vendors responding to the enterprise's growing needs
for computing power?
It will continue to be a question of fault tolerance and high availability.
I remember the days when everybody was talking about realizing higher systems
availability by using more processors and chip components. It no longer makes
technical or business sense to create a machine with further processor-level
padding. Today, it makes more sense to increase availability and processing
power by increasing CPU density at the server level and scaling up as needed
by adding blade servers to manage the load at the data processing and
storage levels. It's the modular route for the data center, and it makes
perfect sense.
In the data center market, AMD, IBM, and Sun are competing with Intel by
differentiating on five system architecture elements: CPU design; memory
performance enhancement; I/O and interconnect improvements; OS code in hardware;
and improving virtualization, reliability, and power management. Where does Dell
stand?
Well, we have been moving on similar lines, though with far more
consistency. Besides I/O fine-tuning, isolated partitioning of OSs and apps,
and virtualization and reliability metrics, we are seeing demand for the
individualization of server space: we are developing the ability to
consolidate multiple box-server environments into a single piece of server
hardware, creating separate virtual machines running on different platforms.
A key boost to virtualization efforts at Dell has been our partnership with
VMware, whose virtualization software has helped us partition x86-based
workstations and servers into separate virtual machines, each containing its
own copy of the OS. Supporting Windows, Linux, and NetWare, VMware's
virtualization software resides as a layer between the hardware and the
virtual machine partitions. This helps users work with multiple operating
systems. Providing load balancing within the same machine, as well as
separating unstable testing environments from production environments, is now
a reality, driven by the growing demand for virtualization from data centers.
The focus here will be more on easy monitoring, application testing, and
analysis by consolidating physical server architecture.
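The consolidation idea described above can be sketched as a toy model:
several former one-box workloads, each carrying its own copy of an OS, are
placed onto a single physical host with finite CPU and memory. All class
names, capacities, and the first-fit placement policy below are hypothetical
illustrations, not VMware's API or Dell's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualMachine:
    name: str
    os: str          # each VM carries its own copy of the OS
    cpu: int         # virtual CPUs requested
    mem_gb: int

@dataclass
class PhysicalHost:
    name: str
    cpu: int
    mem_gb: int
    vms: list = field(default_factory=list)

    def can_host(self, vm):
        # Check remaining CPU and memory headroom on this host.
        used_cpu = sum(v.cpu for v in self.vms)
        used_mem = sum(v.mem_gb for v in self.vms)
        return used_cpu + vm.cpu <= self.cpu and used_mem + vm.mem_gb <= self.mem_gb

def consolidate(workloads, host):
    """First-fit placement of former one-box workloads onto a single host."""
    placed, unplaced = [], []
    for vm in workloads:
        if host.can_host(vm):
            host.vms.append(vm)
            placed.append(vm.name)
        else:
            unplaced.append(vm.name)
    return placed, unplaced

# Three workloads that previously each occupied a physical server,
# running different guest operating systems on one host.
workloads = [
    VirtualMachine("web", "Linux", cpu=2, mem_gb=4),
    VirtualMachine("db", "Windows", cpu=4, mem_gb=8),
    VirtualMachine("test", "NetWare", cpu=1, mem_gb=2),
]
host = PhysicalHost("server-1", cpu=8, mem_gb=16)
placed, unplaced = consolidate(workloads, host)
```

In this sketch, the isolation the interview describes (keeping an unstable
test environment away from production) corresponds to each workload living in
its own VirtualMachine entry rather than sharing one OS image.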
How can blade servers redesign future data centers?
As far as modularity and flexibility of systems go, we see huge prospects
for the 'blade' in the data center, on the configuration and management side,
where the real demand lies. Being highly modular box servers, blade servers
offer three to 10 times the density of conventional servers. They are
providing substantial improvements in management and systems integration
costs. We are now seeing the ability to gain processing density and scale up
blade server configurations at the same time, realizing huge cost savings for
the enterprise.
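The 'three to 10 times' density figure can be made concrete with some
back-of-envelope rack arithmetic. The chassis height and blade counts below
are hypothetical assumptions chosen for illustration, not Dell product
specifications.

```python
# Back-of-envelope rack density comparison (all figures are
# hypothetical assumptions, not Dell specifications).
RACK_UNITS = 42                  # standard full-height rack

# Conventional 2U rack servers: one server per 2U of rack space.
conventional_servers = RACK_UNITS // 2                 # 21 servers per rack

# Hypothetical blade chassis: 7U tall, holding 14 blades.
chassis_height_u = 7
blades_per_chassis = 14
chassis_per_rack = RACK_UNITS // chassis_height_u      # 6 chassis per rack
blade_servers = chassis_per_rack * blades_per_chassis  # 84 blades per rack

density_gain = blade_servers / conventional_servers    # 4.0x in this example
print(f"{blade_servers} blades vs {conventional_servers} 2U servers: "
      f"{density_gain:.1f}x density")
```

With these assumed numbers the gain lands at 4x, inside the 3x-10x range
quoted in the interview; denser chassis designs push the ratio higher.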
With the lower-cost Athlon becoming popular, do you see alternatives to the
Wintel platform, such as blade PCs and Linux desktops, finally gaining some
traction?
Yes, we have seen a good number of proprietary platforms in the enterprise
running on Linux, Oracle for instance, not to mention the increasing
migration of Unix workloads to Linux platforms. We currently ship Linux on our
workstations and server boxes. As a natural corollary, we will see Linux gaining
a greater presence on the desktop. Desktop Linux is right now more of an
enterprise play and we will see a lot more traction in the enterprise here
during 2005, as compared to blade PCs.
(For the full interview log on to www.dqindia.com)