Swaminathan was contented with the live election coverage on his favorite
news channel, as the anchorperson effortlessly navigated through various
cities, speaking to reporters for the latest updates.
But little do viewers like Swaminathan realize the amount of computing
power that helps news broadcasting companies manage the enormous number
of live streaming videos and integrate them into a desktop environment that
aids the anchorman.
The anchorman sits with a TFT monitor in front of him, with multiple windows
from various news bureaus on stand-by. At the click of a mouse he switches on
air between Chennai and Chhattisgarh, all thanks to High Performance Computing (HPC).
HPC is increasingly becoming a global standard for processing terabytes of
data in the blink of an eye. Relatively new in India, HPC has been the exclusive
domain of the R&D, scientific, engineering, broadcasting, and medical communities.
Many such visual-data-intensive enterprises term high-end computing life
critical, since computing in this sphere often opens up an entirely new premise.
Take the case of recent research conducted by neuroscientists at the University
of California, Los Angeles (UCLA) and the University of Queensland, Australia.
Using powerful imaging analysis techniques, the scientists created the world's
first three-dimensional maps depicting how Alzheimer's (a debilitating
brain disease) systematically eats away the human brain.
According to Paul Thompson, assistant professor of neurology, David Geffen
School of Medicine, UCLA, "For the first time, the study unraveled
Alzheimer's progression in living patients. We were stunned to see a spreading
wave of tissue loss. Initially confined to memory areas, this tissue loss moved
across the brain like a wildfire, destroying more and more tissue as it
progressed. The study opened up a totally new premise in managing Alzheimer's."
But to do that, the researchers had to depend on huge computing power. For
instance, they took MRI (Magnetic Resonance Imaging) scans of healthy and
diseased patients over a period of two years and acquired around 60,000
scanned points into a system to compare the impact of Alzheimer's on brain
tissue. The dramatic visual revelations brought to the fore that Alzheimer's
patients lost 5.3% of the brain's gray matter, as against a 0.9% gray-matter
decline in healthy individuals. The entire time-lapse video was made possible
by Reality Monster graphics technology on a Silicon Graphics Onyx
visualization system.
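To get a feel for the kind of number-crunching involved, here is a minimal Python sketch of the comparison step: measuring average gray-matter loss across tens of thousands of matched scan points. The arrays and loss values are simulated stand-ins, not the UCLA data or pipeline.

```python
# A minimal sketch (not the UCLA pipeline): comparing gray-matter
# measurements at ~60,000 matched scan points between two MRI sessions.
# All values are simulated stand-ins for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_points = 60_000  # the study compared around 60,000 scanned points

# Hypothetical gray-matter density per point: baseline vs. two years later
baseline = rng.uniform(0.5, 1.0, n_points)
followup = baseline * (1 - rng.uniform(0.0, 0.11, n_points))  # simulated loss

# Average percentage of gray matter lost across all points
loss_pct = 100 * (baseline - followup).sum() / baseline.sum()
print(f"Mean gray-matter loss: {loss_pct:.1f}%")  # study: 5.3% patients, 0.9% controls
```

On real data, each of those points comes from a registered 3D scan, which is where the heavy computing lies.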
The World of HPC
HPC is a concept that defies a singular definition. But the most popular one
accepted by the industry is the one given by William Blake, VP, HPC Group,
before the US House of Representatives Science Committee. According to Blake,
"HPC is the use of computers to produce information by simulating the behavior
of products and processes, or by algorithmic analysis of very large quantities
of data. This insatiable demand for computational performance in all its
dimensions drives the development of ever faster and more powerful computing
machines, and HPC is the motor that drives the evolution of computer technology."
HPC is a world beyond the realm of traditional computing. It is an
extremely domain-intensive space that uses high-end hardware and path-breaking
software to simulate real-life situations. In layman's terms, HPC takes a user
to a place or situation without going there physically. A classic case is the
professional flight simulator programs that train commercial and combat
pilots on various flight scenarios in a simulated cockpit on the ground. Pilots
gain deep insight and rehearse risky maneuvers again and again until they reach
a level of proficiency that carries over to real-life scenarios. This is just one
small front-end application of high performance computing. The bigger picture
comes from very niche industry domains.
Says Avinash Fotedar, director, marketing, Silicon Graphics India,
"Technical applications like weather and climate modeling, automotive crash
analysis, computational fluid dynamics analysis and various computer-based
simulations have over the years created the high performance computing market
because these applications demand high levels of computing power."
Silicon Graphics is one of the pioneers of HPC and clearly one of the
leaders in the visualization segment. A major push to the HPC market came
with the launch of its Altix 3000 series of servers, which made the HPC
platform available to a much larger user base, as the two major components
of the system, the Intel Itanium processor and 64-bit Linux, considerably
reduce the total cost of ownership. But Silicon Graphics does have strong
competition from vendors like Sun Microsystems and HP. Sun is also one of
the acknowledged leaders in the HPC space.
Says Anil Valluri, director, systems engineering, Sun Microsystems India,
"To put it simply, HPC brings to the table productivity, profitability, and
the highest-quality research. In today's competitive business environment one
needs a robust, flexible infrastructure that can scale horizontally and
vertically to provide maximum compute power and maximum data flow, all the way
from the R&D phase through production. You also need to prepare for the
future by laying the groundwork for a web-based, Grid Computing model."
Valluri's observation becomes more important when we look at the current
business dynamics. Driven by huge competitive pressures, organizations across
the world are today looking to implement HPC via Web Services, or an
"information utility" model using Grid Computing. "This approach
enables collaboration and uses the Internet to deliver computing and data
resources where they are needed, when they are needed. For example, in the
automotive or manufacturing area, this will allow them to reduce errors,
increase product quality, and bring products to market faster," adds
Valluri.
HPC: The Impact
Today many large IT vendors have identified HPC as a high-growth segment.
Beyond the niche segments, many industry analysts claim that HPC can be
adopted by any enterprise. They point out that after years of customer
acquisition, many enterprises' IT infrastructures are choked with data; to
free them, companies need boxes that can compute and at the same time deliver
revenue optimization on their investments. So in a way HPC is ending the
traditional 'just add another server' attitude. Instead, the panacea HPC
advocates is to pool all the servers and allocate application bandwidth:
high-end design applications get the maximum bandwidth, while plain data
applications get just the bandwidth that is sufficient. What we are seeing
through HPC is a cutting down of unused, spillover computing power and an
optimization of the IT infrastructure's overall efficiency.
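A toy Python sketch of that "pool the servers, allocate by priority" idea follows; the server capacities, application names, and priority weights are invented for illustration, not taken from any vendor's product.

```python
# Toy sketch of the "pool the servers, allocate application bandwidth"
# idea. Server capacities, application names, and priority weights are
# invented for illustration, not taken from any vendor's product.
servers = {"db01": 32, "db02": 32, "app01": 64}   # CPU cores per box
pool = sum(servers.values())                      # consolidated capacity: 128

weights = {"design_app": 5, "analytics": 3, "data_app": 1}  # priorities
total_weight = sum(weights.values())

# High-end design work gets the lion's share; plain data apps get enough
allocation = {app: pool * w / total_weight for app, w in weights.items()}
for app, cores in allocation.items():
    print(f"{app}: {cores:.1f} cores")
```

The point is the shift in unit of planning: capacity belongs to the pool, and applications draw shares of it rather than owning machines.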
Says Valluri, "In today's business environment, the biggest IT
challenge is optimizing business productivity. Demand is growing rapidly on host
computers to communicate better with other computers. The solution lies in
parallel computing that enables applications to run on many clusters."
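The cluster-level parallelism Valluri refers to can be illustrated with a minimal Python sketch: one large computation split across worker processes. A real cluster would use MPI or a job scheduler; the standard multiprocessing module stands in here on a single machine.

```python
# A single-machine taste of parallel computing: one compute-intensive
# job split across worker processes. A real cluster would distribute
# these chunks across nodes with MPI or a scheduler.
from multiprocessing import Pool

def partial_sum(bounds):
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as the serial loop, computed in parallel
```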
Now, with the advent of grid technology, popularly called grid computing,
managing clusters is becoming increasingly easy. Just as the power grid
supplies electricity to users as and when they need it, grid computing creates
on-demand availability of systems, turning computing into a utility.
Web-enabling the whole process makes the IT infrastructure pervasive: users
can remotely draw computing resources from a grid that serves as a central
repository. At the heart of HPC are the clusters and grids that extract peak
performance from hardware and software.
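A hypothetical, much-simplified Python sketch of grid-style scheduling: jobs from a shared queue land on whichever node has spare capacity, wherever it sits geographically. The node names, capacities, and job sizes are made up for illustration.

```python
# Hypothetical grid-style scheduling: jobs from a shared queue land on
# whichever node has spare capacity, wherever it sits geographically.
# Node names, capacities, and job sizes are made up for illustration.
from collections import deque

nodes = {"delhi": 8, "chennai": 16, "mumbai": 4}   # free CPUs per site
jobs = deque([("gene_model", 6), ("crash_sim", 12), ("render", 3)])

while jobs:
    name, cpus = jobs.popleft()
    site = next((s for s, free in nodes.items() if free >= cpus), None)
    if site is None:
        print(f"{name}: queued until capacity frees up")
        continue
    nodes[site] -= cpus                            # consume grid capacity
    print(f"{name} ({cpus} CPUs) -> {site}")
```

The user submitting `gene_model` never needs to know which city ran it; that is the utility model in miniature.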
The impact of HPC is far reaching, and every vendor in this space has created
great value. Take the case of IT major HP, rated by market researchers as the
world's leading HPC player. HP gained its dominant position in the HPC space
through the Compaq acquisition; the high-end Alpha servers from Compaq have
added enormous value to HP. To further high performance computing, HP
participates in the US Department of Energy's Accelerated Strategic Computing
Initiative (ASCI).
To this end, HP has implemented the ASCI Q supercomputing system at the Los
Alamos National Laboratory. Los Alamos uses the system to conduct nuclear
tests by simulating and modeling explosions, without underground nuclear
testing. So every nuclear weapon made is tested for integrity. The Q system is
the second most powerful computer system in the world. It is also used for
other complicated tasks like weather modeling, tsunami watching, earth
simulations, and a host of others.
With the rapid integration of scientific and business computing,
collaborative computing is becoming a competitive necessity. This includes the
sharing of applications and the optimization of computing resources. Improved
integration of tools and resources can also have a dramatic effect on the
sales and profits of enterprises. Given this scenario, high performance
computing is set to play a critical role in creating the foundation for
optimal use of resources.
Shrikanth G in Chennai
HPC Demystified
High Performance Technical Computing (HPTC), traditionally referred to as HPC,
is the application of technology to highly complex scientific, engineering, and
business analytics workloads. Examples of these applications include vehicle
crash simulations used to improve occupant safety; weather modeling used to
provide five-day forecasts; and DNA sequencing, or mapping of the human genome,
used to discover new drugs to combat diseases. HPC is classified as an area of
high-compute, high-bandwidth applications, and it addresses the world's biggest
and toughest problems.
The difference between traditional computing, also called commercial
computing, and HPC is that commercial computing processes a lot of small
transactions or processes, whereas HPC is all about compute-intensive
processes. Commercial workloads need a computer capable of high throughput,
measured in jobs per second, whereas HPC workloads need compute resources
capable of fast turnaround, measured in how many seconds a single job takes
to complete.
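The distinction is easy to put in numbers. The following back-of-envelope Python sketch uses assumed workload sizes; the transaction counts and operation counts are illustrative, not benchmarks.

```python
# The throughput-vs-turnaround distinction in numbers (all figures are
# assumptions): commercial systems are judged by jobs finished per
# second, HPC systems by how quickly one big job finishes.
commercial_jobs, window_s = 50_000, 60      # many small transactions
print(f"Commercial: {commercial_jobs / window_s:,.0f} jobs/second")

big_job_ops = 4e15                          # one large simulation job
machine_flops = 2e13                        # a 20-teraflop machine
print(f"HPC: one job completes in {big_job_ops / machine_flops:.0f} seconds")
```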
HPC Comes of Age
In a typical HPC scenario, the concept of grids is increasingly touted as the
nerve center that churns peak computing power out of IT resources, be they
hardware or software. Grid computing networks geographically dispersed
computers and creates a virtual supercomputer that behaves as a single logical
entity. So a scientist sitting in New Delhi working on a gene-modeling
simulation may actually be using applications and computing power from grid
clusters spread across the globe. Another example is the TeraGrid project in
the US, which has linked the computing facilities of five leading academic
institutions through a 40-Gbit/sec optical pipe. The TeraGrid today commands
around 20 teraflops of computing power; one teraflop equals one trillion
floating-point operations per second. With grids, a whole new dimension to HPC
has emerged: a shared HPC backbone for workflows that demand extreme computing
power.
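As a quick sanity check on those figures, a back-of-envelope Python calculation; the workload size and desktop speed are assumptions, chosen only to show the scale.

```python
# Back-of-envelope scale check: one teraflop = 1e12 floating-point
# operations per second. Workload size and desktop speed are assumptions.
teragrid_flops = 20e12          # ~20 teraflops across the five sites
workload_ops = 1e18             # hypothetical simulation: 10^18 operations

print(f"TeraGrid: {workload_ops / teragrid_flops / 3600:.1f} hours")      # ~13.9
desktop_flops = 5e9             # rough guess for a circa-2003 desktop
print(f"One desktop: {workload_ops / desktop_flops / 86400:,.0f} days")   # ~2,315
```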
A Case for HPC
Procter & Gamble
Imagine a consumer company like Procter & Gamble using HPC. P&G uses
computer-aided engineering (CAE) together with HPC to ensure that its product
and production-system innovations fit, work, and make financial sense. The HPC
architecture at P&G allows it to create and test prototypes of products, and
of the machines that manufacture them, in a virtual state, so P&G can study
the effects of product-line changes without building the actual equipment.
This has resulted in millions of dollars in R&D cost savings and months to
years saved in development time. It has also enabled P&G to pursue a wider
range of creative solutions to meet customer needs without investing in costly
infrastructure. P&G's HPC architecture is built on a Silicon Graphics SGI
Origin 3000 with 128 CPUs and 128 GB of memory.
Tata Motors
The Company: Established in 1945 as the Tata Engineering and Locomotive
Company to manufacture locomotives and other engineering products, Tata Motors
is today among the world's top 10 producers of commercial vehicles. The
flagship company of the Tata Group, it posted an annual turnover of
approximately $2.28 billion for 2002-2003. Its product range covers passenger
cars, multi-utility vehicles, and light, medium, and heavy commercial vehicles
for goods and passenger transport. Seven out of 10 medium and heavy commercial
vehicles in India bear the Tata mark. The company enjoys significant demand in
export markets such as Europe, Australia, South East Asia, the Middle East,
and Africa, with vehicles currently selling in over 70 countries.
The Issue: The company wanted to improve its passenger car and commercial
vehicle design capabilities while reducing production costs. Tata Motors was
looking for a technology environment that could efficiently manage the growing
complexity of data in the vehicle development process.
The Solution: Tata Motors realized that the key value of every car produced
lies in its design, so it decided on a high performance computing architecture
that would refine its design capabilities. It opted for Silicon Graphics
visualization solutions. Using SGI's Visual Area Networking and Storage Area
Networking (SAN) technologies, it integrated Altix 3000 compute servers with
Onyx 3900 visualization systems and made the solution available over the
network to designers throughout the organization.
The Result: Tata Motors appears to have leveraged the high performance
computing infrastructure to the hilt. It has made significant enhancements in
vehicle manufacturing, with the technology enabling improvements in styling,
passenger safety, structural analysis, and noise and vibration testing. The
company's flagship car, the Indica, was designed entirely using the SGI
solutions. As a result of Tata Motors' emphasis on cutting-edge technology,
Indica sales reached the 2.5-million mark within 52 months of the car's
launch. Moreover, the 3D collaborative environment at Tata Motors is the first
of its kind in India, allowing the company to simulate vehicle models using
advanced simulation and visualization technologies. By making decisions based
on digital models rather than mock-ups, Tata Motors saves development time and
money while significantly reducing time to market.