If processing power doubles every 18 months as per
Moore's law, what about the other aspects of computing? For instance, is there
a law for storage–the most critical part of enterprise computing? If there is, we
haven't heard of it. And that's why storage is the most talked about and least
understood frontier among enterprises. Most learn the hard way, with only a crisis
bringing to the fore the absence of a robust and comprehensive storage
infrastructure. After 9/11, however, many companies have realized the
criticality of data and are putting some form of storage architecture in place.
With more emphasis on infrastructure augmentation, 2002 can be called an
"introspective" year for many organizations. As a
result, IT spend in segments like storage remained stable throughout the year.
According to industry estimates, around 35% of an enterprise's IT budget will
go toward storage-related expenses in 2003.
Data has become mission critical–in some cases, a lifesaver. To protect
data, CIOs today have to evolve a comprehensive storage management policy, and
that policy has to be in sync with the organization's
business goals. A Frost & Sullivan study suggests that a good storage
management policy begins with the definition of storage limits, or quotas, for
users, so that everyone across the organization gets adequate storage.
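As a rough illustration of how such quotas might be tracked, here is a minimal Python sketch; the user names, quota figure, and usage numbers are hypothetical.

```python
# Minimal sketch of per-user storage quota monitoring.
# All names and numbers below are hypothetical illustrations.

QUOTA_GB = 5  # storage limit allotted to each user

usage_gb = {
    "anand": 3.2,
    "meera": 6.8,   # over quota
    "ravi": 4.9,
}

for user, used in sorted(usage_gb.items()):
    status = "OVER QUOTA" if used > QUOTA_GB else "ok"
    print(f"{user:10s} {used:5.1f} GB / {QUOTA_GB} GB  [{status}]")
```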
This brings up a critical question–how does one effectively manage
storage? Before searching for answers, let's look at the current state of storage
in most Indian enterprises. Two scenarios emerge here–one is a total lack of
storage awareness; the other, the inability to utilize whatever storage
facilities are available. To devise a storage architecture, an enterprise
needs to assess these two factors, which will clearly bring out the state of its
storage and set the stage for the next level of maturity. Many
organizations fail to utilize their existing storage for a combination of
reasons, and as a result most of them end up using only 60% of their storage space.
This is mainly because storage is usually attached directly to servers and,
hence, the spare capacity scattered across multiple servers goes to waste.
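The arithmetic behind that 60% figure is easy to see. The sketch below (server counts, capacities, and usage are assumed for illustration) shows how spare capacity locked inside individual servers adds up:

```python
# Back-of-the-envelope view of stranded capacity in a DAS setup.
# Per-server capacities and usage are assumed figures.

servers = [
    # (total GB, used GB) per server
    (100, 70),
    (100, 55),
    (100, 60),
    (100, 55),
]

total = sum(t for t, _ in servers)
used = sum(u for _, u in servers)
spare = total - used

print(f"Overall utilization: {used / total:.0%}")   # -> 60%
print(f"Stranded capacity:  {spare} GB spread across {len(servers)} servers")
```

No single server is full, yet none of the spare space can be offered to the users of another server.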
Managing storage: Beyond DAS
Once the initial data mapping is done, the crucial part is identifying
the right kind of storage. Storage buying should be driven by the organization's
business goals rather than by popular perception. Enterprises have multiple
options before them, and the three major topologies–DAS, NAS, and SAN–remain
the most preferred forms of storage. However, Direct Attached Storage (DAS), the
most common form, is slowly losing its relevance, while Network Attached Storage
(NAS) and Storage Area Networks (SAN) are gaining ground. DAS might hold up well
for SMEs, but once storage needs exceed the million-megabyte (terabyte) mark, it
is time to migrate to a NAS architecture, which solves many of the problems
associated with DAS.
The advantages of NAS are lower TCO and ease of use. For instance, it can be
configured into an existing client setup in minutes, thanks to its plug-and-play
nature. Any organization aiming for effective server management and file sharing
should go for NAS.
However, since NAS runs on the corporate LAN, network congestion can occur and
may lead to downtime, though effective network management and bandwidth
allocation will often fix the problem.
NAS-SAN convergence
Traditionally, in a SAN setup, data gets transferred over a storage loop to
the various devices connected to the SAN. The storage media for a SAN have been
SCSI disks and tape drives; of late, Fibre Channel interface drives are also
being used.
SAN is the most preferred setup, liberating the organization from storage
hassles by providing a highly scalable architecture. Since a SAN runs on a
dedicated network, it frees up the LAN, replacing SCSI-based interconnected
storage devices with Fibre Channel, which brings data transfer speeds to the
tune of 100 MB per second. A typical SAN can support up to 128 devices, and
with switching technologies this can be scaled to thousands of storage devices.
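To put the 100 MB per second figure in perspective, a quick calculation (the dataset size is assumed, and protocol overheads are ignored) contrasts moving data over Fibre Channel with pushing it across a typical 100 Mbps LAN:

```python
# Rough comparison: moving data over Fibre Channel vs. a shared LAN.
# Dataset size is an assumed figure; link speeds are taken at face value.

dataset_gb = 500
fc_mb_per_s = 100            # Fibre Channel, ~100 MB/s
lan_mb_per_s = 100 / 8       # 100 Mbps Ethernet ~ 12.5 MB/s

def hours(gb, mb_per_s):
    return gb * 1024 / mb_per_s / 3600

print(f"Over Fibre Channel:  {hours(dataset_gb, fc_mb_per_s):.1f} h")
print(f"Over a 100 Mbps LAN: {hours(dataset_gb, lan_mb_per_s):.1f} h")
```

The same 500 GB that takes about an hour and a half over Fibre Channel would tie up the corporate LAN for the better part of a working day.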
However, such raw power does come with some issues. Managing a SAN is not
easy, particularly when an organization has multiple server platforms. But
interoperability issues should no longer hamper SAN adoption: the Storage
Networking Industry Association, in association with 11 major storage vendors,
has created specifications to be followed in a SAN implementation. These
specifications will enable administrators to use a common set of storage
management tools to manage software and hardware from different vendors.
A recent development in the storage industry is the convergence of NAS and
SAN. Today, NAS is seen as a prelude to SAN implementation, and experts
suggest that the trend will move towards making NAS the front end and SAN the
back end of the storage infrastructure. The key driver propelling
this convergence is NAS's advantage in the file-sharing environment–something
a SAN alone cannot offer. Hence, adding NAS as one of the
functionalities of the SAN makes better sense. The question is–how can
competing architectures converge? They can, because there is a clear
difference between shared storage and shared files. While a SAN shares storage
resources through a dedicated network, NAS shares files through an IP network. By
uniting the two concepts, files can be accessed through NAS and delivered by
the SAN. When these two converge, we have a fourth topology–consolidation, or
shared storage.
Shared storage
In simple terms, shared storage is nothing but the sharing of content between
multiple client entities by putting a combination of technologies to
work–NAS, SAN, tape, virtualization, and so on. The idea behind shared storage
is to consolidate the gamut of data that runs through the enterprise. In a
robust data-sharing environment, users can share data
irrespective of the platform on which it runs; for example, data from a Windows
application can be shared with a user running UNIX. This seamless integration is
the major advantage of shared storage. Consider another scenario–when an
enterprise with a typical DAS architecture has a large number of RAID (Redundant
Array of Independent Disks) systems, consolidation can narrow these down to two
or three units. The return on this kind of RAID revamp is a drastic increase in
per-user storage allocation–between 10 and 15 times what it was before
consolidation.
Virtualization
Consolidation of storage into a centralized repository happens
through virtualization. In a shared environment, storage solutions from multiple
vendors and platforms are involved, and virtualization groups all these storage
areas into one single entity. Virtualization sets the stage for effective data
management, as it separates space-hungry, data-intensive apps from low-bandwidth
applications. This separation makes it possible to run mission-critical
applications with relative ease.
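Conceptually, the virtualization layer presents many heterogeneous devices as one logical pool. The sketch below illustrates the idea only; the device names and capacities are invented, and this is not any vendor's actual implementation.

```python
# Conceptual sketch of storage virtualization: many physical devices,
# one logical pool. Device names and capacities are invented.

class StoragePool:
    """Groups heterogeneous storage devices into a single logical entity."""

    def __init__(self):
        self.devices = {}        # device name -> free capacity in GB

    def add_device(self, name, capacity_gb):
        self.devices[name] = capacity_gb

    def allocate(self, volume_gb):
        """Carve a logical volume out of whichever devices have space."""
        pieces = []
        needed = volume_gb
        for name, free in self.devices.items():
            if needed <= 0:
                break
            take = min(free, needed)
            if take > 0:
                self.devices[name] -= take
                pieces.append((name, take))
                needed -= take
        if needed > 0:
            raise RuntimeError("pool exhausted")
        return pieces  # the caller sees one volume; the pool spans devices

pool = StoragePool()
pool.add_device("vendor_a_array", 200)
pool.add_device("vendor_b_nas", 150)
print(pool.allocate(300))  # spans both devices transparently
```

The application asks for one 300 GB volume and never learns that it straddles arrays from two different vendors.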
According to the Aberdeen Group, virtualization reduces server downtime by
about 30%, since storage can be reconfigured dynamically.
Storage resource management
Virtualization cannot happen on its own; the key driver here is storage
resource management (SRM) software. SRM enables virtualization by taking
inventory of the physical assets (the number of storage devices) and the digital
assets (the volume of data that resides on those devices), and prescribes
parameters and allocations that help storage administrators manage
storage resources effectively. Also, with the advent of storage controllers,
shared storage and virtualization have become easier.
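A minimal sketch of what SRM-style inventory reporting might look like follows; the device records and the 85% alert threshold are assumptions for illustration.

```python
# Minimal sketch of SRM-style inventory: physical assets (devices)
# and digital assets (data volumes), with a simple capacity alert.
# Records and the 85% threshold are illustrative assumptions.

ALERT_THRESHOLD = 0.85

inventory = [
    # (device, capacity GB, data GB)
    ("array-01", 500, 460),
    ("nas-01",   250, 120),
    ("tape-lib", 800, 300),
]

print(f"Physical assets: {len(inventory)} devices")
print(f"Digital assets:  {sum(d for _, _, d in inventory)} GB of data")
for device, cap, data in inventory:
    fill = data / cap
    flag = "  <-- nearing capacity" if fill > ALERT_THRESHOLD else ""
    print(f"{device:10s} {fill:5.0%} full{flag}")
```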
As IT usage and automation increase, enterprises are prioritizing storage.
This is because the organization's entire digital assets reside in its storage
infrastructure. If data has to be accessed from anywhere, at any time, the
storage architecture a CIO puts in place makes all the difference. As
Aberdeen research aptly sums up, "A storage infrastructure has to
fundamentally do three things–store data, move data where needed, and make the
previous two manageable." Probably the biggest challenge a CIO has to deal with
is managing these three effectively and arriving at a robust storage management
infrastructure.
3 Steps to Effective Storage Management
Step 1: Taking stock: Determine the storage needs by mapping the wide-ranging processes and assessing the:
- Volume and criticality of data to be backed up, and the backup window available (a rough feasibility check is sketched after this box)
- Volume growth of data, now and down the line
- Distribution and consumption patterns of data across the enterprise
- Information on applications and the data generated by these applications
- Availability of network bandwidth (LAN and WAN)
- Availability of skilled staff and funds
Step 2: Decision-making: Narrow down on one storage option or a combination. Options available are:
- Deployment of a Storage Area Network (SAN), Network Attached Storage (NAS), or consolidation
- Putting in place a tape device or an automated tape library (in case of high data volumes)
- Selection of the storage software required; this depends on the kind of applications, databases, and operating systems the enterprise is running or proposes to implement
Step 3: Empirical analysis: This is the crucial part. The CIO has to ascertain that the selected storage solution addresses the following aspects:
- Backup and restore speed
- Ease of integration and scalability
- Ease of installation, configuration, and operation
Source: Industry and market intelligence reports
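The first bullet of Step 1 pairs data volume with the available backup window, and the second adds growth. A quick feasibility check combining the two might look like the following; the volume, growth rate, window, and throughput figures are all assumed.

```python
# Rough check of whether the nightly backup fits its window,
# now and as data grows. All figures are assumed illustrations.

data_gb = 400           # data to back up today
annual_growth = 0.5     # 50% growth per year
window_h = 6            # backup window in hours
throughput_mb_s = 30    # sustained backup throughput

def backup_hours(gb):
    return gb * 1024 / throughput_mb_s / 3600

for year in range(4):
    gb = data_gb * (1 + annual_growth) ** year
    need = backup_hours(gb)
    verdict = "fits" if need <= window_h else "EXCEEDS window"
    print(f"Year {year}: {gb:6.0f} GB -> {need:4.1f} h ({verdict})")
```

A setup that comfortably fits its window today can outgrow it within two years, which is why Step 1 insists on assessing growth down the line, not just current volumes.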