FOCUS: INFRASTRUCTURE MANAGEMENT: End-to-end Control

DQI Bureau

A holistic outlook is essential for effective enterprise
management. Any organization can reap the benefits of integrating the
management of all IT resources by overlaying a common object-oriented
infrastructure across multiple lines of business. The goal is to be able to
manage all of your IT resources from anywhere in your operation at any time. It
should be easy for administrators or, ideally, non-technical management
personnel to work as if they were on a single large computer, or to focus only
on those resources that are relevant to their specific jobs. This envisions a
truly heterogeneous system that allows the management of anything from
anywhere.


Standard infrastructure management products are based
on frameworks. The capabilities of such products encompass automated software
distribution, distributed application management and performance tuning,
database administration, network management, output management, storage
management, automated back-up and recovery, help-desk services, and security,
amongst others. The idea behind the frameworks is to gather all these functions
under a single umbrella. Individual modules manage and automate each of these
functions, and all the modules plug into the framework, which provides a
centralized management console that becomes the infrastructure management
cockpit.
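
To make the plug-in idea concrete, here is a minimal sketch in Python. The class and method names are purely illustrative, not any vendor's API; a real framework would add discovery, eventing, and security on top of this basic shape.

```python
# Illustrative sketch of a management framework: independent modules
# plug into a single console that aggregates their status. All class
# and method names here are hypothetical, not any vendor's API.

class ManagementModule:
    """Base class that every plug-in module implements."""
    name = "generic"

    def status(self) -> dict:
        raise NotImplementedError


class BackupModule(ManagementModule):
    name = "backup"

    def status(self) -> dict:
        return {"last_run": "02:00", "result": "ok"}


class NetworkModule(ManagementModule):
    name = "network"

    def status(self) -> dict:
        return {"nodes_up": 142, "nodes_down": 3}


class ManagementConsole:
    """The 'cockpit': one place that sees every plugged-in module."""

    def __init__(self):
        self._modules = []

    def plug_in(self, module: ManagementModule) -> None:
        self._modules.append(module)

    def overview(self) -> dict:
        return {m.name: m.status() for m in self._modules}


console = ManagementConsole()
console.plug_in(BackupModule())
console.plug_in(NetworkModule())
print(console.overview())
# {'backup': {'last_run': '02:00', 'result': 'ok'},
#  'network': {'nodes_up': 142, 'nodes_down': 3}}
```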

While enterprise systems management is the most complete form of managing IT
infrastructure, one can also look at the concept in discrete parts —
application management, performance management, asset management, back-up
management, mail management, server management, desktop management, storage
management, security management, network management, and the like. Of these, we
would look at storage and network management, two large chunks of
infrastructure, in a bit more detail.

Storage management

By way of background, there are three types of storage: direct attached
storage (DAS), network-attached storage (NAS), and storage area networks (SAN).
Estimates are that storage can account for anywhere from 22% to over 50% of the
total IT budget. Hence, senior management has been paying more attention to the
area of storage within the IT plan. For one, storage requirements are rapidly
scaling up. Next, with mission-critical applications like ERP, SCM, CRM, and BI
in place, the dependence on data and information available online is higher than
ever before. Therefore, companies have started to take a planned approach to
storage, with a 3-year or a 5-year plan. They are seeking answers to questions
like: how much will storage grow by next year, what technologies will help
manage this growth, how can I have a two-fold increase in storage without adding
headcount, what technologies will give me headroom for growth, how much
should I budget for storage in my disaster-recovery site, and so on. Networked
storage, more often the SAN, is emerging as the answer. There are three
technological advantages: tools can dynamically allocate more capacity on the
network to specific applications, tools can map all devices on a SAN and
monitor errors, and tools can make various devices and their storage capacities
seem like one logical unit. The last is a powerful concept called storage
virtualization.
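
A minimal sketch of the virtualization idea, in Python: several physical devices are pooled and presented as one logical unit whose capacity is handed out to applications on demand. The device names and sizes are, of course, made up.

```python
# Hypothetical sketch of storage virtualization: several physical
# devices are pooled and presented as one logical unit, and capacity
# is allocated to applications on demand.

class Device:
    def __init__(self, name: str, capacity_gb: int):
        self.name = name
        self.free_gb = capacity_gb


class VirtualVolume:
    """Presents a pool of devices as one logical capacity."""

    def __init__(self, devices):
        self.devices = devices

    @property
    def free_gb(self) -> int:
        return sum(d.free_gb for d in self.devices)

    def allocate(self, size_gb: int) -> list:
        """Spread an allocation across whichever devices have room."""
        if size_gb > self.free_gb:
            raise RuntimeError("pool exhausted")
        placement, remaining = [], size_gb
        for d in self.devices:
            take = min(d.free_gb, remaining)
            if take:
                d.free_gb -= take
                placement.append((d.name, take))
                remaining -= take
            if remaining == 0:
                break
        return placement   # the application sees only one volume


pool = VirtualVolume([Device("disk-a", 100), Device("disk-b", 200)])
print(pool.allocate(150))   # [('disk-a', 100), ('disk-b', 50)]
print(pool.free_gb)         # 150
```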


The hardware part of the storage resource aside, from a management
standpoint there is a wide range of capabilities possible through software. At
one end of the software function range, file systems provide basic tools to
sustain some degree of data and application availability. For example,
file-system utilities (e.g., Chkdsk) let you scan all file metadata to look for
inconsistencies in the file system and take action where necessary. Because such
utilities maintain some degree of data coherence, they constitute a form of
storage management.
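
The sketch below illustrates the spirit of such a scan, assuming a hypothetical data directory: walk the tree, read every entry's metadata, and report anything that cannot be read, such as a dangling symlink. Real utilities like Chkdsk or fsck inspect on-disk structures directly; this only conveys the general idea.

```python
# A read-only analogue of a consistency scan: walk the tree, stat every
# entry, and report metadata that cannot be read (for example a dangling
# symlink). The root path below is a placeholder.

import os

def scan(root: str) -> list:
    problems = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                os.stat(path)   # follows symlinks; raises if unreadable
            except OSError as err:
                problems.append((path, err.strerror))
    return problems

for path, reason in scan("/var/data"):   # placeholder path
    print(f"inconsistent entry: {path} ({reason})")
```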

At the other end of the range, failover management software, or
server-clustering software, monitors applications, restarts application
components, and redirects I/O to appropriate alternate locations if one server
fails to perform its functions. Clustering software synchronizes data and
applications between clustered servers by mirroring data, metadata, and
application-related registry entries. Good failover-management software should
notify you when a failure occurs and should provide information about any
changes in the state of the environment. These tools should be able to resume
business processes without manual intervention. Tools that let you view and
configure the environment, either locally or remotely, add to the ease with
which you can resume processes and applications.
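
A hedged sketch of that failover loop: probe the primary server, and after repeated failures redirect to a standby and notify an operator. The host names and the notification hook are placeholders, and real clustering software would also resynchronize data and application state.

```python
# Sketch of a failover monitor: probe the primary, and after repeated
# failures redirect traffic to a standby and notify an operator.
# Hosts and the notify hook are placeholders.

import socket
import time

PRIMARY = ("app1.example.com", 8080)
STANDBY = ("app2.example.com", 8080)

def is_up(host_port, timeout=2.0) -> bool:
    try:
        with socket.create_connection(host_port, timeout=timeout):
            return True
    except OSError:
        return False

def notify(message: str) -> None:
    print(f"ALERT: {message}")   # stand-in for pager/e-mail integration

active, failures = PRIMARY, 0
while True:
    if is_up(active):
        failures = 0
    else:
        failures += 1
        if failures >= 3 and active == PRIMARY:   # avoid flapping on a blip
            active = STANDBY                      # redirect I/O to the alternate
            notify(f"primary {PRIMARY} down, failed over to {STANDBY}")
    time.sleep(10)
```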

In addition, storage management tools such as hierarchical storage management
(HSM) software and data-replication software contribute to high application and
data availability. HSM, based on application-specific policies, lets you move
critical data to protected secondary storage while also ensuring transparent
access to the data. Clustering HSM servers increases the speed with which users
and applications can get to their data in the event of most types of server
failures.
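
The policy-driven movement that HSM performs can be sketched roughly as follows. The paths and the 90-day policy are assumptions, and a real HSM product would leave a stub behind so that access to the file stays transparent, which this sketch does not attempt.

```python
# Rough sketch of an HSM-style policy: files untouched for 90 days are
# demoted to cheaper secondary storage. Paths and the age policy are
# assumptions; real HSM keeps access transparent via stub files.

import os
import shutil
import time

PRIMARY = "/data/primary"        # fast, expensive storage (placeholder)
SECONDARY = "/data/secondary"    # cheaper tier (placeholder)
AGE_LIMIT = 90 * 24 * 3600       # policy: 90 days since last access

os.makedirs(SECONDARY, exist_ok=True)
catalog = {}    # logical path -> current physical location, for recalls
now = time.time()
for name in os.listdir(PRIMARY):
    src = os.path.join(PRIMARY, name)
    if os.path.isfile(src) and now - os.stat(src).st_atime > AGE_LIMIT:
        dst = os.path.join(SECONDARY, name)
        shutil.move(src, dst)    # demote cold data
        catalog[src] = dst       # remember where it went
```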


Software-based data replication technologies add a level of intelligence to
basic hardware replication. You can manage copies of data independently and
according to the needs of the application. Good replication technologies should
replicate not only files but also directories, volumes, shares, and select
registry keys, from one server to many. Deploying replication software should
ensure rapid recovery of the most critical servers.
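
In skeletal form, one-to-many replication might look like this. The source and replica paths are placeholders, and a production tool would replicate incrementally and over the network rather than copying whole trees on every run.

```python
# Skeletal one-to-many replication: push a source tree to several
# replica locations. Paths are placeholders; real tools replicate
# incrementally and across the network (e.g. rsync).

import shutil

SOURCE = "/data/critical"
REPLICAS = ["/mnt/replica1/critical", "/mnt/replica2/critical"]

for target in REPLICAS:
    # dirs_exist_ok (Python 3.8+) lets repeated runs refresh a replica
    shutil.copytree(SOURCE, target, dirs_exist_ok=True)
```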

Network management

Network management should function as a cohesive whole, just as the network
itself functions as a single entity, even though it may be made up of many
parts. A single tool should manage all of those parts. No matter how many nodes,
sites or standalones, a single network management tool can provide seamless and
invisible oversight, interaction and response. In order to truly manage your
network, you need a tool that can analyze historical performance data and create
a unique "system personality profile." This will allow for advanced
warning of critical situations affecting your most crucial systems. Obviously,
predicting system failures before they occur helps to maintain and even increase
revenue-generating activities while minimizing the costs associated with system
downtime.
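
One way to picture such a "system personality profile": learn a per-metric baseline from historical samples and flag readings that stray far from it. The sample values below are invented.

```python
# Learn a baseline from history, then flag readings that deviate
# sharply, giving advance warning before a hard failure.

from statistics import mean, stdev

history = [41.0, 44.2, 39.8, 43.1, 40.5, 42.7]   # e.g. hourly CPU% samples

baseline, spread = mean(history), stdev(history)

def abnormal(sample: float, k: float = 3.0) -> bool:
    """True if the sample falls outside the learned normal band."""
    return abs(sample - baseline) > k * spread

print(abnormal(43.0))   # False: within the system's usual personality
print(abnormal(97.0))   # True: early warning, before it becomes downtime
```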

The core set of management functions needed for network and systems
management includes security, scheduling and workload, storage, performance,
output, resource accounting and charge-back, problem management and complete
event control. Your network management tool should provide all of these, as well
as virus scan capabilities to ensure the safety of the environment. It must
track, filter, correlate and forward events, as well as take automated action in
response to those events. Any response should also be contained at the lowest
possible level.
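
That track-filter-correlate-act pipeline might be sketched like this; the event types, severities, and automated actions are illustrative only.

```python
# Sketch of the event pipeline: filter noise, correlate duplicates,
# take a canned automated action where one exists, and forward only
# what remains. Event names and actions are invented.

AUTO_ACTIONS = {
    "disk_full": lambda e: print(f"auto-action: purge temp files on {e['host']}"),
}
seen = set()

def handle(event: dict) -> None:
    if event["severity"] < 3:                       # filter out noise
        return
    key = (event["host"], event["type"])
    if key in seen:                                 # correlate duplicates
        return
    seen.add(key)
    action = AUTO_ACTIONS.get(event["type"])
    if action:
        action(event)                               # contain at the lowest level
    else:
        print(f"forwarding to operator: {event}")   # escalate only when needed

handle({"host": "db1", "type": "disk_full", "severity": 4})   # auto-handled
handle({"host": "db1", "type": "disk_full", "severity": 4})   # suppressed
handle({"host": "web2", "type": "link_down", "severity": 5})  # forwarded
```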


The network management tool should chug along in the background. And all of
this should go on while your customers and employees conduct business as usual.
Having identified the most critical functions and predicted the most probable
locations and types of failures, your network management system will have been
built to respond with as little human intervention as possible. It will notify
your systems personnel when it is in need of their assistance. It will only warn
higher levels of management on an as-needed basis. There is no need for the CIO
to be called in for every incident.

As your business grows and expands, so must your network. End users must have
access to increasing numbers of applications, and this increased data exchange
will put pressure on bandwidth. You will need both a common user interface and
advanced traffic-flow/bandwidth management. You want to work with someone
knowledgeable and experienced in designing long-term network management
strategies. The marketplace is becoming increasingly "networked", with
e-business becoming the rule rather than the exception. Nor is it only
organizations on the World Wide Web that are affected; internal systems
management is as critical as anything external. Neither the CIO nor the end user
cares whether the problem is with an interface card, a router, a frame relay
link or an ATM backbone. The system’s just "broke." They look at and
use the network as a cohesive whole, delivering applications and services to end
users who have specific business-related tasks to perform.

Is there a choice?

As a white paper on the BMC Software website says, infrastructure management
is the management of "platforms, databases, storage, networks, security and
middleware technologies." BMC would like you to consider, of course, its
Patrol product for such monitoring, but you don’t have to use that one. In
fact, before you decide that you need to go with a very expensive
infrastructure-management product, you should consider that you can start small,
with a management system that you roll on your own, using some easy programming
and the Simple Network Management Protocol (SNMP), which can alert you when
something is going wrong with your network infrastructure. Then you can, in
effect, instrument your applications, and especially your databases, using
tools built into your operating system, gathering the information for later
analysis.
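
As a starting point for such a roll-your-own monitor, the sketch below shells out to the net-snmp snmpget command (assumed to be installed) to poll a device's uptime and raise an alert when it stops answering. The host and community string are placeholders.

```python
# Roll-your-own SNMP monitoring: poll a device and alert when it stops
# answering. Relies on the net-snmp 'snmpget' CLI being installed; the
# host and community string are placeholders.

import subprocess

HOST = "router1.example.com"
COMMUNITY = "public"
SYS_UPTIME_OID = "1.3.6.1.2.1.1.3.0"   # standard MIB-II sysUpTime

def poll(host: str) -> bool:
    try:
        result = subprocess.run(
            ["snmpget", "-v", "2c", "-c", COMMUNITY, host, SYS_UPTIME_OID],
            capture_output=True, text=True, timeout=5,
        )
    except (subprocess.TimeoutExpired, FileNotFoundError):
        return False
    return result.returncode == 0

if not poll(HOST):
    print(f"ALERT: {HOST} is not responding to SNMP")   # hook a pager here
```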


Computer Associates, however, points out that investing in infrastructure
management will reap big benefits. In a white paper on the company Web site
authored by consultants IDC, the conclusion from a study of 30 companies that
employed CA’s Unicenter product for infrastructure management was that these
multi-billion-dollar companies averaged savings of over $8 million and that they
experienced a payback period of 55 days with a return on investment of 663%.

Easwardas Satyan

Best Practices Index: Infrastructure Management


These five pointers can take you to an optimum infrastructure management
status, fast...


- The fact that a good management system ensures reliable user access to
business applications has been proven. Organizations that relied on a
best-of-breed approach and the inherent management capabilities of individual
infrastructure components (systems, network, database, applications) have not
been able to ensure reliable user access.

- As the infrastructure grows in terms of users, applications and
partnerships, CIOs have to regularly review and study the sensitivity of the
infrastructure to the transaction load. This will ensure uptime and scalability
without the infrastructure buckling.

- New business models have introduced new partnerships and constituencies
within and beyond the enterprise. Eventually these will be not only
collaborative in nature but also transactional. This will call for a
fundamental rethink of the architecture: a flexible, component-based
architecture based on standards will come into being.

- The role of the IT organization in managing the infrastructure will change
to providing measurable business services and service level agreements to the
business organization, even if it is an in-house operation.

- The line between technology infrastructure and business processes will
dissolve. Infrastructure will be structured around business processes rather
than technology.

EASWARDAS SATYAN