
Look Before You Buy!


“It’s time to break the myth about processor speed and cache”

By Kamal Dutta

I will focus mostly on the selection of high-end servers, typically RISC-based systems, because more often than not they are the most complex. While the selection criteria for servers will keep changing with application considerations and the business needs of the customer, let's focus on a couple of typical evaluation parameters and their impact on the organization's IT strategy.

Performance



Server performance depends on multiple factors: processor performance, system bandwidth, memory latency, the amount of cache at each level, I/O bandwidth, and I/O throughput. Processor performance is the easiest of the lot to compare. Many processor benchmarks are available, such as those from the Standard Performance Evaluation Corporation (SPEC), which publish results for integer, floating-point, and Web performance, all of which are a good measure for comparing different RISC-based processors. Technical users can go for fancier benchmarks like Linpack.
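
As a minimal sketch of this idea, the snippet below compares two hypothetical servers by published benchmark-style scores rather than by clock speed; all server names and numbers are illustrative, not real SPEC results.

```python
# Hypothetical, illustrative scores -- not real SPEC results.
published_scores = {
    # server: (integer score, floating-point score)
    "Server A (750 MHz RISC)": (450, 520),
    "Server B (1.0 GHz RISC)": (430, 480),
}

for server, (int_score, fp_score) in published_scores.items():
    print(f"{server}: integer={int_score}, floating-point={fp_score}")

# Server B clocks higher yet benchmarks lower: frequency alone does
# not predict delivered performance across different architectures.
```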


Let’s break the myths of processor speed and cache!

Processor frequency is useful only when comparing systems of the same architecture; it is futile when comparing RISC processors from different vendors with different architectures. The same holds true for cache. Other performance parameters are equally important and, fortunately, they all have numeric values that can be compared directly: a system bandwidth of 4 GB/sec, for instance, is an absolute number, and the higher the better. However, if two systems have the same system bandwidth but differ in the number of processors, the better system is the one that delivers the same performance with fewer processors, as more processors consume more bandwidth for internal data synchronization.
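
A back-of-the-envelope sketch of that comparison, with hypothetical figures:

```python
# Two hypothetical systems with equal aggregate bandwidth but
# different processor counts (illustrative figures only).
systems = {
    "System X": {"bandwidth_gb_s": 4.0, "processors": 8},
    "System Y": {"bandwidth_gb_s": 4.0, "processors": 4},
}

for name, s in systems.items():
    per_cpu = s["bandwidth_gb_s"] / s["processors"]
    print(f"{name}: {per_cpu:.2f} GB/s of bandwidth per processor")

# If System Y matches System X's overall performance, it is the
# better design: each processor has twice the bandwidth headroom,
# and fewer processors mean less synchronization traffic.
```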

Scalability and RoI



The reason I mention these two together is simple, but often poorly understood. One looks at scalability to ensure that as user data, the number of users, and applications grow, the server can keep pace and grow too. In short, you would not want to replace the server outright the moment user needs grow. Hence, scalability is important because you want to protect the investment you are making today in the server.

Measuring scalability appears simple: you just need to know how many more processors you can add. However, it can get more complex than that. Say you were to upgrade to a higher performance level after two years; you would have two choices. One is to buy more of the older processors and scale the system up. The problem is that these older processors are likely to be more expensive than the current generation because of economies of scale, so you will pay more and probably get one-third of the performance of current-generation processors. That is not investment protection. The second option is to replace the processors, and maybe even the memory, but then you might as well buy a new system. The answer lies in whether your server can be scaled up to the next generation of processors without discarding the old ones.
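
To make the trade-off concrete, here is a sketch with purely made-up prices and relative performance figures, comparing the cost per unit of performance under the two choices:

```python
# Purely illustrative prices and relative performance per processor.
options = [
    # (label, price per processor, relative performance per processor)
    ("Scale up with older-generation processors", 12_000, 1.0),
    ("Buy current-generation processors", 10_000, 3.0),
]

for label, price, perf in options:
    print(f"{label}: {price / perf:,.0f} per unit of performance")

# Paying more per processor for a third of the performance multiplies
# the cost per unit of performance -- the opposite of investment
# protection.
```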

Today, this capability is available through hardware partitioning. Binary compatibility, which ensures that the operating system need not be changed, is therefore an important consideration for scalability. It is also important to understand the processor roadmap of the servers well enough to decide on these issues.

The author is country business manager, Unix Servers & Solutions, HP India.
