
Bandwidth Optimization

DQI Bureau

It is 1998, and suddenly, networks are the 'in-thing' everywhere. Previously unconnected, standalone machines find themselves linked to every other machine in the company, happily communicating. Many such networks are connected to even larger ones, typically the Internet or the parent company's WAN.


Not only has the attitude toward networks changed, but the applications have, too. The traditional file and print services now find themselves crowded by email, Internet/intranet, and specialized applications. And suddenly, LAN administrators face a completely new problem: slowdowns on the LAN caused by non-traditional network traffic.

Not surprisingly, this new phenomenon has vendors drooling. In no time at all, they are at your doorstep trying to sell you the concepts of 100 Mbps and even Gigabit networks. Along with that comes the usual repertoire of equipment you need to change: hubs, Ethernet switches, routers, and so on. But is all this really necessary?


Surprisingly, it isn't. Even 10 Mbps networks are capable of delivering acceptable performance. What is really required is an understanding of how the bandwidth (i.e. the network 'pipe') is currently being misused.

Divide And Rule



Almost every network today uses TCP/IP as a primary protocol. This is good, because TCP/IP has proven itself over the years as a stable and efficient transport mechanism. Even vendors such as Novell and Microsoft, who for years tried to make their proprietary protocols (IPX and NetBEUI) the corporate standard, have now thrown in the towel, and the latest incarnations of their network operating systems (NOS) use TCP/IP by default.

However, TCP/IP is not the panacea for all network problems. Wrongly implemented, it will quickly bring your network to its knees.


A very common problem is that of single-segment LANs. Every PC in the entire company is treated as part of a single network group (or segment), causing every piece of traffic from any one PC on the network to hit every network card in every PC on the LAN! This means that if someone is sending a 1 MB file from his PC to a server, the traffic is seen by every PC on the network. If enough people do this, the LAN quickly slows down to a crawl. The answer to this problem is segmentation, either physically (via intelligent Ethernet switches) or logically (using TCP/IP sub-netting). TCP/IP-based sub-netting is probably the most economical route, as no additional equipment is required.

In a TCP/IP sub-net, PCs are configured into logical groups (known as sub-nets), so that traffic from any PC within that group hits only PCs in its own sub-net (usually a small number), and then travels out via a gateway machine to its intended destination (if the destination is not within the sub-net). The net effect of this kind of arrangement is an immediately noticeable drop in traffic. Sub-netting is a time-honored and well-documented method of optimizing LAN traffic.
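The grouping logic can be sketched with Python's standard ipaddress module. The addresses and the /26 mask below are invented purely for illustration:

```python
import ipaddress

# Illustrative only: a /26 mask splits a class-C address range into
# four sub-nets of 62 usable hosts each.
subnet = ipaddress.ip_network("192.168.1.0/26")

a = ipaddress.ip_address("192.168.1.10")   # inside the sub-net
b = ipaddress.ip_address("192.168.1.200")  # outside the sub-net

print(a in subnet)  # True  -> traffic stays within the sub-net
print(b in subnet)  # False -> traffic must travel via the gateway
```

This is exactly the comparison each PC makes, using its own address and netmask, before deciding whether to deliver a packet locally or hand it to the gateway.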

Internet/Intranet Issues



Much of today's network congestion is caused by web traffic. Web traffic can be local (on the intranet) as well as from the Internet. Huge graphics, Java applets, and tons of information pouring in can quickly flood even a decent-sized network link. Added to this are DNS lookup requests, which are individually small but so numerous that it isn't uncommon to see several hundred DNS requests open simultaneously! Many of these issues can be addressed by building some intelligence into the network design.


For example, much of the web traffic tends to be repetitive: users pick up the same pages, yet each page is transferred again and again from the remote server. This problem can be quickly resolved by using a web proxy server that caches pages. Repeat requests for pages will then be served to the requesting user from the proxy cache instead of being pulled again from the remote server, resulting in vastly improved response times.
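The caching idea is simple enough to sketch in a few lines of Python. Here fetch_remote() is a hypothetical stand-in for the HTTP fetch a real proxy server would perform, not part of any library:

```python
# Minimal sketch of proxy-style caching: repeat requests for a URL
# are served from a local store instead of the remote server.
cache = {}
fetch_count = 0  # counts how often we actually cross the WAN link

def fetch_remote(url):
    """Hypothetical stand-in for a slow fetch over the WAN link."""
    global fetch_count
    fetch_count += 1
    return f"<contents of {url}>"

def get(url):
    if url not in cache:            # first request: go to the origin
        cache[url] = fetch_remote(url)
    return cache[url]               # repeat requests: served locally

get("http://example.com/index.html")  # travels over the WAN link
get("http://example.com/index.html")  # answered from the cache
```

A production proxy also honors page expiry and bounds the size of its cache, but the traffic saving comes from this one test: has the page been fetched before?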

Many networks are set up to use a remote DNS server, typically on the Internet (where everyone is set to query VSNL's DNS servers) or the parent network's master DNS (in the case of private WANs). This is highly suboptimal, and can quickly be remedied by setting up a local caching DNS server (such as those supplied with Windows NT, Unix, and so on).

In all such cases, it is important to ensure that client machines on the LAN are set up to use these services. The easiest way to ensure this is to use the DHCP (Dynamic Host Configuration Protocol) services provided by virtually every network OS these days. DHCP will set each client machine to use the correct servers (such as DNS, web proxy, and so on), and will also make network administration much simpler and more centralized.
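By way of illustration, a DHCP server configuration in the style of ISC dhcpd (all addresses here are invented) hands every client its address, default gateway, and local DNS server from one central file:

```
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;        # addresses leased to clients
    option routers 192.168.1.1;               # default gateway
    option domain-name-servers 192.168.1.2;   # local caching DNS server
}
```

Change the DNS server's address here once, and every client on the sub-net picks it up at its next lease renewal, with no desk-to-desk reconfiguration.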


Email Issues



Many LANs now boast email for every user. The advantages of this cannot be stressed enough. However, the facility rapidly finds itself misused, leading to excessive network traffic. One of the biggest culprits is the tendency of users to send large, uncompressed file attachments via email. There are usually better ways of doing this (such as copying the file into a common area on a server and then asking people to pick it up from there via a short email message), but the convenience of email makes people choose this route. Corporate policy should come down strongly on offenders.

The next biggest problem is carbon copies. It is not unusual to see messages being sent to dozens of recipients, even though the message itself is meant for only one or two people. Matters get worse when there is an attachment involved: users do not realize that a message with a 1 MB attachment sent to 10 people results in 10 MB of network traffic. The problem quickly escalates as people start replying to such messages. The answer, again, is corporate policy and user education.
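The arithmetic is worth spelling out; the figures below simply restate the example above:

```python
# A 1 MB attachment carbon-copied to 10 people is delivered once
# per recipient, so the message costs ten times its own size.
attachment_mb = 1
recipients = 10

traffic_mb = attachment_mb * recipients
print(traffic_mb)  # 10 -> 10 MB on the wire for a single message
```

Every reply-all to such a message repeats the multiplication, which is why the traffic escalates so quickly.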

There are plenty of ways of optimizing network performance. Listed here are just a few; a little research will quickly turn up many more. Unfortunately, the one most visible to people is an increase in raw network bandwidth. In almost all cases, this is completely unjustifiable.

Simple network design, optimization, and reconfiguration can make an ordinary 10 Mbps network perform much faster than a badly designed 100 Mbps network, and the cost benefits are, of course, huge. Similarly, a 'lowly' 64 Kbps link to the Internet or a WAN, connected to a well-designed LAN, can offer performance better than a badly set up LAN connected to a 128 Kbps link!
