Campus Network: Building a Strong Backbone

DQI Bureau

With the growing demand for audio and video applications online, there is a need to create high-bandwidth communication networks connecting user desktops, computing servers, and information servers. If establishing such networks and managing them with high uptime is a challenge, especially on big R&D institute campuses, protecting them against obsolescence is a much bigger one.

Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam, for example, has a campus backbone network based on asynchronous transfer mode (ATM) technology connecting about 50 local area networks (LANs) established in different laboratories of the research centre. These laboratories are spread all over the big campus, requiring about 10 km of fibre cable for connectivity. The LANs established in these laboratories are 10 Mbps and 100 Mbps Ethernet networks. All of them are connected to the campus backbone using fibre cable.

The campus backbone network is based on a 3-tier architecture. It was designed this way to keep the distance between switches within the limits set by standards and to reduce the fibre cable length.

The network is based on ATM with a backbone speed of 155 Mbps (OC-3). In the first level, an ATM core switch with about 20 ATM (155 Mbps) fibre ports was used as the Enterprise/Core switch. An MPOA (Multi Protocol over ATM) router was connected to the ATM Enterprise switch for LANE and inter-VLAN routing. In the second level, about 20 Workgroup switches with ATM uplinks and 100Base-FX/100Base-TX Fast Ethernet downlinks were installed in different locations. In the third level, about 50 Edge switches with 100Base-FX/100Base-TX Fast Ethernet uplinks and 100Base-TX Fast Ethernet downlinks were placed in different laboratories.

The whole campus network was established using composite fibre cable. The Enterprise/Core switch was connected to the Workgroup switches using mostly the multimode cores of the composite fibre cable.

In a few cases, where the distance was more than 2 km, single mode fibre was used for connecting the Enterprise switch to the Workgroup switches. The Workgroup switches were connected to the Edge switches using the multimode cores of the composite fibre cable.

The campus network provides connectivity from the desktop systems to high-performance computing servers, information management servers, the Internet, e-mail and Web servers, the digital library, etc. It is divided into about 20 VLANs for better management, and inter-VLAN routing is done at level 1 through the MPOA router.

Why Upgrade?



The upgradation of the campus backbone network was necessitated by multiple reasons: obsolescence of the existing ATM hardware, the cost of maintenance, and the requirement for higher network speed.

The campus network is extensively used by the scientists and engineers of the centre, who need large amounts of bandwidth for graphics-intensive scientific applications and for access to the high-performance computing servers, information management servers, digital library, etc. The network had to provide the speed needed for the various services used in their research projects and also scale easily to accommodate future needs.

Many new applications like audio and video streaming and high-end graphics are becoming more popular. It is also foreseen that desktop video conferencing and voice over IP will no longer be technology demonstrators but serious applications. All this necessitates upgrading the network speed.

Gigabit Ethernet



Once it was decided to upgrade the network, a thorough market survey was done. The favourite debate of network engineers, ATM vs Gigabit Ethernet, could not be avoided. Though ATM has its own advantages, like quality of service and seamless integration with wide area networks, Gigabit Ethernet is the obvious choice for independent campus data networks.

The popularity of Ethernet in terms of its technological knowledge base and its downward compatibility, or interoperability, with its earlier generations has made it the logical choice of network designers. Its high-volume adoption has resulted in an unprecedented reduction in the prices of all its components, making it a favourite of network users. The sheer volumes also ensure better maintenance support.

There is almost an order of magnitude difference in price between ATM OC-12 and Gigabit Ethernet, and many orders of magnitude difference between the number of users of the two technologies. Hence Gigabit Ethernet became the clear choice for implementation.

Designing a Network



It is a fact that designing a new network is far easier and more interesting than designing a retrofit. The latter is more complex and hence becomes a challenge. Since the upgradation is of an existing working network, any change or addition has to be retrofitted to it. While doing so, several guidelines should be taken into account.

The upgradation must be transparent to the end user. There must not be any requirement for configuration changes in the user desktop PCs, such as IP address, IPX address, or gateway. To take care of this, one should plan to use a similar configuration at the core switch, creating the same type of VLANs with the same gateway interface addresses. For example, if the existing setup has VLAN X with gateway interface 10.10.1.1 and IPX network number 0x00000010, then the new setup will have the same configuration.
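
As a sanity check during such a cutover, a short script can compare the two configurations. The sketch below is only a minimal illustration with hypothetical VLAN names and addresses; it verifies that each VLAN's gateway and IPX settings are identical on both cores.

```python
# A minimal parity check (hypothetical data, not the actual IGCAR setup):
# each VLAN on the new Gigabit core should mirror the gateway interface
# address and IPX network number used on the old ATM core.

# VLAN name -> (gateway interface address, IPX network number)
old_atm_vlans = {
    "VLAN_X": ("10.10.1.1", 0x00000010),
    "VLAN_Y": ("10.10.2.1", 0x00000020),
}

new_gigabit_vlans = {
    "VLAN_X": ("10.10.1.1", 0x00000010),
    "VLAN_Y": ("10.10.2.1", 0x00000020),
}

def check_parity(old, new):
    """Report any VLAN whose settings differ between the two setups."""
    for vlan, settings in old.items():
        if vlan not in new:
            print(f"{vlan}: missing on the new core switch")
        elif new[vlan] != settings:
            print(f"{vlan}: mismatch {settings} -> {new[vlan]}")
        else:
            print(f"{vlan}: identical, transparent to users")

check_parity(old_atm_vlans, new_gigabit_vlans)
```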

As the upgradation is of a network that is mainly used by scientists and engineers for research, the downtime must be as low as possible. To keep it to a minimum, both networks would co-exist for some time during the transition, so that individual local area networks could be shifted one by one from ATM to Gigabit.

Also, laying new cables is both labour-intensive and costly. Since the optical fibre cables already laid are composite cables, they provide the single mode fibres required to support Gigabit Ethernet over the required distances.

In the existing setup, most of the Workgroup switches are connected through the multimode fibre cores, whereas Gigabit operation on multimode fibre is limited to 220 m using an SX transceiver and 550 m with an LX transceiver and mode-conditioning patch cords. But on the campus, most of the Workgroup switches are placed in buildings more than 500 m from the Enterprise switch. Hence it is not possible to use the multimode fibre cores for Gigabit operation, and the single mode cores must be used instead. With composite fibre cable, it is possible to terminate the single mode fibre cores with connectors and use them for Gigabit operation.
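
The distance rule above translates directly into a per-segment media decision. The following sketch encodes the limits quoted in the text (220 m for SX, 550 m for LX with mode-conditioning patch cords, single mode beyond); segment names and distances are purely illustrative.

```python
# Media selection for Gigabit Ethernet based on segment length, using the
# distance limits stated in the text. Segments and distances are made up.

def select_media(distance_m: float) -> str:
    if distance_m <= 220:
        return "multimode fibre, 1000Base-SX transceiver"
    if distance_m <= 550:
        return "multimode fibre, 1000Base-LX + mode-conditioning patch cord"
    return "single mode fibre, 1000Base-LX transceiver"

segments = {"Lab-A": 180, "Lab-B": 450, "Lab-C": 1200}  # metres (hypothetical)
for name, dist in segments.items():
    print(f"{name} ({dist} m): {select_media(dist)}")
```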

Hence, the first step was identifying the segments where a switch-over from multimode to single mode was required. Except for a few locations, most of the 70 segments required a switch-over to single mode.

The next decision was regarding the extent of the upgradation: whether to replace only those switches dealing with ATM and retain the remaining Fast Ethernet switches, or to replace all the Workgroup switches.

It was decided to go in for the complete replacement of the Workgroup switches with Gigabit Ethernet switches, providing Gigabit speeds in every segment of the campus network so that all the servers placed in various buildings work at Gigabit speed.

Configuration Issues



The Enterprise switch, Workgroup switches, and Edge switches are configured with appropriate network addresses. The whole network is divided into 20 VLANs, and each VLAN is given a VLAN ID and a VLAN name. It was planned to use the port-based VLAN concept at all levels of the network. Previously, there were port-based VLANs at the workgroup and edge levels and subnet-based VLANs in the core switch.

The Edge switches are configured for the required VLAN based on the location in which they are to be fixed, by configuring the ports of the switch for that VLAN ID. The default passwords are changed, and telnet and Web access are disabled. The Workgroup switches are configured in the same way. In all the switches, there is more than one VLAN; all the required VLANs are configured, and the ports are allocated to the VLANs.
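
Port-based VLAN assignment is simple enough to model in a few lines. The toy model below (not a vendor CLI; port counts and VLAN IDs are made up) shows the only state the scheme needs: a map from each switch port to one VLAN ID.

```python
# A toy model of port-based VLAN assignment on an edge switch: every port
# maps to exactly one VLAN ID. Port counts and VLAN IDs are illustrative.

class EdgeSwitch:
    def __init__(self, name: str, ports: int = 24):
        self.name = name
        # All ports start unassigned until configured for a VLAN.
        self.port_vlan = {p: None for p in range(1, ports + 1)}

    def assign(self, ports, vlan_id: int):
        for p in ports:
            self.port_vlan[p] = vlan_id

sw = EdgeSwitch("lab7-edge1")
sw.assign(range(1, 13), vlan_id=10)   # first 12 ports to the lab's VLAN
sw.assign(range(13, 25), vlan_id=20)  # remaining ports to a second VLAN
print({p: v for p, v in sw.port_vlan.items() if p in (1, 12, 13, 24)})
```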

To Design a Network

It's choosing the appropriate Enterprise, Workgroup, and Edge switches that can be tricky. The guiding factors are:

  • The transition has to be transparent to users, except for the higher speed
  • Since it is a backbone network connecting more than 50 LANs, all the protocols required for data communication, legacy as well as futuristic ones, shall be supported
  • Being for an R&D organization, the switches shall cater to the needs of the engineers and scientists for at least the next five years in terms of the growing demand for higher network speed and new applications
  • The Enterprise switch shall support a certain amount of security features to protect the various servers, though security is also taken care of through many other mechanisms
  • The cost

In the Enterprise switch, the 20 VLANs are created. The default passwords are changed, and telnet and Web access are disabled in the core switch for security reasons. For network management, SNMP v3 is used: a username with a password is created, and DES is used for encryption. SNMP traps are configured to be sent to the NMS station.
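
For illustration, an SNMPv3 query of the kind the NMS would issue can be written with the pysnmp library. This is only a sketch: the username, passwords, authentication protocol (SHA is assumed; the article specifies only DES privacy), and switch address are placeholders.

```python
# A minimal SNMPv3 GET in the style described above, using pysnmp (assumed
# available). Credentials and the switch address are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, UsmUserData, UdpTransportTarget, ContextData,
    ObjectType, ObjectIdentity, usmHMACSHAAuthProtocol, usmDESPrivProtocol,
)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    # SNMPv3 user with authentication and DES privacy, as in the article;
    # SHA for authentication is an assumption.
    UsmUserData("nmsuser", authKey="auth-pass", privKey="priv-pass",
                authProtocol=usmHMACSHAAuthProtocol,
                privProtocol=usmDESPrivProtocol),
    UdpTransportTarget(("10.10.0.1", 161)),  # core switch (placeholder)
    ContextData(),
    ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f"{name} = {value}")
```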

The IP and IPX network addresses are configured as and when the VLANs are shifted from the ATM network to the new network; this is mainly because the same gateway address has to be configured for each VLAN. So when a VLAN is shifted, the interface address of that particular VLAN is deleted from the ATM switch and added to the Gigabit core switch. Until the last VLAN is shifted, a static route pointing to the ATM switch is kept for inter-VLAN communication and external communication for all VLAN members. After all the VLANs are shifted, the static route is changed to point to the firewall for external communication.
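
The cutover order matters, so it helps to see it as plain data. The sketch below models the sequence just described: VLANs move one at a time from the ATM core to the Gigabit core, and the static route is repointed at the firewall only after the last one has moved. All names and addresses are illustrative.

```python
# Modelling the VLAN cutover: shift gateways one VLAN at a time from the
# ATM core to the Gigabit core; keep a static route via the ATM switch
# until the last VLAN has moved. Names and addresses are illustrative.

atm_core = {"VLAN_%d" % i: f"10.10.{i}.1" for i in range(1, 4)}
gig_core = {}
gig_static_route = "via ATM switch"  # external/inter-VLAN traffic, initially

def shift_vlan(vlan: str):
    global gig_static_route
    # Delete the gateway interface on the ATM switch, add it on the new core.
    gig_core[vlan] = atm_core.pop(vlan)
    if not atm_core:  # last VLAN shifted: repoint external traffic
        gig_static_route = "via firewall"

for vlan in list(atm_core):
    shift_vlan(vlan)
    print(f"shifted {vlan}; static route now {gig_static_route}")
```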

In this way, the ATM backbone and the Gigabit backbone co-existed with the same configuration throughout the transition.

The Backbone



The upgraded campus backbone network setup is similar to the old one, with a 3-tier architecture, but the backbone is now Gigabit Ethernet with a speed of 1000 Mbps. In the first level, an Enterprise switch with 24 fibre GBIC ports is used as the Core switch. In the second level, about 20 Workgroup switches with 24 Gigabit ports each are placed in different locations. In the third level, about 50 Edge/Access switches with Gigabit (fibre or copper) uplinks, a few Gigabit ports for server connectivity, and 100Base-TX Fast Ethernet downlinks are placed in various laboratories.

The Enterprise switch is connected to the Workgroup switches using the single mode cores of the composite fibre cable. The Workgroup switches are connected to the Edge switches using the multimode or single mode cores of the composite fibre cable, based on the distance between the switches.

For managing the complete network, state-of-the-art network management software supporting SNMP v3 was commissioned.

First, the network was thoroughly tested against the functional requirements. Various in-house applications were used to ensure that all services and protocols were working satisfactorily. Open source tools like Ethereal were used to trace various services and protocol communications and to ensure that they worked as per standards.

Ethereal is a package that configures the system's network interface for promiscuous-mode operation, collects all the packets on the network, and displays them on the monitor. It has various features to filter the data by protocol, IP address, etc.
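
A comparable capture can also be scripted. The sketch below uses the scapy library (an assumption; any promiscuous-mode capture tool would do) and needs root privileges; the BPF filter is just an example.

```python
# A small capture sketch in the spirit of the Ethereal checks described
# above, using scapy (assumed installed; requires root privileges to put
# the interface into promiscuous mode).
from scapy.all import sniff

# Capture ten packets matching a BPF filter and print one-line summaries,
# e.g. to confirm that a service is speaking the protocol it should.
sniff(filter="tcp port 80", count=10, prn=lambda pkt: print(pkt.summary()))
```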

The performance of the campus network was tested using the open source software NetPIPE. NetPIPE (Network Protocol Independent Performance Evaluator) is a protocol-independent performance tool for comparing different networks and protocols. It performs simple ping-pong tests, bouncing messages of increasing size between two processes, whether across a network or within an SMP system. Message sizes are chosen at regular intervals, and with slight perturbations, to provide a complete test of the communication system. Each data point involves many ping-pong tests to provide an accurate timing. It also has an option to measure performance without cache effects. The NetPIPE tool performs other evaluation functions, too.
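
The ping-pong idea itself fits in a few lines. The toy below is not NetPIPE, just a minimal illustration of the method: it bounces messages of increasing size between two endpoints (here both in one process, over a socketpair) and reports the average round-trip time and implied throughput.

```python
# A toy ping-pong measurement in the spirit of NetPIPE (not the real tool):
# bounce messages of increasing size and time many round trips per size.
import socket, threading, time

def echo(server: socket.socket):
    """Echo everything back, like the remote end of a ping-pong test."""
    while True:
        data = server.recv(65536)
        if not data:
            break
        server.sendall(data)

a, b = socket.socketpair()
threading.Thread(target=echo, args=(b,), daemon=True).start()

for size in (64, 1024, 16384, 65536):  # message sizes at regular intervals
    payload = b"x" * size
    reps = 50  # many ping-pongs per data point for a stable timing
    start = time.perf_counter()
    for _ in range(reps):
        a.sendall(payload)
        got = 0
        while got < size:  # recv until the whole echo is back
            got += len(a.recv(65536))
    rtt = (time.perf_counter() - start) / reps
    print(f"{size:7d} B  rtt {rtt*1e6:9.1f} us  "
          f"~{2 * size / rtt / 1e6:8.2f} MB/s")
a.close()
```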

SAV Satya Murty, K Vijaykumar, TS Kavithamani, P Selvaraj, Jemimah Ebenezer, U Malliga, S Athinarayanan, P Swaminathan

Indira Gandhi Centre for Atomic Research, Kalpakkam






dqindiamail@cybermedia.co.in
