Data Centers: No More Dull and Dark

Tulip Telecom’s newly inaugurated data center, named Tulip Data City (TDC), is the 3rd largest data center in the world and the largest in Asia. At first glance, it could pass for any office building with unusually high security, or a premium hotel with 2 huge transformers standing like sentries at the entrance. The data city is state-of-the-art both in design and in business model.

As already reported in the media, Tulip Telecom expects to generate around ₹1,000 crore in annual revenue within 3 years and has already bagged 3 deals worth ₹600 crore on an annual basis from HP, NTT, and IBM.

Tulip already has 5 data centers in the country, located in Mumbai (2), Delhi (1), Bengaluru (1), and Kolkata (1); TDC would act as their DR center and vice versa. Tulip is looking at various customer engagement models, starting with co-location and moving up to hosted applications, entire floor plates for large customers, and infrastructure as a service (IaaS). It has also partnered with EMC to offer storage as a service, and can additionally provide disaster recovery and managed services.


Keeping with the Trend

Harjyot Soni, CTO, Tulip Telecom says, “Building such a large facility is in keeping with the trend in the data center market, where, in order to ease management and bring economies of scale, smaller data centers are consolidating into large facilities across the globe. We believe that building such a large facility holds much value in India as well. Tulip has successfully created a truly world-class facility, which would definitely trigger a rapid growth of cloud in India and act as a platform to roll out numerous services including cloud computing, SaaS, and managed applications, to name a few.”

According to him, the project was completed in a record time of 10 months, with the design taking around 4 months and the execution phase around 7 months. The team faced multiple challenges along the way, from the design phase to the power back-up systems.

The data center has been designed based on the recommendations of the Telecommunications Industry Association. Drafted in 2005, TIA-942 was the first standard to address data center infrastructure. It covers aspects such as space and layout, cabling infrastructure, redundancy, environmental conditions, and energy.

“During the design phase, the power distribution system was one of the key challenges, since this facility is designed to consume about 100 MW of power, and distributing that much power efficiently was a critical design problem. Power back-up for such a high load also demanded correctly sized back-up systems, and their selection required significant attention. Another major challenge came during the initial survey for basic systems such as EMI/EMC.”

Satish Viswanathan, AVP, products, Tulip Data Center Services says, “We have meticulously designed this center on 3 major criteria: power, cooling, and space.”

 

Power

The center is provisioned for up to 66 kV line power from 2 sub-stations in a high-tension format. As of today, the center has government sanction to receive up to 40 MW of power, has up to 7.5 MW available, and is using only around 4 MW. On the energy benchmarks, the PUE works out to 1.9 at full capacity, and at present it is in the range of 1.4-1.6.
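The PUE figures above can be sanity-checked with simple arithmetic: PUE is total facility power divided by IT equipment power. The load figures in the sketch below are illustrative, not Tulip’s actual metering data.

```python
# PUE = total facility power / IT equipment power; 1.0 is the ideal.
# The load figures below are illustrative, not Tulip's metered values.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

# At a PUE of 1.9, every 100 kW of IT load draws 190 kW from the grid;
# at 1.5 (mid-point of the reported 1.4-1.6 range) it draws only 150 kW.
print(pue(190.0, 100.0))  # 1.9
print(pue(150.0, 100.0))  # 1.5
```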

 

According to Viswanathan, following the principles of modular design, each floor plate of the data center has its own sub-station and transformers, which step down the high-tension (11 kV) supply directly and help reduce transmission loss. New dry-type transformers have been used to avoid any external damage to the equipment.

They have maintained N+N redundancy for every component in the data center, including the transformers. The rooms are also equipped with Novec 1230 extinguishers, fire-retardant paint on the doors that can withstand a blazing fire for up to 2-3 hours, and 200-400 kVA UPS systems with back-up batteries that can carry the full load for 20 minutes.
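The quoted 20-minute full-load runtime can be roughly sanity-checked. The battery capacity, power factor, and inverter efficiency below are assumptions chosen for illustration, not Tulip’s actual specifications.

```python
# Back-of-envelope UPS battery runtime. All figures here are illustrative
# assumptions (battery capacity, power factor, inverter efficiency),
# not Tulip's actual specs.
def runtime_minutes(battery_kwh: float, load_kw: float,
                    inverter_eff: float = 0.92) -> float:
    """Minutes a battery bank can carry a given load through an inverter."""
    return battery_kwh * inverter_eff / load_kw * 60

# A 400 kVA UPS at an assumed 0.9 power factor carries ~360 kW at full load;
# about 130 kWh of usable battery then gives roughly the quoted 20 minutes.
print(round(runtime_minutes(130, 400 * 0.9), 1))  # 19.9
```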

 

Cooling

Claiming a first of its kind, TDC has a separate PAHU (precision air handling unit) room and separate IT and electrical rooms, with no non-IT equipment entering the server room. TDC follows a closed-containment cooling system, where the entire rack space sits inside a glass enclosure, so that only the containment, rather than the whole room, needs to be cooled.

The chiller plant sits on the roof of the building. The water follows a closed loop, wherein the same water is used over and over again.

 

Space

While the entire TDC is spread over 900,000 sq ft (as big as 12 Taj Mahals put together), the server built-up area would be around 45,000 sq ft. Every floor plate has a capacity of around 20,000 sq ft, or up to 600 racks at 6 kW per rack.
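Taking the per-rack provisioning as 6 kW (a typical rack power density), the implied per-floor IT load can be sketched as follows; this is simple arithmetic on the article’s figures, not official capacity data.

```python
# Implied floor-plate power arithmetic, assuming 6 kW per rack
# (a common rack-density reading of the article's per-rack figure).
racks_per_floor = 600
kw_per_rack = 6
floor_area_sqft = 20_000

floor_it_load_mw = racks_per_floor * kw_per_rack / 1000          # MW of IT load
watts_per_sqft = racks_per_floor * kw_per_rack * 1000 / floor_area_sqft

print(floor_it_load_mw)  # 3.6 MW of IT load per floor plate
print(watts_per_sqft)    # 180.0 W/sq ft design density
```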

Every enclosure has a surveillance camera and rows of pipelines hanging from the ceiling. One set is VESDA (very early smoke detection apparatus) piping, which detects any change in the chemical composition of the air and raises an alert before a fire breaks out; a second line of pipelines carries inert gas for fire suppression.

The building has 4 towers, with a warehouse and staging area on the ground floor and a NOC on the first floor. The NOC is an 80-seater space with provision for IT and non-IT control and L1/L2 support. The building also offers a telco room for telecom service providers, and a 1,500-seat space available as a DR center for clients.

Apart from this, TDC has looked at various other facilities for client convenience, eg, around 50 meeting rooms including a videoconferencing room, and a town hall area with a capacity of 300.
