By Manish Gupta, Director, Enterprise Solutions Group, Dell India
The requirements placed on the datacenter of the average organization have transformed in scale. Necessity, as they say, is the mother of invention. Organizations today require storage, processing and network capabilities on a much larger scale than before, and must also be able to adapt quickly to stay competitive in a dynamic business environment. The answer lies in hyperscale computing, a term derived from the datacenter efficiency displayed by large-scale Web companies such as Google, Facebook and Amazon.
Hyperscale computing, or Web-Scale IT, is, simply put, a distributed computing environment in which increases in the volume of data, or in the types and volume of workloads, can be accommodated quickly and cost-effectively. Hyperscale environments have hardware and service needs that are distinct from those of traditional datacenters. Because these environments run on focussed, process-driven operational methodologies, organizations do not want to pay for capabilities they do not need.
For example, because the high availability in hyperscale environments is achieved primarily through software, organizations do not need many of the redundant hardware components or availability tools included with other servers. In addition, these organizations do not typically use same-day parts replacement services; instead, they typically have on-site parts kiosks they can use to service multiple systems during regularly scheduled maintenance windows.
Spike in Interest in Hyperscale Computing
Hyperscale computing in the mainstream datacenter has become possible as several vendors have invested in building such capabilities and making them accessible to the larger market. With greater awareness of the flexibility and efficiency these technologies offer, businesses are paying attention to the possibilities of hyperscale computing. They are curious to know how to apply hyperscale design principles to their more mainstream datacenter needs. So while there are still sharp contrasts between hyperscale and mainstream enterprise computing architectures, leading enterprises are starting to adopt the guiding principles of hyperscale.
The concept has caught the attention and interest of IT decision makers in India. In 2014, Dell, in association with PSB Research, conducted the Global IT Decision Maker Survey, which found that 90% of IT decision makers (ITDMs) in India are interested in making IT work more like primarily hyperscale organizations, and have adopted or are developing some of these concepts, with 77% seeing operational efficiency as an important benefit of hyperscale computing. This is well above the global average, where 75% of ITDMs have shown interest in the approach.
Evolution to Hyperscale
The dynamics of the server market have changed drastically in the past few years, driven by the unique challenges, needs and workload requirements of its different segments. At one end are the large global internet companies, which have evolved seamless IT systems that deliver agility and scalability of another order through new processes, architectures and practices. They are followed by companies operating at massive scale, such as Web tech and HPC players, then by traditional larger enterprises, and finally by small and medium businesses. These distinct workload requirements have translated into four different approaches to the server and resulted in the cross-pollination of best practices.
When it comes to hyperscale, we have observed in working with massive internet companies that the IT architecture needs to be common, while server, storage and networking are modified to suit specific verticals such as analytics and Web serving. Completely reworking architectures for different workloads does not make business sense from a capital expenditure, operating expenditure or efficiency point of view.
The fundamental premise of a hyperscale environment is to build it from blocks of compute, storage, I/O and so on, sized around what is most probably required, which can then be mixed and matched to suit the needs of each application. This allows organizations to maintain a single pool of infrastructure that can be allocated as needs arise. Consider some very different workloads: a Hadoop environment requires memory, I/O and a lot of disk space, but not many CPU cycles or disk IOPS; a database requires disk IOPS and a lot of CPU clock cycles, but may not need significant disk space. A typical environment consists of many such applications, from Web delivery to CRM; putting all of them on one elastic hyperscale environment allows the organization to optimize utilization while minimizing operating expenses.
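The mix-and-match idea above can be sketched in a few lines of code. This is a minimal illustration, not a real scheduler: the class, resource names and capacity figures are all hypothetical, chosen only to show how one shared pool of blocks can serve workloads with very different profiles.

```python
# Illustrative sketch: one shared pool of infrastructure "blocks",
# allocated per workload profile. All names and numbers are hypothetical.
from dataclasses import dataclass


@dataclass
class Pool:
    """A single shared pool of compute, storage and I/O blocks."""
    cpu_cores: int
    memory_gb: int
    disk_tb: int
    disk_iops: int

    def allocate(self, name: str, cpu: int, mem: int, disk: int, iops: int) -> bool:
        # Grant the request only if every resource is still available.
        if (cpu <= self.cpu_cores and mem <= self.memory_gb
                and disk <= self.disk_tb and iops <= self.disk_iops):
            self.cpu_cores -= cpu
            self.memory_gb -= mem
            self.disk_tb -= disk
            self.disk_iops -= iops
            print(f"{name}: allocated {cpu} cores, {mem} GB, {disk} TB, {iops} IOPS")
            return True
        print(f"{name}: insufficient capacity, request denied")
        return False


pool = Pool(cpu_cores=128, memory_gb=1024, disk_tb=500, disk_iops=200_000)

# Hadoop-style job: heavy on memory and raw disk capacity, light on CPU and IOPS.
pool.allocate("hadoop", cpu=16, mem=512, disk=300, iops=10_000)

# Database: heavy on CPU clock cycles and disk IOPS, modest disk space.
pool.allocate("database", cpu=64, mem=256, disk=20, iops=150_000)
```

Because both workloads draw from the same pool, capacity left idle by one profile (the Hadoop job's unused CPU, for instance) remains available to the other, which is the utilization benefit described above.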
The emergence of convergence
New converged architectures are available that provide – for the first time – a common, scalable platform that easily adapts to the ever-shifting business and technology landscape. By leveraging such a building-block concept derived from the large hyperscale IT operations, organizations can better manage, scale and tailor infrastructure to meet business needs as they change over time. While technology to converge server, storage and networking is already here, these new converged architectures will revolutionize how modern enterprises consume and manage IT in 2015.
Leading analyst firms are bullish about the adoption of hyperscale computing, or Web-Scale IT. Research firm Gartner recently predicted that by 2017, Web-Scale IT will be an architectural approach found operating in 50 percent of global enterprises, while IDC predicts that in 2015 Asia is likely to focus on Web-Scale cloud systems. In the datacenter, the most efficient, cost-effective and profitable technologies prove fit to survive market demands and dynamics. We believe hyperscale computing stands among them, thanks to its flexibility, its efficiency and the business benefits it provides to enterprises.