Migrations to hyper-scalers have changed the mix of workloads: Radhika Ramesh, Capgemini

Radhika Ramesh, EVP – Global Delivery Center Head, CIS, Capgemini, talks to Dataquest about the journey of the data centre industry in India.

How much has the data centre changed over the last ten years? 

Over the last 10 years, we have seen significant changes. First, in the competitive landscape in which we offer data centre services, the cloud hyper-scalers (Azure, AWS, Google) are now the first choice for many of our customers rather than the exception.

Migrations of workloads from our sites to hyper-scalers have changed the mix of workloads we support. Latency from our sites to the hyper-scalers has become a key measure and drives decisions on where to locate new data centres. Over the last 10 years we have moved away from heterogeneous hardware components from various manufacturers towards shared technologies and the adoption of x86, virtualisation and open source, and finally to converged infrastructure, where virtualisation and software-defined networks and storage deliver powerful network, storage and compute in a single rack.

These advances have increased the power density (power per rack) required in data centres, changing the design, layout and cooling solutions we use. A combination of this increased power density and the migration of many services to the cloud has led to a reduction in the amount of data centre space our clients need and that has led to a consolidation in the number of data centres we run globally at Capgemini.

“Latency from our sites to the hyper-scalers has become a key measure and drives decisions on where to locate new data centres.”

So what happens as we move ahead?

As we move forward, it will become less and less common for end customers to own data centres, and even service providers like us are moving towards hyper-scaler data centres, with customised co-location services bought from dedicated data centre providers.

One final change that is still emerging is the demand for very low-latency content provision, and for IoT devices with large data volumes or low-latency needs, both of which require compute solutions to sit at the edge of the network. Over the next few years this will drive the creation of edge data centres in some form.

Any specific customer needs or demands that you have observed?

We at Capgemini have seen a change in the demand patterns of customers. Enterprises would like to limit CAPEX investment in data centre identification, acquisition and setup. They are increasingly trying to divest or close their own data centres and are depending on system integrators to provide end-to-end service and resource consumption-based models for co-hosting and a hybrid strategy. Increasingly, though, customers are also looking to disaggregate data centre hosting from service provision: moving workloads is complex and expensive, so it is better not to have to move them when you are unhappy with a system integrator’s service.

Most clients have a ‘cloud-first’ strategy, preferring SaaS (Software as a Service) solutions. This takes clients into a multi-cloud environment.

Where systems have to be hosted outside of the cloud, the focus has shifted to standardisation, virtualisation, hyper-converged technology and software-defined solutions. More and more often these workloads are considered an extension of the cloud rather than the other way around, leading recently to the expansion of services such as AWS Outposts and Azure Stack.

This leaves traditional data centres managing a complex mix of shrinking workloads that cannot move to the cloud, whether because of security, data sovereignty or low-latency demands, or because of legacy hardware that is difficult to migrate, such as a mainframe. For example, large financial institutions that continue to run mainframes alongside their move to the public cloud may take some time to transform completely.

“Moving workloads is complex and expensive; it is better not to have to move them when you are unhappy with a system integrator’s service.”

How much will IoT matter, especially from the perspective of ever-escalating data requirements?

Applications like autonomous vehicles and smart devices used in manufacturing or home appliances will require edge computing – processing that occurs at the edge of the network. This problem can be solved in a number of ways and it is not yet clear which solutions will be widely adopted. It is probable that network providers will play a pivotal role as they deliver 5G networks.

We will likely see the emergence of many micro data centres at the edge of the network, which will perform immediate processing. These edge DCs will also play a role in content distribution and hosting for applications such as gaming that require very low latency.

Cloud hyper-scalers will complete the picture, with processed data being sent to the cloud for aggregation, further processing and the application of AI/ML to feed intelligent business operations.

Are our data centres adapting well to VMs, cloud workloads, modularisation, solar-powered servers and localisation?

The modern data centre needs to be very efficient (ideally with a low PUE of less than 1.1), have the ability to host high-density workloads and have low latency to the hyper-scalers (so it needs to be physically close to the providers). This means many older DCs will either be in the wrong place, be limited by the power network or were constructed with inefficient cooling solutions. These DCs are unlikely to have a long-term future.
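For readers unfamiliar with the metric, PUE (Power Usage Effectiveness) is the industry-standard ratio of total facility power to IT equipment power, so a value of 1.0 would mean every watt goes to compute. The sketch below is illustrative only; the function name and sample figures are invented for the example, not drawn from Capgemini's facilities.

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT load.

    1.0 is the theoretical ideal (zero overhead for cooling, lighting,
    power distribution); modern efficient facilities target below 1.1.
    """
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# A hypothetical facility drawing 1,100 kW in total to run 1,000 kW of IT load:
print(pue(1100, 1000))  # → 1.1
```

At a PUE of 1.1, only about 9% of the facility's power is overhead, which is why the figure serves as a quick proxy for cooling and power-distribution efficiency.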

Moving forward, it is less and less likely that owning a DC makes sense for most organisations. Data centre providers are best positioned to adapt to varied needs, from traditional hosting and co-location services to private cloud and hybrid/multi-cloud setups. Edge data centres will continue to spring up in manufacturing and residential hubs and at key transport intersections to allow for processing of IoT workloads.

“We will likely see the emergence of many micro data centres at the edge of the network, which will perform immediate processing.”

What is the importance of data sovereignty in this new realm?

Data sovereignty is an area of growing significance. It drives the placement of workloads into particular countries and is in many cases a barrier to moving to hyper-scalers. The same is true for restricted data that has to be handled at a high security level.

By Pratima Harigunani

maildqindia@cybermedia.co.in
