— Dr Werner Vogels, Chief Technology Officer, Amazon
As cloud continues to gain widespread acceptance among enterprises in India, cloud service providers are looking to increase their presence and market share in the Indian market. Dr Werner Vogels, Chief Technology Officer, Amazon, the man driving the technology behind Amazon Web Services, was in India recently, and Dataquest caught up with him for a chat. Excerpts:
Even as the cloud continues to gather popularity and a lot of companies look at moving workloads to the cloud, core applications still largely remain off the cloud. What is the challenge you see here?
First of all, I don’t see a reluctance to move core apps to the cloud. There is a widespread business of moving core SAP or Oracle applications on top of AWS. So, I am not really seeing a reluctance to move core apps to the cloud.
But, what I do see sometimes are architectural challenges in moving some of these apps to the cloud, because they were not built to be elastic or highly available. So, lifting and shifting some of these things will give you some financial benefits, but it doesn’t give you the operational excellence benefits. So, while you can migrate them, they don’t become super scalable cloud applications.
Because they aren’t native cloud apps?
Yes. So, you can move them. But it’s not a matter of whether these apps can run on the cloud; it is whether you can get all the benefits that the cloud offers you. There needs to be an RoI. Even when we look at Amazon.com, there are a few pieces that still haven’t moved over to AWS, purely because they are tied to a particular architecture that we had in mind. Lifting and shifting them to AWS wouldn’t have delivered much RoI.
CIOs are also concerned about the ability to move their workloads from one public cloud to another in a reliable and seamless way. Currently that seems to be a challenge. What is Amazon doing about it?
I haven’t heard any customer asking for this functionality.
SLAs. That’s another issue over which public cloud service providers face flak. As each customer’s requirement is different, they will also need different tailor-made SLAs. How can SLAs be customized for each customer, or do you think that need is exaggerated?
If you look at uptime and reliability, we are architected in one way for all our customers. And we are architected for 100% uptime; that’s our goal. Maybe the legal document says something like 99.99%, but many of our customers realize that. Our customers want us to drive down our overall cost of operations, and they enjoy the fact that we have lowered our prices 45 times.
Customers also realize that much of this SLA work from the past is just insurance. They had to pay more, basically, as an insurance premium. Our customers are very happy to take the basic standard SLAs, and if they want something more than that, they could talk to an insurance company to achieve it. The additional levels of SLA are purely an insurance business, and we are not in the insurance business.
Do you honestly feel that all workloads should ideally be on the cloud? Or do you feel that there are some workloads that one is better off running on premises?
I think eventually all workloads should be able to run in the cloud. Maybe in the high performance compute (HPC) world, or in the true supercomputer world where applications have been built against very specific architectures, with very specific low-latency interconnects, etc, it will take some time before those kinds of apps move over to the cloud. However, HPC in the cloud, I would like to believe, serves something like 70-80% of existing HPC workloads. I am always interested in looking, from a technology perspective, at what are still some of the stumbling blocks in moving some of these things over to the cloud. Most of the stumbling blocks that I have seen are processor-dependent kinds of apps. Over time, I don’t know what can’t be moved.
As the competition in the public cloud space heats up—there are many Indian companies like CtrlS, NetMagic, etc, entering the space in addition to companies like Rackspace—how does AWS differentiate itself from the rest?
There are three things that differentiate us. First of all, the breadth and depth of our services: we have over 40 services now. It’s not just a matter of having a datacenter; it’s not just about having storage and compute. For instance, our relational database service is extremely important for our customers, especially those that run mission-critical apps. They don’t want to run their own databases; they want to make sure that the databases are scalable and managed in our environment.
Secondly, our focus on reducing cost and price for our customers, whether through price reductions or by working with individual customers to reduce their bills (by giving them insights into their usage patterns).
Third, of course, is the relentless pace of innovation. Last year, we delivered 280 new and major features and services to our customers, and this year, by the end of August, we were already at 280. We can do this because our customers are part of our continuous feedback cycle.
Do you fear that some public cloud services, like IaaS, PaaS, etc, can get commoditized in the future?
I think we have always seen this as a ‘high volume, low margin business.’ And given that, some level of commoditization will happen all the time. But that’s more likely to happen at the lowest building-block levels, like compute or storage, than at, say, the analytics level.
Increasingly, we see that public cloud service providers like AWS are using bespoke, custom-built servers in their datacenters. Why do you think that’s the case?
Actually, it’s more than just servers; it includes networking as well. Why? Because datacenter efficiency is extremely important for us. Driving costs down and driving efficiencies in the datacenters is extremely important because that directly results in cost savings for our customers and helps us in reducing our prices.
Apart from scale and efficiency reducing costs and prices, a very important part is innovation with respect to the datacenter. And most of those innovations are about driving more efficiencies in the datacenter. There is a wide variety of levers available there, whether it’s power and cooling, server design, network design, or layout; all of these things together form the bigger picture. So, we need to do a lot of innovation at the datacenter design level, and we want to have total control.
Why is it that you have to innovate at the datacenter level? Do you think companies like HP or Dell or Cisco, which are perhaps supposed to innovate in that space, have abdicated their responsibility towards customers like you?
I don’t know, and I can’t speak for them. But a general statement one can make is that while the cost of storage and the cost of compute have gone down tremendously over time, the cost per network port has not gone down. So, that means that at our scale there would be a significant surcharge for our customers if we were not innovating.