While ordering or waiting for your order at McDonald’s, you might have noticed how their kitchen works. The crew members in the kitchen and at the front counter coordinate closely to serve customers within minutes of their order. However, the success of quick-service restaurants doesn’t depend only on staff coordination; it also depends on two more critical areas: scheduling and partial preparation of the food.
In serverless computing, the application code is stored in cloud storage (the kitchen store) and is moved to the compute space (the kitchen) only when required (an order is placed). The code is executed, the output is delivered (order delivery), and the code is then destroyed (in McDonald’s case, the food is consumed).
To understand the significance of serverless computing, let’s go back and track how application deployment has evolved over time. Historically, we used to allocate dedicated machines (compute and storage) in our data centers to run applications. In most cases, the upfront CAPEX outlay on infrastructure and software licenses was very high. The result was an over-cautious technology group with a diminished zeal for innovation. Besides this, the lead time required to build the infrastructure was often a non-starter for business users. Also, even if you scientifically calculated the peak infrastructure load requirements for your applications, the sizing would never be perfectly accurate; there would always be undersized or oversized infrastructure. Moreover, organizations have rarely factored in the cost of infrastructure that is unutilized or underutilized because an application sits idle or dormant. Imagine one of your applications being rarely used during these COVID days.
“Cloud Computing” has addressed this problem, but only partially. With cloud computing, you convert the front-loaded CAPEX outflow into a well-distributed, amortized OPEX. You can provision infrastructure within minutes or hours, and it can be scaled out for peak load and scaled in for non-peak load to reduce cost. However, this is only a partial solution. Many cloud vendors charge their customers in a “prepaid” OPEX mode, i.e., you pay in advance on the assumption that you will utilize the planned infrastructure to the fullest. Even if your cloud vendor charges you at the beginning of every quarter, the “pay-per-use” model is not in its pure form.
Cloud computing has another stark limitation. In behavioral economics, there is a term called the “sunk cost fallacy”. In layman’s terms, it is a behavioral trait whereby we order too much food and then over-eat just to “get our money’s worth”. Let me give you one more example. Do you remember the last time your family and you went to a theme park, purchased a cheaper but all-inclusive ticket, and tried your best to exhaust the offer within a day? In cloud computing, we can easily fall into the “sunk cost fallacy” trap and utilize the hired infrastructure inefficiently.
The reason we may fall into this trap is that in cloud computing, scaling out infrastructure to meet peak load may involve some latency and may require human intervention. Hence, in most cases, we don’t compromise on compute and storage requirements and tend to allocate more than required. The problem is more severe when we are launching a new business application whose revenue is generated only after a certain period, not instantly from day one.
Now let’s evaluate serverless computing. It brings all the benefits of cloud computing and, additionally, delivers the “pay-per-use” model in its purest form. Let us explore how…
Imagine you have written an application that is broken into microservices, and in turn into a series of functions. These functions are stored in cloud storage and are invoked only when a user or another microservice calls them. On invocation, the cloud provides the requisite amount of compute, scales out if necessary, executes the function, and then destroys the code from the compute space. You are charged only for the period when the code was executing in compute, not for its dormant storage.
This “Function as a Service” (FaaS) model provides scalability at runtime. Its granular level of “pay-per-use” optimizes your cost to a great extent. Additionally, as a developer, you need not worry about the infrastructure required to run the code; you can focus completely on building the application logic.
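To make the FaaS model concrete, here is a minimal sketch of a serverless function written in the handler style used by platforms such as AWS Lambda. The event payload shape (a `"name"` key) is an illustrative assumption, not something prescribed by any particular platform; the point is that the developer writes only this function, and the platform handles provisioning, scaling, and teardown around it.

```python
import json

# A minimal FaaS-style handler sketch. The platform invokes this entry
# point with the triggering event; it provisions compute on demand, runs
# the function, and destroys the environment afterwards. The "name" key
# in the event is an assumed, illustrative payload.
def handler(event, context=None):
    name = event.get("name", "world")
    body = {"message": f"Hello, {name}!"}
    # FaaS platforms typically expect a serializable response object.
    return {"statusCode": 200, "body": json.dumps(body)}
```

Note that the function is stateless: everything it needs arrives in the event, which is what allows the platform to spin instances up and down freely and bill only for execution time.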
Although serverless computing as a concept has existed in some form for several years, AWS brought it to cloud customers through AWS Lambda. Today, Google Cloud Functions, Microsoft Azure Functions, and IBM’s OpenWhisk provide serverless computing to their customers.
Oracle is not far behind and has taken an interesting approach. Under Chad Arimura’s leadership, it developed a Lambda competitor, the Fn Project. The technology was originally developed by Arimura’s startup, Iron.io. His unit, called the Serverless organization, works on Fn both as an open-source product and as a managed service offering within Oracle’s cloud service. He aims to make serverless far less intimidating for companies running the traditional back-end applications that Oracle specializes in. Oracle can capitalize on its “strategic advantage” in the shape of large enterprise customers, with large application installations and the Java community, through “real, seamless, and elegant integrations with the Serverless architecture.”
Finally, let’s understand the kind of applications or functions that are suitable for serverless architecture. The common use cases are as follows:
- Periodic multimedia processing required to transform media files
- BOTs where scaling can be done automatically as per the peak load
- Data transformation functions in ETL which can be triggered on an event
- Change Data Capture
- Batch processing of scheduled tasks that need massive parallel computation and I/O
- IoT data / Big data cleaning and initial processing
- Any part of business application which is triggered only periodically such as a bank’s KYC process
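What these use cases share is that the work is stateless, event-triggered, and bursty. As a hedged illustration of the event-triggered data-transformation case above, the sketch below shows a function that cleans records arriving with an event, say, rows parsed from a newly uploaded file. The record layout (`id` and `email` fields) and the cleaning rules are illustrative assumptions.

```python
# Sketch of an event-triggered ETL-style transformation, the kind of
# short-lived, stateless work that suits a serverless function. The
# record fields and cleaning rules below are assumed for illustration.
def transform_records(records):
    """Drop incomplete rows and normalize email casing/whitespace."""
    cleaned = []
    for rec in records:
        if not rec.get("id") or not rec.get("email"):
            continue  # skip incomplete records
        cleaned.append({"id": rec["id"],
                        "email": rec["email"].strip().lower()})
    return cleaned

def handler(event, context=None):
    # The platform passes the triggering event (e.g., metadata about a
    # new file plus its parsed rows) to this entry point.
    return {"processed": transform_records(event.get("records", []))}
```

Because each invocation is independent, the platform can fan out many copies of this function in parallel during a load spike and charge nothing once the burst subsides.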
We are in an interesting technology era, and serverless computing makes a lot of sense. However, I would recommend that we first get onto the “cloud-native” path, understand microservice architecture and its relevance to our applications, and experiment multiple times before taking the serverless computing plunge.
By Shrikant Navelkar, Director, Clover Infotech