Why Serverless

4 min read Indus Khaitan

Introduction 

With Serverless computing, we have come full circle after fifty years: the public cloud is offered as a utility, consumed and paid for by a granular utilization metric such as an hour or a minute. In the 1960s, time-sharing was the original gangsta of utility computing, enabling multiple users to access a single expensive computer, typically a mainframe. The use of time-sharing declined rapidly as personal computers became inexpensive and users could run the same tasks locally.

In the late 1990s, the Internet brought back the popularity of time-sharing in its own way: with web applications, a single application could be used by many end-users at the same time, thus “sharing” the computing power. Today’s web applications have evolved sophisticated architectures like Serverless, enabling workloads to be divided so applications can run at scale across a farm of bare-metal servers. This post focuses on how we got here and why Serverless is the best approach to unleash full developer productivity.

Getting off the ground with single-tenancy

The early web applications were in fact desktops masquerading as webservers, exposing HTTP endpoints that served the output of an “executable” file. A bare-metal server running a single application in a datacenter was an acceptable production architecture. But most of the time this bare metal sat idle when not serving its users. This led subsequent architectures to partition the bare metal and reuse the spare capacity for other applications. Two partitioning schemes emerged, depending on whether:

  1. The Operating System (OS) was partitioned, so that multiple copies of an application (or another OS) could run on a single OS.
  2. The bare metal was partitioned, so that the hardware could run multiple Operating Systems, each of which could then run multiple applications.

Multi-tenancy by virtualization

The cost of scaling up a single-tenant application grows steeply, and the capacity is left unutilized once demand fades. Virtualization enabled this excess capacity to be shared by another application without the risk of sharing data, whether in memory, on disk, or transiting the network. Companies like VMware commercialized virtualization technology, and its open-source cousin Xen became the foundation for Amazon’s public cloud (AWS).

Though virtualization recovered the unused capacity of bare metal, the complexity of having to buy and maintain servers did not decline.

Typical servers in business and enterprise data centers deliver between 5 and 15 percent of their maximum computing output on average over the course of the year. [1]

Jonathan Koomey

The Arrival of Public Cloud and DevOps

While virtualization became a star in the data centers of large enterprises, AWS took commodity hardware, added virtualization and a billing system, and offered the result as a service to companies willing to host applications and pay by the hour. Over time, AWS added popular server infrastructure components as services: storage, networking, databases, and identity management.

Though the problems of buying hardware and of managing and patching the Operating System and its components were outsourced to a public cloud vendor like AWS, the role of DevOps emerged as a critical conduit to orchestrate the deployment of software on the public cloud. The sysadmin of yesterday was reborn as the DevOps engineer of today, minus the management of hardware, network, and OS.

Deploying at scale with DevOps

DevOps was not just a role but also a collection of architectural patterns for scaling up application hardware and software. DevOps became a repository of N+1 redundancy ideas for scaling applications, and developers offloaded the problem of scale-up to DevOps. The DevOps role became critical to configuring these various pieces and ensuring that a missing jigsaw piece could not bring the whole application down.

Turning all deployable code into atomic units

While every other component was available on demand, what was not further broken down was the bundle of code that developers wrote to host an application. A classic web application is logically divided into three tiers: front end, application server, and database. [2]

Image 1: Three-tiers of a web application

The application server consists of several components: a webserver, an app server, a caching server, queuing systems, and identity management, to name a few. These components constitute a monolithic architecture that is tightly coupled and deployed in tandem on production as a single application server, along with business logic code, third-party vendor libraries, and application runtimes. In highly scalable web applications, the middle-tier components are self-contained units that DevOps manages as “moving parts,” each requiring its own patching, upgrades, and monitoring.

Image 2: Components of the middle-tier

On-demand with Serverless

What if the middle-tier components and the database tier became an on-demand service? No management, no upgrades, and always available. This was the genesis of the Serverless architecture. No webserver, no app server, no traditional middleware: just a few lines of code (ok, I exaggerate) that focus on the business logic rather than ancillary knick-knacks. A lot of shenanigans happen in configuring the web server, app server, database, and the many moving parts in between.

Serverless turns all of these pieces into “configure, use, and pay.” Serverless is an architecture in which generic components (such as a database) run as an on-demand service and the business logic is hosted as a function call. There is no manual intervention required for elastic load balancing, no operating system to manage, no firewall ports to configure, no Apache config files to rearrange. The part of Serverless where developers write a function is called Functions-as-a-Service (FaaS). AWS Lambda, Azure Functions, and Google Cloud Functions are popular FaaS platforms. [3][4]
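To make the “few lines of code” concrete, here is a minimal sketch of a FaaS handler in the shape AWS Lambda expects behind an API Gateway proxy integration; the greeting logic itself is a made-up example:

```python
import json

def handler(event, context):
    """Entry point invoked by the FaaS platform for each request.

    `event` carries the request payload; `context` carries runtime
    metadata. No webserver or app server code appears anywhere.
    """
    # Pull a query parameter out of the API Gateway proxy event.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # Only business logic lives here; scaling, routing, and TLS
    # are the platform's problem.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deployed to a FaaS platform, this function is invoked on demand per request and billed per invocation; locally it is just a plain function you can call in a test.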

Image 3: Serverless equivalent of a classic three-tier web application

The illustration above is a three-tier application re-architected using Serverless components. As an aside, if we draw the swimlanes, the concern owners are now replaced by their Serverless equivalents. To illustrate the point, a hosted message queue replaces a self-managed RabbitMQ for messaging needs, and a self-deployed MySQL could be replaced by a database that never needs to be started, stopped, or “deployed.”

Furthermore, FaaS hosts the business logic of the application as a function running in a stateless virtual machine. These stateless virtual machines scale elastically and are fully managed by cloud vendors such as AWS, Google Cloud, Oracle Cloud, and Azure. Other Serverless components and FaaS functions are orchestrated together to build the desired application.
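Statelessness has a practical consequence for developers: nothing kept in a function’s local variables is guaranteed to survive between invocations, so any state must live in an external service. A minimal sketch, with a plain dict standing in for a hosted key-value store such as DynamoDB (the store and the `visit_counter` function are illustrative, not a real API):

```python
# Stand-in for a hosted, serverless key-value store; in production
# this would be an external service like DynamoDB, not process memory.
STORE = {}

def visit_counter(event, context):
    """Count visits per user. Because the function instance may be
    created and destroyed between any two calls, the count is read
    from, and written back to, the external store on every call."""
    user = event.get("user", "anonymous")
    count = STORE.get(user, 0) + 1   # read external state
    STORE[user] = count              # persist it back immediately
    return {"user": user, "visits": count}
```

The design rule this illustrates: a FaaS function should treat every invocation as if it were running on a fresh machine, pushing all durable state to the surrounding Serverless components.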

Conclusion

Serverless brings to life the vision of a truly on-demand computing architecture and, with it, the future of enterprise software innovation. With Serverless, IT goes from sysadmin to DevOps to LessOps, with zero administration. With Serverless, developers can iterate faster and better with code. Write. Test. Deploy. Learn. Write again. As the fighter pilot John Boyd (who coined the term OODA loop) [5] once said: “Whoever can handle the quickest rate of change is the one who survives.” Serverless helps you do that better.

#LessOps #Serverless #Lambda 

References

[1] Right-sizing Data center capital for Cloud Migration, Jonathan Koomey, Stanford Research Fellow

[2] Serverless Architectures, Martin Fowler

[3] Anatomy of a Lambda Function (slides 4-8), James Beswick, Developer Advocate, AWS

[4] Flow of HTTP Traffic in a Serverless architecture, AWS API Gateway documentation

[5] OODA Loop, as discussed in Organic Design of Command and Control by Col. John Boyd

[6] Enterprise Integration Architecture, Mulesoft Training docs

Feature image: Network Engineer at Honeywell Lab (AP Photo/Bela Szandelszky)