The first wave of public cloud adoption was focused on an evolutionary “lift and shift” from on-prem datacenters to similar hardware running in the cloud.

The second wave is nothing less than a reimagining of applications. In this article, we look at the structural changes occurring industry-wide to see what the future of software development will look like.

Software and IT can sometimes feel a bit like gambling—which horse or driver to back? Which vendor or technology will lead the pack after a few more laps?

But once or twice in a generation, some paradigm shifts so significantly that the entire industry is left changed.

Public cloud was just such a tectonic shift. Could we be in the middle of another one?

We’ll take a look at what triggered the last major IT migration and see what current trends and structural changes in application architecture reveal about the next wave…

Public cloud wave #1: Lift and Shift

Public cloud adoption has been one of the most transformative IT changes in the last 20 years. At the turn of the century, most IT managers weren’t familiar with the term “cloud”, public clouds didn’t exist as such, and “scaling your infrastructure” meant placing a hardware purchase order months or years in advance.

Fast forward to today, and a typical computer science graduate may never actually see a physical server.

So thorough has the migration to the cloud been, that an entire generation of software engineers knows as little about how servers get racked and stacked as they do about the VLSI layout of the chips on which their software runs, or the physical layout of their data inside an SSD.

While the impact on physical data center ownership and aggregation has been immense, the business outcomes of this first wave of cloud adoption have also been significant: enterprises enjoy the cost and efficiency advantages of economies of scale, such as amortized HVAC and physical security costs and greater purchasing power.


Perhaps more importantly, they gained access to burstable resources—it’s possible to access petabytes of storage and tens of thousands of servers for a few hours, then return them all to the cloud vendor.

Flash sales in ecommerce, massive Monte Carlo simulations, complex CGI graphic rendering… many of the “burst compute” outcomes we take for granted today were somewhere between difficult and impossible to achieve (at least, in a broad and cost-effective fashion) prior to the existence of the modern public cloud.

As exciting as this transformation has been for both the bottom line and innovation opportunities, it pales in comparison to what’s happening now.

Public cloud wave #2: Managed applications

For all its benefits, the programming model of the public cloud started out as a clone of the private data center model.

This was intentional; the goal of a cloud vendor like AWS was first and foremost to drive cloud adoption, and only secondarily to reimagine how developers should architect their applications.

Early managed services, like Amazon S3, were seen as adjuncts to “normal” disk-based storage. In fact, many of these early services were opportunistic, such as taking an in-house service already in use by Amazon’s retail developers and spinning it out as a cloud service.

But these newfangled services remained a modest part of the offering in the early days of the public cloud—the revenue came largely from virtualized infrastructure in the form of servers and disk drives.

Over time, however, application developers noticed something interesting about the parts of their applications that ran on these “managed” services: they tended to be easier to construct (and thus faster to deliver), often cost less to build than their DIY counterparts (even when those were built in the public cloud), and, oddly enough for something offering those benefits, were often less expensive to operate as well.

Amazon’s early managed services—S3’s object storage, SQS’s queues, and SNS’s messages—had what was then a revolutionary property from a conventional IT perspective: an application-level “pay only for what you use” billing model.

Didn’t send any messages this month? Your SNS bill would be $0. That might sound mundane today, but to an IT director used to paying millions to develop in-house pub/sub solutions and then paying for their infrastructure on an ongoing basis, this was a shocking new way to think about application technologies.
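To make that contrast concrete, here's a minimal sketch in Python. The prices are illustrative placeholders, not actual AWS rates; the point is only the shape of the two billing models:

```python
# Illustrative comparison of pay-per-use vs. fixed-infrastructure billing.
# The prices below are made up for this example; real AWS pricing differs.

PRICE_PER_MILLION_MESSAGES = 0.50    # hypothetical pay-per-use rate
FIXED_MONTHLY_INFRA_COST = 2_000.00  # hypothetical self-hosted pub/sub cluster

def pay_per_use_bill(messages_sent: int) -> float:
    """The bill scales with actual usage; zero usage means a zero bill."""
    return messages_sent / 1_000_000 * PRICE_PER_MILLION_MESSAGES

def fixed_infra_bill(messages_sent: int) -> float:
    """The bill is the same whether or not any messages were sent."""
    return FIXED_MONTHLY_INFRA_COST

print(pay_per_use_bill(0))           # 0.0 -- no messages, no bill
print(pay_per_use_bill(10_000_000))  # 5.0
print(fixed_infra_bill(0))           # 2000.0 even when completely idle
```

Trivial as the arithmetic is, it captures the shift: the idle-month bill drops from a fixed infrastructure cost to zero.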

In the early days, these managed services were few and far between, and there was no way to do compute, so they felt more like shortcuts for a few application needs (like messaging or object storage) than a “platform”.

No matter what you were building, you still needed a server to do anything meaningful in the cloud.

The launch of AWS Lambda and the beginning of the second wave

All this changed in November of 2014 when AWS Lambda was launched. For the first time, a major cloud vendor had supplied not just a few helpful application services but a full-fledged programming model constructed entirely from managed pieces.

With a managed compute offering added in, you could suddenly build (at least some) applications without ever directly touching a server or a drive, by coupling S3’s storage, Lambda’s compute functions, and other managed services together.

By offering a simple eventing service alongside its existing API-driven services, AWS also made it possible to build reactive solutions in a managed way for the first time: drop an image into S3 or add a row to an Amazon DynamoDB table and suddenly you could trigger a Lambda function without writing any code to do so.

Moreover, it scaled and logged automatically—the equivalent of millions of dollars of engineering investment and tens-to-hundreds of thousands of dollars of infrastructure reduced to a config file.
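From the function's side, such a trigger is just data. Here's a hedged sketch of a Lambda handler (Python; the bucket and key names are invented for the example) that receives an S3 "object created" event in the standard notification shape and extracts the affected objects:

```python
# Minimal sketch of a Lambda handler reacting to S3 object-created events.
# The event follows the standard S3 notification format; a real function
# might resize an image or index the object, but here we just collect
# (bucket, key) pairs to show the plumbing.

def handler(event, context=None):
    objects = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        objects.append((bucket, key))
    return {"processed": objects}

# Local smoke test with a hand-built event in the S3 notification shape:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "photos"}, "object": {"key": "cat.jpg"}}}
    ]
}
print(handler(sample_event))  # {'processed': [('photos', 'cat.jpg')]}
```

Note what's absent: no polling loop, no queue consumer, no server process. The cloud delivers the event and the code only expresses the reaction.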

As promising as this idea was—and it certainly garnered a lot of attention—the early days of serverless offerings from AWS and other cloud vendors were as notable for their shortcomings as for their accomplishments.

Say you wanted to build an automatic image resizer or put an API and some code in front of a DynamoDB table. No problem; all the ingredients were there for you. But if you wanted to build a hybrid app that needed to access resources behind a VPC or VPN, or needed to work with a MySQL or Postgres database, you weren’t so lucky.

Such connectivity varied from outright impossible at first to merely painful a couple of years later. Tools and application monitoring were similarly bare-bones in those early days, and developers who chose to wait on adopting serverless approaches until the tooling matured had a valid point.

Behind the scenes

When Lambda was first proposed as a service, the idea met a good deal of internal resistance. AWS had never offered anything quite like it, but it had also never built anything quite like it: Lambda was the first time an AWS service was built on top of “public pieces”—EC2, EBS, S3, DynamoDB, SQS, and so forth.

While that might seem like a pretty obvious technique in 2020, in 2012 it was anything but. The initial version of Lambda in production wasn’t designed to maximize profitability or be easy to manage; in fact, keeping it running wasn’t cost-effective in the early months in any dimension.

But by the time a couple of years had elapsed, a clear trend was emerging: Lambda was one of the fastest-growing services and, while still small relative to EC2, it offered AWS an opportunity to change the game.

Infrastructure was a commodity, and competing on infrastructure was a race to the bottom on pricing. With managed services, however, innovation could create both differentiated advantage and an economic premium for those outcomes.

Once Lambda was launched, managed services changed internally at AWS. They were no longer isolated businesses that just happened to be a little easier for devs to use; now, they were an entirely new application platform and business model.

This shift in thinking about the business opportunity was key, and it caused the entire company to double down on managed services and serverless.

What happened next was staggering. Not only did adoption of existing managed services balloon, but AWS began a massive effort both to create more managed services and to provide more and better connections between them.

AWS Step Functions for choreographing applications, Amazon Aurora Serverless and managed database connectors to make using SQL data easy and scalable within serverless apps, serverless event hubs and application repos, serverless build and deploy toolchains—AWS has relentlessly fleshed out its portfolio, offering a large suite of building blocks for creating almost any type of application out of managed components.

At the same time, existing managed services, such as Lambda, grew in scale, optimized their performance and cost efficiency, and gained a dramatic set of new features.

AWS has pursued a simple and powerful vision with these investments: expand the “addressable market” of serverless to include every vertical, application type, and industry imaginable.

Features like support for “bringing your own language” in Lambda and easy ways to handle failed events have raised the level of abstraction for serverless programming, and the bar for serverless computing in general, over the years.

Behind the scenes, innovations like AWS Firecracker enabled tighter packing of Lambda functions, improving its fundamental economics and making it an ever-more attractive alternative to the server rental business.

Meanwhile, competitors weren’t standing still—Azure, for instance, created interesting new serverless programming abstractions by coupling serverless functions to stored objects, logic apps, and persistent state. The era of serverless innovation had arrived.

The new hybrid cloud: Containers, meet serverless

There are many ways to measure the efficacy of a software architecture: its cost, the time it takes to go from requirements to a production solution, the lines of code required to implement it, the number of people required to keep it operational, and so on.

By many of these measures, serverless was succeeding handily compared to more conventional software architectures.

Without infrastructure to configure, scale, or monitor, and with cross-service events handled by the cloud vendor instead of by application code, applications were generally easier to write, faster to get to market, and less expensive to maintain thereafter.

However, there’s another measure of success for a platform: how well it incorporates existing solutions and practices.

And here, serverless hit a speed bump: for all its advantages, architecting an application by composing managed services required throwing away many of the tools and techniques that had worked in the past.

Nowhere was this more obvious than when it came to loading and initializing libraries… the infamous “cold start” problem.

To understand the problem, and how it nearly relegated serverless to the backwaters of computing history, it’s helpful to know a little about how a conventional (“serverful”) application works.

To start it up, or to add capacity, or to replace a faulty machine, an additional server has to be allocated from the cloud vendor—“booted”—have the application code installed on it, and then have its application processes start up.

All of this can take a while—as long as five to ten minutes. And because it takes so long, generally speaking, developers don’t worry about exactly how long; a minute more or less to load a bunch of libraries, maybe even some that aren’t strictly necessary, is no big deal.

Now, contrast this with a serverless function, such as an AWS Lambda invocation. Every invocation requires the function’s code to be available, and if it isn’t already cached, that code and its processes have to be loaded and started first.

In the initial release of Lambda, a function could only run for a minute… not a happy situation if it takes five minutes to load and start the code!

Even today, a Lambda function can only run for 15 minutes, so consuming five minutes of that time just getting things ready means a huge percentage of its cost would be wasted… to say nothing of the latency impact on the application.

By “going serverless”, developers suddenly had to optimize something that was never important before, plus they lost control over where and how to optimize for latency-sensitive calls.
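One common mitigation, sketched below in Python with invented names, is to hoist expensive initialization to module scope. In the Lambda execution model, module-level code runs once per container (at cold start), so the cost is paid once and amortized across subsequent warm invocations:

```python
# Sketch: expensive setup at module scope runs once per container (the cold
# start); warm invocations of the handler reuse it rather than re-initializing.

INIT_RUNS = 0  # counter to demonstrate that initialization happens only once

def _expensive_init():
    """Stand-in for loading heavy libraries, models, or opening connections."""
    global INIT_RUNS
    INIT_RUNS += 1
    return {"db": "connected", "model": "loaded"}

RESOURCES = _expensive_init()  # paid once, at cold start

def handler(event, context=None):
    # Reuses RESOURCES; no per-invocation setup cost.
    return {"init_runs": INIT_RUNS, "status": RESOURCES["db"]}

print(handler({}))  # even after repeated calls, init_runs stays at 1
print(handler({}))
```

This doesn't eliminate the cold start, but it confines the cost to the first invocation of each container instead of every call.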

Vendors reacted to this limitation by creating “hybrid” features in their serverless functions. Both Azure and AWS now make it easy to reserve a fixed amount of capacity in advance, guaranteeing low latency, even for functions that require longer startup times due to library loading or other initialization overhead.

Demanding tasks, such as running a flash e-commerce sale, that require vast resources for short periods of time can now be scheduled in advance, with the same kind of guaranteed latency that a conventional server-based application would offer.

Meanwhile, cloud vendors were concerned with another kind of challenge: interoperability with containers. Alongside serverless, containers (and their most widely adopted orchestration platform, Kubernetes) have become the vanguard of modern microservice architectures.

Containers brought portability, high fidelity desktop and laptop testing, and streamlined development and deployment tools… capabilities that many serverless developers looked at enviously. Was it possible to get some of the benefits of both approaches?

AWS Fargate and Google Cloud Run set out to do just that. They’re indisputably hybrid solutions—they hide infrastructure details, but unlike a “pure-play” serverless application driven by events or API calls, you have to tell these two services how to scale at the application level.

But in exchange, you get to use a conventional programming model, leverage existing tools and processes, and keep much of what works in existing server-based deployments while getting closer to a serverless model.

Are we there yet?

Despite the vast amount of progress from every cloud vendor on managed services and serverless options over the last six years, the innovation driving the second wave of cloud applications is anything but over.

New services, better tools, easier and more automated testing and monitoring… there’s still plenty to be done, both from the cloud vendors and from the growing third-party ecosystem.

There are also still tough problems to solve. Despite improvements in connecting to existing systems, solutions for cold starts, and easier ways to bring container images along for the serverless ride, developers and CIOs alike still have one large remaining concern: vendor lock-in.

The very thing that excited AWS leadership worries developers who might one day have to port an application between clouds.

There’s some good news here, to be sure: API-driven managed services make it far easier to “mix and match” best-of-breed solutions than packaged software ever did.

For example, hooking up an S3 event and using it to trigger a Google or IBM Watson ML training set is far easier with managed services than it would be if you had to DIY it all from scratch, even as it crosses vendor lines.

But there’s still more to do, and it’s undoubtedly the case that servers and containers are much further along in standardizing cross-vendor surface area, an idea that has only recently started to gain traction with serverless APIs and event schemas.

The future’s already here… it’s just not evenly distributed yet

As recently as 2013, analysts, CIOs, and pundits alike were all still hotly debating whether the cloud would “work”. Looking back now, that initial ‘lift and shift’ phase of cloud adoption had clearly already reached a tipping point by then, and the rest is, as they say, history.

We’ll look back in a similar way at 2019 as the year serverless—the second wave of the cloud—became inevitable.

With the expansion of the programming model, the appearance of powerful hybrid solutions, and the internal efficiencies of technologies like AWS Firecracker driving profitability, serverless applications have become the new norm.

Still looking for proof? Here’s a fun, but also quite meaningful metric: chart the percentage of AWS re:Invent talks on anything to do with “serverless” or one or more managed services.

From ~2% in 2014 to the majority of presentations today, that trend line—and its implications for how we’ll develop applications for years to come—is anything but cloudy.


About the author
Tim Wagner is a cloud industry veteran and the original spearhead of the serverless movement. As the founder of Vendia, Tim builds cutting-edge infrastructure and services that make applications faster, simpler, and more scalable. The former general manager of AWS Lambda, Tim helped shape the service into the predominant serverless computing technology it is today. Tim then took on the role of VP of Engineering at the digital currency exchange platform Coinbase where he led on blockchain innovation.