Virtual machines, containers, and serverless: Compute options on AWS

By Nicola Wright

If you want to build and deploy applications in the cloud, it all starts with compute.

The benefits of outsourcing your compute clout to a major cloud vendor are extensive. In short, the agility, speed, and potential cost reductions that cloud computing power brings allow a business to concentrate on innovating instead of worrying about hardware and infrastructure.

Utilizing cloud-based compute services means that your big processing “brain” is centralized, can scale without additional hardware investment, and benefits from all the muscle of enterprise-grade servers.

Taking advantage of some form of virtualization means businesses can develop and run their software in a scalable, highly available, and secure production environment.

As the world’s biggest cloud vendor, it should come as no surprise that AWS offers a range of compute types for a wide variety of use cases.

Compute types

So with options like virtual server hosting, container management, and serverless computing on the table, how do you choose the right one?

The first step is understanding the difference between the main compute types.

What is a virtual machine?

A virtual machine, or VM for short, does what it says on the tin: it imitates a real, physical computer in that it can run apps and programs, but without being tied to dedicated physical hardware of its own.

A VM is often called an “image,” and behaves like a tangible computer. A VM runs on top of a hypervisor, which itself runs either on a host operating system or directly on “bare-metal” hardware.

Hypervisor types

  • Type 1: Native or bare-metal. Runs directly on top of the physical host, sandwiched between the hardware and the VMs.
  • Type 2: Hosted hypervisor. Sits on an operating system, creating an additional layer between the hypervisor and the host.

Also known as a virtual machine manager, the hypervisor is the software, firmware, or hardware that enables you to create and run VMs.

It acts as the bridge between the physical hardware, or host, and the VMs themselves (sometimes called guests).

The hypervisor insulates the VM, ensuring it runs in isolation and thus preventing it from affecting the host, or any other VMs running on it.

Because they’re a quarantined environment, VMs are perfect for testing, safely examining infected data, creating backups, and trying out app compatibility on different operating systems.

Unlike traditional operating systems, a hypervisor enables multiple “computers” to run on the same physical platform.

A hypervisor also manages all the underlying virtual infrastructure—like the CPU, memory, hard drive, and network interface—of the numerous VMs running on it.

Every VM has its own virtual hardware, like CPUs, memory, hard drives, and network interfaces. You need a significant amount of processing power to run VMs, as you’re essentially operating multiple “computers” at once.

For this reason, most organizations that use VMs host them on the servers of major cloud vendors like AWS.

Using VMs means you can consolidate applications onto a single server, even if they run on different operating systems.

This can cause issues, however: because each VM has its own OS, you can run up significant overheads in terms of resource throughout the development lifecycle.


What is a container?

Containers are also a type of virtualized compute but are slightly different from VMs in that they’re (wait for it) more self-contained.

A Linux-based, standardized unit that holds everything you need to run a piece of software, a container rolls up the code, configurations, and dependencies that an app needs so that it can be dropped and run anywhere.

Because they’re boxed up into a single package, containers are quick to start up and stop, and easy to move between environments.
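
To make that concrete, here’s a minimal sketch using the Docker SDK for Python (the third-party docker package). It assumes Docker is installed and running locally, and the image and command are purely illustrative.

import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Pull a public image and run a one-off command inside it.
# remove=True cleans the container up as soon as it exits.
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from a container')"],
    remove=True,
)
print(output.decode())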

You can quickly shift your containerized app from a sandbox to a staging environment to live production, or from a physical machine to the cloud, safe in the knowledge that the program will run reliably and consistently.

Unlike VMs, containers share the host’s operating system with other containers, making them smaller and allowing more workloads to be run on a single OS. Containers also use fewer resources than VMs.

Though both options provide an environment to deploy, manage, and scale applications, containers can do it in a more streamlined way.

We asked Trevor Sullivan, a software professional with more than 15 years of industry experience, to break down the key differences in performance between a VM and a container.

“A container doesn’t have its own kernel, and performs isolation at the process level, instead of at the hardware level,” explains Trevor.


“Because containers don’t have the same dependencies that an entire operating system has, they start up much more quickly than traditional virtual machines, typically measured in milliseconds instead of seconds.”

Trevor Sullivan — AWS Certified Solutions Architect


“Containers provide an additional layer of isolation on top of virtual machines, which means you can scale up applications on fewer VMs, on the same hardware. By utilizing fewer VMs, you have less management and monitoring overhead of your total infrastructure.”

What is serverless computing?

Serverless computing, despite the name, doesn’t mean there are no servers involved at all; it just means you don’t have to worry about them at your end.

Serverless allows you to write and configure an app, and then upload this parcel of code to a cloud server, usually one operated by a large cloud vendor like AWS or Microsoft. The cloud vendor then charges you based on the amount of time each app runs on its servers.

Using serverless platforms means developers don’t have to think about having the right amount of bandwidth or servers to deploy their apps; all of this back-end admin is taken care of by the vendor, with your compute footprint automatically scaled to meet the needs of your app.
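
As a rough illustration of what that parcel of code looks like, here’s a minimal Python handler of the kind you’d hand to a serverless platform such as AWS Lambda. The function’s logic and the shape of the event are illustrative assumptions; Lambda simply calls whichever handler you configure with an event and a context object.

import json

def lambda_handler(event, context):
    # "event" carries whatever triggered the function: an API request,
    # a file upload notification, a scheduled timer, and so on.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }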

Each of these compute types is available on AWS. Here’s a rundown of the virtualization, container, and serverless products that AWS offers on its platform.

Virtualization on AWS

Amazon EC2

Amazon EC2 is the most popular of AWS’s mammoth stable of services. EC2 gives organizations access to a range of virtual computing environments—known as instances—with different compute, memory, storage, and network capabilities that can be scaled up and down as needed.

With Amazon EC2, you can launch as many or as few of these instances as you require, and retain full control over security, storage, and networking configurations. You can also opt to use preconfigured instance templates called Amazon Machine Images (AMIs).
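
As a sketch of what launching an instance looks like in practice, here’s a hedged example using boto3, the AWS SDK for Python. The AMI ID and tag values are placeholders, not real resources, and suitable credentials are assumed to be configured.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch a single small instance from a placeholder AMI.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Amazon Machine Image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-instance"}],
    }],
)
print(response["Instances"][0]["InstanceId"])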

Amazon EC2 offers instances optimized for a range of use cases, including high-performance computing, real-time big data analytics, machine learning, and data warehousing.

Using EC2 means that you are responsible for allocating capacity, monitoring performance, and designing for fault tolerance and scalability.

VMware Cloud on AWS

VMware Cloud on AWS is an integrated cloud offering, developed in partnership between AWS and VMware.

It’s a highly scalable and secure service that allows organizations to migrate or expand their on-premises VMware vSphere-based environments to the AWS Cloud. Running on Amazon EC2 bare-metal infrastructure, it’s an ideal solution for organizations running vSphere-based workloads in-house that want to migrate those workloads to the cloud or extend their data center capacity.

AWS Firecracker

AWS Firecracker is a newcomer to the field that enables users to create microVMs. An extremely efficient, open-source compute option, Firecracker is what AWS itself uses to run its Lambda service. Right now, no other cloud provider has anything similar.


“Taking our Invent and Simplify principle to heart, we asked ourselves what a virtual machine would look like if it was designed for today’s world of containers and functions.”

AWS Chief Evangelist Jeff Barr on AWS Firecracker


“Firecracker is unique from other virtualization solutions, such as VMware or Hyper-V, in the sense that it tightly controls the attack surface of VMs created by it,” explains Trevor.

“There are only four virtualized hardware devices: networking, block storage, a serial console, and a one-button keyboard. By eliminating unnecessary hardware device virtualization, the number of possible attack vectors are drastically reduced, as compared with competing virtualization solutions.”

“Although Firecracker is written in the Rust language, it exposes a simple REST API, making it easy to call into it from any other language. Using this REST API, you can create and destroy “microVMs” with ease, and freely configure the amount of virtual CPUs (vCPU) and memory that is allocated to each microVM.”

Its flexibility is another factor that Trevor believes makes it appealing to users: “Firecracker is not bound to AWS, so you can actually use it to run VMs on your own hardware, such as a development laptop, bare metal server, or customer-owned hardware in a co-location data center.

“The flexibility to run Firecracker inside or outside AWS makes it easy to perform development work wherever your team is most comfortable, and then deploy to a heterogeneous cloud and non-cloud environment.”
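
To give a feel for the REST API Trevor describes, here’s a rough sketch that sizes and boots a microVM over Firecracker’s Unix socket, using the third-party requests-unixsocket package. The socket path, kernel image, and rootfs paths are assumptions for illustration, and a firecracker process is assumed to be listening on that socket already.

import requests_unixsocket

session = requests_unixsocket.Session()
base = "http+unix://%2Ftmp%2Ffirecracker.socket"  # /tmp/firecracker.socket, URL-encoded

# Size the microVM: 2 vCPUs and 1 GiB of memory.
session.put(base + "/machine-config",
            json={"vcpu_count": 2, "mem_size_mib": 1024})

# Point it at a kernel and a root filesystem image (paths are placeholders).
session.put(base + "/boot-source",
            json={"kernel_image_path": "vmlinux",
                  "boot_args": "console=ttyS0 reboot=k panic=1"})
session.put(base + "/drives/rootfs",
            json={"drive_id": "rootfs",
                  "path_on_host": "rootfs.ext4",
                  "is_root_device": True,
                  "is_read_only": False})

# Boot it.
session.put(base + "/actions", json={"action_type": "InstanceStart"})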


Containers on AWS

If you want to take advantage of containers on the AWS cloud, you have several options at your disposal. AWS offers three types of container tools: registry, orchestration, and compute.

  • Registry services allow users to store and manage container images
  • Orchestration services help you administer where and when your containers run
  • Compute services power the containers

Amazon Elastic Container Registry

Amazon Elastic Container Registry (ECR) is a managed registry for use with Docker container images.

What is Docker? 

Docker is an open-source tool built to make it easy to create, deploy, and run applications using containers. Both open-source and subscription Docker models can be run on AWS, using a range of AWS services.

Amazon ECR provides developers with the infrastructure they need to store and manage container images. It compresses and encrypts container images so that they’re quick to start up, and offers a reliable foundation that allows containers to run anywhere.

Like many of AWS’s other managed services, it scales automatically, so developers will always have the capacity their containers need without having to deal with expanding the underlying infrastructure manually.
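
For a sense of how that looks in code, here’s a minimal boto3 sketch that creates a repository and fetches the temporary token that docker login uses to push and pull images; the repository name is a placeholder.

import boto3

ecr = boto3.client("ecr", region_name="us-east-1")

# Create a repository to hold an application's container images.
repo = ecr.create_repository(repositoryName="my-app")
print(repo["repository"]["repositoryUri"])

# Retrieve a temporary authorization token for pushing and pulling images.
token = ecr.get_authorization_token()
print(token["authorizationData"][0]["proxyEndpoint"])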

Amazon Elastic Container Service

Amazon Elastic Container Service (ECS) is a scalable, high-performance container orchestration service.

Like Amazon ECR, it’s built to support and power Docker container images on AWS. With ECS, developers don’t need to install and manage their own orchestration software or use virtual machines to run containerized apps or create microservices.

Using API calls, ECS lets you start, stop, and query Docker-enabled apps, and gives you access to features like IAM roles, security groups, load balancers, Amazon CloudWatch Events, AWS CloudFormation templates, and AWS CloudTrail logs.
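
Here’s a hedged boto3 sketch of those start, query, and stop calls. The cluster and task definition names are placeholders, and the task definition is assumed to have been registered already.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Start a task from a registered task definition on an existing cluster.
task = ecs.run_task(cluster="demo-cluster", taskDefinition="web-app:1")
task_arn = task["tasks"][0]["taskArn"]

# Query the task's status, then stop it.
status = ecs.describe_tasks(cluster="demo-cluster", tasks=[task_arn])
print(status["tasks"][0]["lastStatus"])
ecs.stop_task(cluster="demo-cluster", task=task_arn, reason="demo finished")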

Amazon Elastic Kubernetes Service

Amazon Elastic Kubernetes Service (EKS) is another orchestration service, designed to make it easy to deploy, manage, and scale containerized apps on AWS using Kubernetes.

What is Kubernetes?

Kubernetes is container orchestration software. While Docker is a platform for building and running containerized applications, Kubernetes helps manage “clusters” of distributed container hosts across a range of environments, and automatically scales, updates, and removes containers.

You don’t need Kubernetes to use Docker containers, and Kubernetes doesn’t only work with Docker; it can be used to manage all kinds of container technologies. In fact, Docker has its own Kubernetes alternative, Docker Swarm, though Kubernetes tends to be the most popular option for managing containers.

Amazon EKS operates the Kubernetes management infrastructure across multiple AWS availability zones to increase reliability.

It’s certified as being compatible with Kubernetes technology, so existing Kubernetes tools and plugins will run seamlessly on AWS, and can be easily migrated if necessary.
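
As a rough sketch, standing up an EKS control plane with boto3 looks something like the following; the IAM role ARN and subnet IDs are placeholders for resources you’d already have in place.

import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create the managed Kubernetes control plane.
eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",  # placeholder
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],    # placeholders
    },
)

# Check provisioning status; once ACTIVE, point kubectl at the cluster.
print(eks.describe_cluster(name="demo-cluster")["cluster"]["status"])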

AWS Fargate

AWS Fargate is a compute engine designed to power Amazon ECS. Fargate enables users to run containers without needing to manage servers or clusters.

With AWS Fargate, there’s no need to provision or configure clusters of VMs to run containers.

Fargate does all the choosing, scaling, and optimizing of servers for you, eliminating the need to administer the infrastructure that your containers run on.
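
In practice, running a container on Fargate is mostly a matter of choosing the FARGATE launch type and supplying networking details, as in this hedged boto3 sketch; every name and ID here is a placeholder.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Run a task without provisioning any EC2 instances; Fargate supplies the compute.
ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="web-app:1",   # a Fargate-compatible task definition
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-aaaa1111"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)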

Serverless computing on AWS

AWS Lambda

AWS Lambda is one of AWS’s core services: a “Function as a Service” platform that offers serverless computing facilities, allowing developers to run their code without having to spin up their own server, container, or VM.

Lambda provides computing resource while removing the need for users to worry about the infrastructure their apps are running on: simply add the code to Lambda and off it goes.

These parcels of code that run on Lambda are known as “functions.” When a function is created, it’s rolled up into a new container. Lambda then calculates how much resource it needs to run, and executes the function on a multi-tenant cluster of AWS servers.

The service is event-driven, meaning it only turns on the meter when your code is actively running, helping save resources. It’s worth noting, however, that Lambda can only be used to run background tasks.

Unlike with Amazon EC2, with Lambda, you don’t have to provision your own instances; Lambda undertakes all the “back end” tasks like capacity provisioning, monitoring performance, and applying security patches for you.
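
To round this out, here’s a minimal boto3 sketch that invokes an existing function and reads back its response; the function name and payload are placeholders.

import json
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

# Invoke the function synchronously with a small JSON payload.
response = lam.invoke(
    FunctionName="hello-function",                 # placeholder function name
    Payload=json.dumps({"name": "AWS"}).encode(),
)
print(json.loads(response["Payload"].read()))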


Which one is right for you?

VMs, containers, and serverless platforms are often pitted against each other in terms of which is the best option for rapid development and deployment.

Given the massive number of use cases that these compute options can be applied to, however, there are situations where any one of the above choices could be the right one.

Containers, for example, are best for executing complex applications that require a granular level of control over the environment they’re running in. In these cases, you’ll need to have the resources to provision and manage the application.

Containers can also help you migrate hefty legacy apps to AWS by cutting these apps into containerized “chunks” as microservices and orchestrating them using Amazon EKS.

Serverless computing, on the other hand, is best for background apps that need to be ready to go at a moment’s notice, but don’t run continuously, like IoT apps. Serverless can also help minimize costs or take care of scaling concerns where the use of VMs will eat up too much resource.

There are cases in which one might work better than the other. With AWS Lambda, for instance, functions must stay within pre-defined limits on package size, memory usage, and running time, and can only be written in a set of supported programming languages.

Given these restrictions, relying on serverless alone for your app development can become limiting.

Just as the existence of containers doesn’t make virtual machines completely obsolete, the rise of serverless computing doesn’t negate the use of containers. They can be leveraged in tandem to help you achieve your compute objectives and to give you the best environment for development and deployment.

So how do businesses make the right choice when it comes to cloud compute options? According to Trevor, much of the decision comes down to how flexible a company wants to be in an increasingly multi-cloud world.

“One of the biggest deciding factors in selecting a cloud-based compute solution, that I have personally observed in the industry, is how comfortable a business is with vendor lock-in,” says Trevor.

“Some businesses wish to maintain the maximum amount of flexibility in their choice of cloud vendors. The concept of multi-cloud has been increasingly common over the past several years.

“Some enterprise deployment tools, such as Kubernetes, can be used across any cloud vendor, or multiple cloud vendors simultaneously, to run production workloads in a highly-available fashion. The same cross-cloud portability is generally applicable when running virtual machines with Windows Server or Linux operating systems.

“On the other hand, some businesses are more comfortable with tying their applications to proprietary APIs from cloud vendors, such as AWS. AWS Lambda, AWS ECS, and AWS Fargate are examples of cloud services that provide value from a faster software development lifecycle.

“However, the caveat is that their proprietary APIs are not portable to competing cloud vendors. In the event a business decides to switch from one cloud vendor to another, they would be forced to re-develop their applications that are using proprietary APIs.”

Whatever compute option you settle on for the task at hand, you can be sure you’ll find the resource you need on AWS.
