Organizations’ adoption of virtualization is on the rise. In its 2020 State of Virtualization Technology report, Spiceworks found that more than half of businesses anticipated they would begin using storage virtualization and application virtualization by 2021. The survey also found that desktop, data and network virtualization technologies would all experience double-digit growth over the next couple of years.
As they deepen their embrace of virtualization, organizations might decide they want to adopt software technologies that run in a virtualized environment. Two of the most common of these technologies are containers and virtual machines (VMs). This raises an important question: which technology is right for them?
This blog post will define both containers and VMs. It’ll then explain why many organizations could actually benefit from using the two technologies together.
According to Kubernetes, containers are software packages that contain entire runtime environments. Administrators can create standalone containers, or they can use a container orchestration service such as Kubernetes to deploy multiple containers at once. A properly configured Kubernetes environment can respond to changes in load and availability by automatically starting or stopping containers. It can also recreate containers on another cluster node if the node they were running on fails.
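To make the orchestration idea concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the app name and image are hypothetical). The `replicas` field asks the cluster to keep three copies of the container running; if a node fails, Kubernetes reschedules those containers onto healthy nodes:

```yaml
# Hypothetical sketch: Kubernetes keeps three replicas of this
# container running and reschedules them if a node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app            # hypothetical application name
spec:
  replicas: 3              # desired number of running containers
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web
        image: nginx:1.25  # any container image
        ports:
        - containerPort: 80
```

Applying this manifest with `kubectl apply -f` hands the desired state to Kubernetes, which then does the starting, stopping, and rescheduling described above.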
Some of the benefits of using containers include the following:
- Lightweight: Containers sit on top of a physical server and its host OS, notes NetApp. They share the host OS kernel along with binaries and libraries, which makes containers lightweight in the sense that they don’t take up much storage. In fact, organizations can usually tailor containers to include just the services needed to run an app, saving crucial system resources and reducing management overhead.
- Portable: Containers bundle an application’s dependencies rather than relying on the underlying host infrastructure. They are therefore portable in the sense that organizations can deploy them across different cloud and OS environments.
- Scalable: Their small size and portable deployment model make it easy for organizations to scale up their container environments without spending too much money. This makes containers a useful technology in many organizations’ digital transformations.
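The “lightweight” point above can be illustrated with a minimal Dockerfile sketch (base images and paths are illustrative, not prescriptive). A multi-stage build compiles the app in a full-featured image, then ships only the binary on a small base, so the final container carries just what the app needs:

```dockerfile
# Hypothetical multi-stage build: compile in a full image,
# ship only the resulting binary on a minimal base image.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

FROM alpine:3.19              # small base; no full OS installation
COPY --from=build /app /app   # copy only the compiled binary
ENTRYPOINT ["/app"]
```

The resulting image is typically a few dozen megabytes rather than the gigabytes a full OS image would occupy, which is exactly the resource saving the bullet describes.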
Even so, containers do suffer from certain issues. For instance, phoenixNAP notes that containers are not truly isolated from one another; they are isolated only at the process level. Because every container shares the host’s kernel, a malicious actor who compromises that kernel could affect all of the containers running on it. Additionally, containers delete all of their data once they shut down after performing their tasks. Organizations can preserve that data using data volumes, but doing so might require a bit of manual configuration and provisioning on the host.
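Named volumes are one way Docker addresses the data-loss issue. A hedged docker-compose sketch (service name, image, and volume name are all hypothetical) shows a volume that outlives the container it is attached to:

```yaml
# Hypothetical compose file: the named volume "db-data" is managed
# by Docker and survives even if the container is removed.
services:
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data  # data persisted outside the container
volumes:
  db-data:                                # declared here; provisioned by Docker
```

Deleting and recreating the `db` container leaves `db-data` intact, which is the manual-but-workable persistence arrangement the paragraph describes.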
VMware defines a virtual machine (VM) as a “compute resource that uses software instead of a physical computer to run programs and deploy apps.” In contrast to containers, VMs run a complete operating system including the kernel. They can run just about any OS, in fact. With that in mind, organizations can use the Windows Admin Center or Hyper-V Manager to deploy individual VMs. Alternatively, they can use PowerShell or the System Center Virtual Machine Manager to deploy multiple VMs at once.
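As a sketch of the PowerShell route mentioned above, a short loop on a Hyper-V host can create several VMs at once. This is illustrative only (VM names, memory size, and disk paths are hypothetical), and it requires the Hyper-V PowerShell module and administrator rights:

```powershell
# Hypothetical sketch: create three Generation-2 VMs on a Hyper-V host.
# Requires the Hyper-V module; run in an elevated PowerShell session.
1..3 | ForEach-Object {
    New-VM -Name "lab-vm-$_" `
           -Generation 2 `
           -MemoryStartupBytes 2GB `
           -NewVHDPath "C:\VMs\lab-vm-$_.vhdx" `
           -NewVHDSizeBytes 40GB
}
```

The same cmdlets can be scripted further (networking, processor counts, and so on), which is what makes PowerShell attractive for deploying many VMs at once.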
VMs provide several benefits to organizations. These include the following:
- Isolation: As noted by Microsoft, VMs provide complete isolation from the host OS and other VMs. This makes VMs particularly useful when it’s important to maintain a secure boundary between different applications on the same server or cluster.
- Simple testing: Administrators don’t need to learn much in order to perform testing on a VM. TechRepublic explains that it’s as simple as creating a new VM, installing an OS and getting it to work. Once they have a working VM, they can clone it and/or create snapshots, enabling them to roll back to a working iteration if something goes wrong.
- App piling: Per TechRepublic, administrators can also use VMs to pile applications onto a single machine by installing any services they need. If they later require another service, they can install it on the running guest VM. They can thereby create an ever-evolving platform with all the services they need.
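The snapshot-and-rollback workflow from the testing bullet above can be sketched with Hyper-V’s PowerShell cmdlets (the VM and checkpoint names are hypothetical, and the commands assume a Hyper-V host):

```powershell
# Hypothetical sketch: checkpoint a VM before a risky change,
# then restore the checkpoint if the change goes wrong.
Checkpoint-VM -Name "lab-vm-1" -SnapshotName "before-upgrade"

# ...perform the risky change inside the VM; if it fails, roll back:
Restore-VMSnapshot -Name "before-upgrade" -VMName "lab-vm-1" -Confirm:$false
```

This is the “roll back to a working iteration” safety net that makes VMs so convenient for testing.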
But as with containers, VMs have their shortcomings. Because each VM requires its own OS installation, administrators need to install updates on each VM, and deploying a new OS means updating the VM’s existing OS or creating another VM. This can consume a lot of time, especially when they’re running numerous VMs. The presence of an entire OS installation also makes VMs much bigger than containers and thus slower to boot up. Datamation also notes that administrators could expose their organizations to “VM sprawl,” a phenomenon in which they spin up new VMs without shutting down the old ones. If organizations have too many VMs, administrators won’t be able to manage them and apply security updates effectively.
The Way Forward
So, what should organizations deploy? Containers or VMs?
Actually, they can do both. What this might look like will vary from organization to organization. As Lunavi explained on its blog:
> What that combination looks like depends on each individual deployment. Weigh your applications, your future plans, your cloud providers, and platforms to figure out if you want to run containers inside VMs, which apps are best suited to a container vs. a VM, and how you can maximize your compute resources while maintaining security and avoiding sprawl.
Whatever the mix, a hybrid model isn’t a bad idea. Backblaze makes the point that the flexibility of VMs balances the minimal resource requirements of containers, granting organizations the best of both technologies.
Interested in learning more about how to make this work? Check out this post from Docker.