Snapshot: Virtualization

The Promise of Containers

Containers have become one of the hottest buzzword technologies in IT. Although the concept has been around for years, deploying containers in a modern, virtualized IT environment promises a new level of speed and efficiency in developing and deploying applications and services.

Basically, a container is a way to encapsulate an application along with all the dependencies, libraries, configuration files and other components it needs to run. It also typically carries metadata that tells the container runtime how to start the application and what it expects from whatever operating system it runs under.
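In Docker, the most widely used container format, that packaging is expressed in a short build file. The following is a minimal, hypothetical Dockerfile; the `app.py` and `requirements.txt` file names are assumptions for illustration, not part of any standard:

```dockerfile
# Hypothetical example: bundle an application with its dependencies,
# libraries and configuration into a single container image.
FROM python:3.12-slim
WORKDIR /app

# Install the application's library dependencies.
COPY requirements.txt .
RUN pip install -r requirements.txt

# Copy in the application code and configuration files.
COPY . .

# Metadata telling the container runtime how to start the application.
CMD ["python", "app.py"]
```

Everything the application needs travels with it in the image, which is what makes the same container run unchanged on a laptop, a server or a cloud host.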

Think of it as a cousin to traditional virtualization, where each virtual machine (VM) is a complete disk image that includes a full copy of an operating system, an application and all of the files it needs to run, along with enough disk space to store them.

How an organization uses the two together depends on its IT environment and its requirements. If it wants to run multiple operating systems within a single physical server, it will choose VMs. If it needs a number of applications running under the same operating system, or multiple versions of the same application, it can use containers.

Simple efficiency is one of the primary advantages of containers. VMs require a lot of storage because each one carries a full operating system image, and running those guest systems consumes much of the physical server's compute resources. A container is smaller because it holds only the application and its dependencies and runs off the host machine's operating system.

Alternatively, an organization could run multiple containers within each VM. Depending on the type of container used, it's possible to pack three or more times as many applications onto a physical server as with traditional VMs alone.

Another advantage is flexibility. It takes just seconds to create and deploy a container, as opposed to the minute or more needed for a VM. Taking containers down is just as fast. Containers are also useful as agencies move more of their applications to the cloud: they can run on a variety of hosts and across different types of cloud platforms. That means more portability of applications across cloud environments and less dependence on specific cloud providers.

The biggest concern with containers thus far is security. Because all containers on a host share the same operating system, any vulnerability in that operating system's kernel exposes every application on the host to exploitation. That poses a particular problem for any organization operating in the public cloud, where multi-tenancy is the norm. VMs, on the other hand, are more self-contained and can be better isolated and secured.

A less obvious concern is scalability. When it comes to a single-server application, containers would seem to have a real advantage. It's easy to quickly spin up multiple containers, and they place less demand on server resources than equivalent VMs. On the other hand, it's not as clear how they would scale for a more complex enterprise application.

Containers are considered well-suited for microservices, for example, where a single application is developed as a suite of small services. Each service runs independently but communicates with the others. With that architecture, it's easy to adapt applications simply by changing or adding services. Each service can be developed by a separate team, using different languages, databases and so on, and then delivered as a container. For that and other reasons, containers and traditional VMs will most likely co-exist side by side for some time.
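The microservices pattern described above can be sketched with nothing but the Python standard library. This is a hypothetical, minimal illustration, not a production design: one small "greeting" service exposes an HTTP endpoint, and another process calls it across the network boundary. In a real deployment, each such service would be packaged and shipped as its own container.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical "greeting" service: one small service in a larger suite.
# It runs independently and is reached by other services over HTTP.
class GreetingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"greeting": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # silence per-request logging for the demo

def start_service(port):
    """Start the service on localhost in a background thread."""
    server = HTTPServer(("127.0.0.1", port), GreetingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    server = start_service(8901)
    # Another service (or client) talks to it only over the network,
    # so it never needs to share code, language or database with it.
    with urlopen("http://127.0.0.1:8901/") as resp:
        print(json.loads(resp.read())["greeting"])  # prints "hello"
    server.shutdown()
```

Because the only contract between services is the HTTP/JSON interface, one team could rewrite this service in a different language, or swap its data store, without touching the services that call it.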