A book published in 1981, called Nailing Jelly to a Tree, describes software as “nebulous and difficult to get a firm grip on.” That was true in 1981, and it is no less true four decades later. Software, whether it is an application you bought or one that you built yourself, remains hard to deploy, hard to manage, and hard to run.
Docker containers, and the OCI standard for containers and their runtimes, provide a way to get a grip on software. You can use containers to package an application in such a way that its deployment and runtime issues—how to expose it on a network, how to manage its use of storage and memory and I/O, how to control access permissions—are handled outside of the application itself, and in a way that is consistent across all “containerized” apps. You can run your container on any Linux- or Windows-compatible host that has a container runtime installed.
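That packaging step can be sketched with a minimal Dockerfile. The app, image name, and port below are hypothetical, but the pattern is the standard one: start from a base image, add the application's dependencies, then declare how it runs.

```dockerfile
# Package a hypothetical Python web app and its dependencies
FROM python:3.12-slim
WORKDIR /app
# Install the app's library dependencies inside the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy in the application code itself
COPY . .
# Document the port the app listens on
EXPOSE 8080
CMD ["python", "app.py"]
```

Building (`docker build -t myapp:1.0 .`) and running (`docker run -p 8080:8080 myapp:1.0`) that image works the same way on any host with a container runtime installed; the network exposure, in this case, is handled by the `-p` flag rather than by the application.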
Containers offer many other benefits beyond encapsulation, isolation, portability, and control. Containers are small compared to virtual machines, measured in megabytes rather than gigabytes. They start almost instantly. They have their own built-in mechanisms for versioning and component reuse. And they can be easily shared via registries like the public Docker Hub or a private registry.
Container images are also immutable, which has both security and operational benefits. Any change to a containerized application must be deployed as a new, differently versioned image, rather than patched in place.
In this article, we’ll explore how containers make it easier to both build and deploy software. You’ll learn what issues containers address and how they address them, when containers are the right answer to a problem, and when they’re not.
Life before containers
For many years now, enterprise software has typically been deployed either on “bare metal” (i.e., installed on an operating system that has complete control over the underlying hardware) or in a virtual machine (installed on an operating system that shares the underlying hardware with other “guest” operating systems). Naturally, installing on bare metal made the software painfully difficult to move around and update—two constraints that made it hard for IT to respond nimbly to changes in business needs.
Then virtualization came along. Virtualization platforms (also known as hypervisors) enabled multiple virtual machines to share a single physical system, with each virtual machine emulating the behavior of an entire system—complete with its own operating system, storage, and I/O—in an isolated fashion. IT could now respond more effectively to changes in business requirements, because VMs could be cloned, copied, migrated, and spun up or down to meet demand or conserve resources.
Virtual machines also helped cut costs, because more VMs could be consolidated onto fewer physical machines. Legacy systems running older applications could be turned into VMs and physically decommissioned to save even more money.
But virtual machines still have their share of problems. Virtual machines are large (measured in gigabytes), with each one containing a full operating system. Only so many virtualized apps can be consolidated onto a single system. Provisioning a VM still takes a fair amount of time. Finally, the portability of VMs is limited. After a certain point, VMs cannot deliver the kind of speed, agility, and savings that fast-moving businesses require.
Docker and the OCI standard
Containers were conceived as a way to bundle up and organize a clutch of native Linux kernel capabilities, such as namespaces (which isolate processes from one another) and cgroups (which limit their resource use). But those capabilities were difficult to use in concert; if you wanted anything like what we'd recognize today as container-like behavior, you had to do a fair amount of manual heavy lifting.
Docker, launched in 2013, made it easy to automate all the things one had to do to containerize apps. Docker’s success as a project, and later as a company monetizing the project, made the Docker approach to containers something of a de facto standard. Over the next few years, container adoption proliferated to the point that competing implementations started to crop up with dueling ideas about how best to implement them.
Eventually, a common standard emerged. The Open Container Initiative (OCI) specification, formalized in 2017, featured contributions from Docker and its competitors. Today, Docker the company is a shadow of its former self, although Docker the product and Docker the open source project live on, and the OCI standard survives and thrives independently of any one vendor.
Benefits of containers and containerization
Containers work a little like VMs, but in a far more specific and granular way. They isolate a single application and its dependencies—all of the external software libraries the app requires to run—both from the underlying operating system and from other containers.
All of the containerized apps share a single, common operating system kernel (either Linux or Windows), but they are compartmentalized from one another and from the system at large. The operating system provides the isolation mechanisms, such as namespaces and control groups on Linux, that make this compartmentalization happen. Containers wrap those mechanisms in a convenient set of interfaces and metaphors for the developer.
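Those isolation and control mechanisms surface as ordinary flags on the container runtime. As a sketch (the image name is hypothetical, but the flags are standard Docker options), a single `docker run` can cap a container's memory and CPU, make its root filesystem read-only, and strip its Linux capabilities:

```shell
# Run a container with explicit resource limits and a hardened
# isolation profile; "myapp:1.0" is a hypothetical image
docker run -d --memory=256m --cpus=0.5 --read-only --cap-drop=ALL myapp:1.0
```

The application inside needs no knowledge of these constraints; they are imposed from outside, consistently, on any containerized app.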
The benefits of containers show up in many places. Below are some of the major advantages of using containers over VMs or bare metal.
Containers use system resources more efficiently
Instances of containerized apps use far less memory than virtual machines, they start up and stop more quickly, and they can be packed far more densely on their host hardware. All of this leads to less spending on IT.
The cost savings will vary depending on what apps are in play and how resource-intensive they are, but containers almost invariably work out as more efficient than VMs. It's also possible to save on software licensing costs, because you need far fewer operating system instances to run the same workloads.
Containers enable faster software delivery cycles
Enterprise software must respond quickly to changing conditions. That means both easy scaling to meet demand and easy updating to add new features as the business requires.
Containers make it easy to put new versions of software, with new business features, into production quickly, and to quickly roll back to a previous version if you need to. They also make it easier to implement strategies like blue/green deployments, where the new version runs alongside the old one and traffic is switched over only once the new version has been verified.
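Because every version of a containerized app is a distinct, tagged image, a rollback amounts to pointing the runtime back at a previous tag. A hedged sketch using Docker's service commands (the service name `web` and image tags are hypothetical):

```shell
# Roll the "web" service forward to version 2.0 of its image
docker service update --image myapp:2.0 web

# If version 2.0 misbehaves, revert to the previously running image
docker service rollback web
```

The old image is still present, unchanged, in the registry, which is what makes the rollback fast and reliable.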
Containers enable application portability
Where you run an enterprise application matters: behind the firewall, for the sake of keeping things close by and secure, or out in a public cloud, for easy public access and high elasticity of resources. Because containers encapsulate everything an application needs to run (and only those things), they allow applications to be shuttled easily between environments. Any host with a container runtime installed, whether that machine is a developer's laptop or a public cloud instance, can run a container, assuming it has enough resources for that particular containerized application.
Containerization simplifies microservices
Containers make it easier to build software along forward-thinking lines, so you’re not trying to solve tomorrow’s problems with yesterday’s development methods.
One of the software patterns containers simplify is microservices, where applications are constituted from many loosely coupled components. By decomposing traditional, “monolithic” applications into separate services, microservices allow the different parts of a line-of-business app to be scaled, modified, and serviced separately—by separate teams and on separate timelines, if that suits the needs of the business.
Containers aren’t required to implement microservices, but they are perfectly suited to the microservices approach and to agile development processes generally.
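A common way to express a small microservices-style system is a Compose file, with each loosely coupled component as its own container. The services, images, and ports below are illustrative, not a real application:

```yaml
# docker-compose.yml: two loosely coupled services plus a datastore
services:
  frontend:
    image: example/frontend:1.4   # hypothetical web tier
    ports:
      - "80:8080"
    depends_on:
      - api
  api:
    image: example/api:2.1        # hypothetical service tier
    environment:
      - DB_HOST=db
  db:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Each service can be rebuilt, versioned, and scaled on its own, which is the property that makes containers such a natural fit for the pattern.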
Problems containers don’t solve
The first thing to keep in mind about containers is the same piece of advice that applies to any software technology: It isn’t a silver bullet. Containers by themselves can’t solve every problem. Let’s look at a few particular problems containers don’t solve.
Containers won’t fix your security issues
Software in a container can be more secure by default than software run on bare metal, but that’s like saying a house with locked doors is more secure than a house with unlocked doors. It doesn’t say anything about the condition of the neighborhood, the visible presence of valuables tempting to a thief, the routines of the people living there, and so on. Containers can add a layer of security, but only as part of a general program of securing an application in context.
Containers don’t turn applications into microservices
If you containerize an existing app, that can reduce its resource consumption and make it easier to deploy. But it doesn’t automatically change the design of the application, or how it interacts with other applications. Those benefits only come through developer time and effort, not just a mandate to move everything into containers.
If you put an old-school monolithic or SOA-style application in a container, you end up with, well, an old-school application in a container. That doesn’t make it any more useful to your work; if anything, it might make it less useful.
Containers by themselves lack the mechanisms to compose microservice-style apps; you need a higher level of orchestration to accomplish that. Kubernetes is the most common example of such an orchestration system. A more minimal alternative, Docker swarm mode, can be used to manage many Docker containers across multiple Docker hosts.
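What that orchestration layer adds can be seen in a minimal Kubernetes Deployment, which keeps a declared number of replicas of one containerized service running across a cluster. The names and image below are illustrative:

```yaml
# A minimal Kubernetes Deployment: run three replicas of one service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3                 # the orchestrator keeps 3 copies alive
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: example/api:2.1   # hypothetical image
          ports:
            - containerPort: 8080
```

Scheduling, restarts, scaling, and service discovery are the orchestrator's job; the container itself stays a simple, single-purpose unit.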
Containers don’t replace virtual machines
One persistent myth of containers is that they make virtual machines obsolete. Many apps that used to run in a VM can be moved into a container, but that doesn't mean all of them can or should. If you're in an industry with heavy regulatory requirements, for instance, you might not be able to swap containers for VMs, because VMs provide stronger isolation than containers: each VM runs its own kernel, rather than sharing one with its neighbors.
The case for containers
Enterprise development as a field is notorious for being hidebound and slow to react to change. Enterprise developers chafe against such constraints all the time—the limitations imposed by IT, the demands of the business at large, etc. Containers give developers more of the freedom they crave, while simultaneously providing ways to build business apps that respond quickly to changing business conditions.