Containerization is a form of operating system (OS) virtualization in which applications run in isolated user spaces, called containers, while sharing the same OS. A container is essentially a fully packaged and portable computing environment:
- Everything an application needs to run – its binaries, libraries, configuration files and dependencies – is encapsulated and isolated in its container.
- The container itself is abstracted away from the host OS, with only limited access to underlying resources – much like a lightweight virtual machine (VM).
- As a result, the containerized application can be run on various types of infrastructure—on bare metal, within VMs, and in the cloud—without needing to refactor it for each environment.
Containers are also highly efficient: there is less overhead during startup, and no need to set up a separate guest OS for each application, since they all share the same OS kernel. Because of this high efficiency, containerization is commonly used for packaging up the many individual microservices that make up modern apps. Citrix uses containerization with CPX, an application delivery controller (ADC) that supports more scalable, agile and portable application delivery.
Each container is an executable package of software running on top of a host OS. A single host may support many containers (tens, hundreds or even thousands) concurrently, as in the case of a complex microservices architecture that uses numerous containerized ADCs. This setup works because each container runs minimal, resource-isolated processes that others cannot access.
Think of a containerized application as the top layer of a multi-tier cake:
- At the bottom, there is the hardware of the infrastructure in question, including its CPU(s), disk storage and network interfaces.
- Above that, there is the host OS and its kernel – the latter serves as a bridge between the software of the OS and the hardware of the underlying system.
- The container engine, which is particular to the containerization technology being used, sits atop the host OS, along with the minimal base images that containers are built from.
- At the very top are the binaries and libraries (bins/libs) for each application and the apps themselves, running in their isolated user spaces (containers).
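The cake layers map directly onto how container images are built. As a hedged sketch (the base image, package and app file are all illustrative), a Dockerfile stacks the bins/libs and the application itself on top of a minimal base image:

```dockerfile
# Minimal base layer (illustrative): a small Linux userland, not a full OS
FROM alpine:3.19

# Bins/libs the application needs
RUN apk add --no-cache python3

# The application itself, which will run in its own isolated user space
COPY app.py /app/app.py
CMD ["python3", "/app/app.py"]
```

Each instruction adds one layer to the image, and the host OS kernel underneath is never packaged at all.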
Containerization as we know it evolved from cgroups (control groups), a Linux kernel feature for isolating and controlling resource usage, e.g., how much CPU and RAM and how many threads a given process can access. Cgroups, combined with namespaces that isolate components such as routing tables and file systems, formed the basis of Linux Containers (LXC). An LXC container can do things such as:
- Mount a file system.
- Run commands as root.
- Obtain an IP address.
It performs these actions in its own private user space. While it includes the special bins/libs for each application, an LXC container does not package up the OS kernel or any hardware, meaning it is very lightweight and can be run in large numbers even on relatively limited machines.
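Those capabilities can be sketched with the LXC command-line tools (the container name, distribution and release are arbitrary, and the commands assume the LXC tools are installed and run with root privileges):

```shell
# Create a container from a minimal downloaded root file system (no kernel)
sudo lxc-create -n demo -t download -- -d alpine -r 3.19 -a amd64

# Start it, then run a command as root inside its private user space
sudo lxc-start -n demo
sudo lxc-attach -n demo -- whoami   # "root", but only within the container

# The container obtains its own IP address on the LXC bridge
sudo lxc-info -n demo -iH
```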
LXC serves as the basis for Docker, which launched in 2013 and quickly became the most popular container technology – effectively an industry standard, although the specifications set by the Open Container Initiative (OCI) have since become central to containerization. Docker is a contributor to the OCI specs, which specify standards for the image formats and runtimes that container engines use.
Someone booting a container, Docker or otherwise, can expect an identical experience regardless of the computing environment. The same set of containers can be run and scaled whether the user is on a Linux distribution or even Microsoft Windows. This cross-platform compatibility is essential to today’s digital workspaces, in which workers rely on multiple devices, OSes and interfaces to get things done.
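With Docker, for instance, that consistency comes down to running the same image everywhere; the image name and tag below are just examples:

```shell
# The same image, pulled by name (or pinned by digest), behaves identically
# on any Docker host: a Linux server, a macOS laptop or a Windows desktop.
docker pull nginx:1.25
docker run --rm -d --name web nginx:1.25
docker stop web
```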
The most distinctive feature of containerization is that it happens at the OS level, with all containers sharing one kernel. That is not the case with virtualization via virtual machines (VMs):
- A VM runs on top of a hypervisor, specialized software or firmware for creating and operating VMs on a host machine, such as a server or laptop.
- Via the hypervisor, every VM is assigned not only the essential bins/libs, but also a virtualized hardware stack including CPUs, storage and network adapters.
- To run all of that, each VM relies on a full-fledged guest OS. The hypervisor itself may run on the host machine's OS or directly on the hardware as a bare-metal application.
Like containerization, traditional virtualization allows for full isolation of applications so that they run independently of each other using actual resources from the underlying infrastructure. But the differences outweigh the similarities:
- There is significant overhead involved, due to all VMs requiring their own guest OSes and virtualized kernels, plus the need for a heavy extra layer (the hypervisor) between them and the host.
- The hypervisor can also introduce additional performance issues, especially when it is running on a host OS, for example on Ubuntu.
- Because of the high overall resource overhead, a host machine that might comfortably run 10 or more containers could struggle to support even a handful of VMs.
Still, running multiple VMs on relatively powerful hardware remains a common paradigm in application development and deployment. Digital workspaces commonly feature both virtualization and containerization, toward the common goal of making applications as readily available and scalable as possible to employees.
Containerized apps can be readily delivered to users in a digital workspace. More specifically, containerizing a microservices-based application, a set of Citrix ADCs or a database (among other possibilities) has a broad spectrum of distinctive benefits, ranging from superior agility during software development to easier cost controls.
Compared to VMs, containers are simpler to set up, whether a team is using a UNIX-like OS or Windows. The necessary developer tools are universal and easy to use, allowing for the quick development, packaging and deployment of containerized applications across OSes. DevOps engineers and teams can (and do) leverage containerization technologies to accelerate their workflows.
A container doesn’t require a full guest OS or a hypervisor. That reduced overhead translates into more than just faster boot times, smaller memory footprints and generally better performance, though. It also helps trim costs, since organizations can reduce some of their server and licensing costs, which would have otherwise gone toward supporting a heavier deployment of multiple VMs. In this way, containers enable greater server efficiency and cost-effectiveness.
Containers make the ideal of “write once, run anywhere” a reality. Each container has been abstracted from the host OS and will run the same in any location. As such, it can be written for one host environment and then ported and deployed to another, as long as the new host supports the container technologies and OSes in question. Linux containers account for a big share of all deployed containers and can be ported across different Linux-based OSes whether they’re on-prem or in the cloud. On Windows, Linux containers can be reliably run inside a Linux VM or through Hyper-V isolation. Such compatibility supports digital workspaces, in which numerous clouds, devices and workflows intersect.
If one container fails, others sharing the OS kernel are not affected, thanks to the user space isolation between them. That benefits microservices-based applications, in which potentially many different components support a larger program. Microservices within specific containers can be repaired, redeployed and scaled without causing downtime of the application.
Container orchestration via a platform such as Kubernetes makes it practical to manage containerized apps and services at scale. Using Kubernetes, it's possible to automate rollouts and rollbacks, orchestrate storage systems, perform load balancing and restart failing containers. Kubernetes is compatible with many container engines, including Docker and any OCI-compliant runtime.
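As a minimal sketch of what orchestration looks like in practice (names and image are illustrative), a Kubernetes Deployment declares a desired state that the platform then maintains, restarting failed containers and keeping the replica count steady:

```yaml
# A hedged example Deployment: Kubernetes keeps three replicas of this
# container running, replacing any that fail, and a Service can then
# load balance traffic across them.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```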
A container can support almost any type of application that would traditionally have been virtualized or run natively on a machine. At the same time, several computing paradigms are especially well-suited to containerization, including:
- Microservices: A microservices architecture can be efficiently configured as a set of containers operating in tandem, spun up and decommissioned as needed.
- Databases: Database shards can be containerized and each app given its own dedicated database instead of needing to connect all of them to a monolithic database.
- Web servers: Spinning up a web server within a container requires just a few command line inputs to get started, plus it avoids the need to run the server directly on the host.
- Containers within VMs: Containers may be run within VMs, usually to maximize hardware utilization, talk to specific services in the VM and/or increase security.
- Citrix ADCs: An ADC manages the performance and security of an app. When containerized, it makes L4-L7 services more readily available in DevOps environments.
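To make the web-server case above concrete: with Docker, a server can be up in a single command, with nothing installed on the host itself (image, port and container name are examples):

```shell
# Run an nginx web server in a container; -d detaches it and
# -p maps host port 8080 to the container's port 80.
docker run -d --name web -p 8080:80 nginx

# The server is now reachable on the host without being installed there
curl http://localhost:8080

# Tear it down when finished
docker rm -f web
```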
The microservices that comprise an application may be packaged and deployed in containers and managed on scalable cloud infrastructure. Key benefits of containerizing microservices include minimal overhead, independent scaling and easy management via a container orchestrator such as Kubernetes.
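The independent-scaling benefit can be sketched with kubectl; the deployment names below are hypothetical:

```shell
# Scale one microservice without touching the others
kubectl scale deployment/checkout --replicas=5
kubectl scale deployment/catalog --replicas=2

# Or let Kubernetes autoscale a service based on CPU usage
kubectl autoscale deployment/checkout --min=2 --max=10 --cpu-percent=80
```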
Citrix ADC can help with the transition from monolithic to microservices-based applications, assisting admins, developers and site reliability engineers with networking issues such as traffic management along the way.
When microservices are packaged in containers that are deployed at scale, a container orchestration platform is necessary for managing the life cycles of containers.
Kubernetes is the most prominent container orchestration platform. Originally developed by Google, it has since been open-sourced and is now managed by the Cloud Native Computing Foundation. Among other capabilities, Kubernetes can:
- Expose containers by DNS name or IP address.
- Handle load balancing and traffic distribution for containers.
- Automatically mount local and cloud-based storage.
- Allocate specific CPU and RAM resources to containers and then fit them onto nodes.
- Replace or kill problematic containers without jeopardizing application performance and uptime.
- Manage sensitive information like passwords and tokens without rebuilding containers.
- Roll out changes toward a new desired state and roll back to a previous container version when needed.
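Several of these capabilities map onto one-line kubectl commands. A hedged sketch, with all resource names illustrative:

```shell
# Expose containers behind a stable DNS name / IP with load balancing
kubectl expose deployment/web --port=80 --type=LoadBalancer

# Manage sensitive data without rebuilding images
kubectl create secret generic db-creds --from-literal=password=example

# Roll out a new image, watch the rollout, and undo it if it misbehaves
kubectl set image deployment/web web=nginx:1.26
kubectl rollout status deployment/web
kubectl rollout undo deployment/web
```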