Containerization is defined as a form of operating system virtualization, through which applications are run in isolated user spaces called containers, all using the same shared operating system (OS). A container is essentially a fully packaged and portable computing environment: everything an application needs to run (its binaries, libraries, configuration files and dependencies) is encapsulated and isolated inside it.
Containerization is more efficient than traditional virtualization: there’s less overhead during startup, and no need to set up a separate guest OS for each application, since all containers share the same OS kernel. Because of this high efficiency, containerization is commonly used for packaging up the many individual microservices that make up modern apps. Citrix uses containerization with CPX, an application delivery controller (ADC) that supports more scalable, agile and portable application delivery.
Each container is an executable package of software, running on top of a host OS. A single host may support many containers (tens, hundreds or even thousands) concurrently, such as in the case of a complex microservices architecture that uses numerous containerized ADCs. This setup works because all containers run minimal, resource-isolated processes that other containers cannot access.
Think of a containerized application as the top layer of a multi-tier cake: at the bottom sits the hardware of the infrastructure, above it the host OS and its kernel, then the container engine, and finally the containers themselves, each holding an application together with its supporting binaries and libraries.
Containerization as we know it evolved from cgroups, a Linux kernel feature for isolating and controlling resource usage (e.g., how much CPU and RAM, and how many threads, a given process can access). Cgroups were combined with kernel namespaces to form Linux Containers (LXC), which added more advanced isolation of components such as routing tables and file systems. An LXC container can do things such as:
● Mount a file system.
● Run commands as root.
● Obtain an IP address.
It performs these actions in its own private user space. While it includes the special bins/libs for each application, an LXC container does not package up the OS kernel or any hardware, meaning it is very lightweight and can be run in large numbers even on relatively limited machines.
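To make those resource controls concrete, here is a minimal Python sketch that inspects the cgroup limits applied to the current process. It assumes a Linux host using the cgroup v2 unified hierarchy mounted at /sys/fs/cgroup; cpu.max, memory.max and pids.max are the standard v2 control files for CPU, RAM and thread/process limits.

```python
from pathlib import Path

# Minimal sketch: inspect the cgroup v2 limits applied to the current process.
# Assumes a Linux host with the unified cgroup v2 hierarchy at /sys/fs/cgroup.

def current_cgroup() -> Path:
    # Under cgroup v2, /proc/self/cgroup looks like "0::/user.slice/...".
    rel = Path("/proc/self/cgroup").read_text().strip().split("::")[-1]
    return Path("/sys/fs/cgroup") / rel.lstrip("/")

def read_limit(cg: Path, name: str) -> str:
    f = cg / name
    return f.read_text().strip() if f.exists() else "n/a"

cg = current_cgroup()
print("cpu.max:   ", read_limit(cg, "cpu.max"))     # CPU quota and period ("max" = unlimited)
print("memory.max:", read_limit(cg, "memory.max"))  # RAM ceiling in bytes
print("pids.max:  ", read_limit(cg, "pids.max"))    # thread/process cap
```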
LXC serves as the basis for Docker, which launched in 2013 and quickly became the most popular container technology, effectively an industry standard, although the specifications set by the Open Container Initiative (OCI) have since become central to containerization. Docker contributes to the OCI specs, which define standards for the image formats and runtimes that container engines use.
Someone booting a container, Docker or otherwise, can expect an identical experience regardless of the computing environment. The same set of containers can be run and scaled whether the user is on a Linux distribution or even Microsoft Windows. This cross-platform compatibility is essential to today’s digital workspaces, in which workers rely on multiple devices, OSes and interfaces to get things done.
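As an illustration of that portability, the sketch below runs a container through Docker’s Python SDK (the docker package); it assumes the SDK is installed and a Docker Engine is running locally, and the alpine image is just an example. The same few lines behave identically on any host the engine supports.

```python
import docker  # pip install docker; assumes a local Docker Engine is running

client = docker.from_env()

# The same image runs identically on any host with a container engine,
# whether a Linux distribution or Windows (via a Linux VM or Hyper-V isolation).
output = client.containers.run(
    "alpine:3.19", ["echo", "hello from a container"], remove=True
)
print(output.decode().strip())
```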
The most distinctive feature of containerization is that it happens at the OS level, with all containers sharing one kernel. That is not the case with virtualization via virtual machines (VMs). Like containerization, traditional virtualization allows for full isolation of applications so that they run independently of each other using actual resources from the underlying infrastructure. But the differences matter more: each VM bundles its own guest OS and virtualized hardware on top of a hypervisor, which makes VMs much larger, slower to boot and more resource-intensive than containers that share a single host kernel.
Even so, running multiple VMs on relatively powerful hardware remains a common paradigm in application development and deployment. Digital workspaces commonly feature both virtualization and containerization, with the shared goal of making applications as readily available and scalable as possible for employees.
Containerized apps can be readily delivered to users in a digital workspace. More specifically, containerizing a microservices-based application, a set of Citrix ADCs or a database (among other possibilities) has a broad spectrum of distinctive benefits, ranging from superior agility during software development to easier cost controls.
Compared to VMs, containers are simpler to set up, whether a team is using a UNIX-like OS or Windows. The necessary developer tools are universal and easy to use, allowing for the quick development, packaging and deployment of containerized applications across OSes. DevOps engineers and teams can (and do) leverage containerization technologies to accelerate their workflows.
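As a hedged example of that workflow, the sketch below packages an application into an image and pushes it to a registry using Docker’s Python SDK; the ./myapp build path, the registry address and the tag are all placeholders.

```python
import docker  # assumes the docker SDK and a Dockerfile in ./myapp (placeholder path)

client = docker.from_env()

# Package an application and its dependencies into a portable image,
# then ship it to a shared registry (registry address and tag are placeholders).
image, build_logs = client.images.build(path="./myapp", tag="registry.example.com/myapp:1.0")
for line in build_logs:
    print(line.get("stream", ""), end="")
client.images.push("registry.example.com/myapp", tag="1.0")
```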
A container doesn’t require a full guest OS or a hypervisor. That reduced overhead translates into more than faster boot times, smaller memory footprints and generally better performance: it also trims costs, since organizations can reduce some of the server and licensing costs that would otherwise have gone toward supporting a heavier deployment of multiple VMs. In this way, containers enable greater server efficiency and cost-effectiveness.
Containers make the ideal of “write once, run anywhere” a reality. Each container is abstracted from the host OS and will run identically in any location. As such, it can be written for one host environment and then ported and deployed to another, as long as the new host supports the container technologies and OSes in question. Linux containers account for a big share of all deployed containers and can be ported across different Linux-based OSes, whether on-prem or in the cloud. On Windows, Linux containers can be reliably run inside a Linux VM or through Hyper-V isolation. Such compatibility supports digital workspaces, in which numerous clouds, devices and workflows intersect.
If one container fails, others sharing the same OS kernel are not affected, thanks to the user space isolation between them. That benefits microservices-based applications, in which potentially many different components support a larger program. Microservices within specific containers can be repaired, redeployed and scaled without causing downtime of the application.
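At the engine level, that isolation can be paired with automatic restarts, so a crashing microservice heals itself without touching its neighbors. The sketch below uses the restart policy option in Docker’s Python SDK; the orders-svc:2.4 image name is hypothetical.

```python
import docker  # assumes a local Docker Engine and the docker SDK

client = docker.from_env()

# If this service crashes, only its own container restarts; neighbors
# sharing the same kernel are untouched. "orders-svc:2.4" is a placeholder image.
client.containers.run(
    "orders-svc:2.4",
    detach=True,
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 5},
)
```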
Container orchestration via a platform such as Kubernetes makes it practical to manage containerized apps and services at scale. Using Kubernetes, it’s possible to automate rollouts and rollbacks, orchestrate storage systems, perform load balancing and restart any failing containers. Kubernetes is compatible with many container engines, including Docker and other OCI-compliant runtimes.
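For example, scaling a containerized service with the official Kubernetes Python client takes a single API call. This sketch assumes the kubernetes package is installed and a kubeconfig is already set up; the web deployment in the default namespace is a placeholder.

```python
from kubernetes import client, config  # pip install kubernetes

# Assumes a kubeconfig is already configured; "web"/"default" are placeholders.
config.load_kube_config()
apps = client.AppsV1Api()

# Scale a Deployment; Kubernetes rolls out the change and replaces any
# failing containers to converge on the declared state.
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```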
A container may support almost any type of application that would previously have been virtualized or run natively on a machine. At the same time, several computing paradigms are especially well suited to containerization, including:
The microservices that comprise an application may be packaged and deployed in containers and managed on scalable cloud infrastructure. Key benefits of containerizing microservices include minimal overhead, independent scaling, and easy management via a container orchestrator such as Kubernetes.
Citrix ADC can help with the transition from monolithic to microservices-based applications. More specifically, it assists admins, developers and site reliability engineers with networking concerns, such as traffic management, throughout that shift.
When microservices are packaged in containers that are deployed at scale, a container orchestration platform is necessary for managing the life cycles of containers.
Kubernetes is the most prominent container orchestration platform. Originally developed by Google, it has since been open-sourced and is now maintained by the Cloud Native Computing Foundation.
While it isn’t the only such solution, its feature set is indicative of what a modern container orchestrator should be capable of: automating rollouts and rollbacks, orchestrating storage, balancing load across containers, restarting failed containers and managing configuration and secrets at scale.