Kubernetes is a container orchestration tool—an open-source, extensible platform for deploying, scaling, and managing the complete life cycle of containerized applications across a cluster of machines. Kubernetes is Greek for helmsman, and true to its name, it allows you to coordinate a fleet of containerized applications anywhere you want to run them: on premises, in the cloud, or both.
Created by Google based on its experience running containers in production and later contributed to open source, Kubernetes has become the standard for managing containers in public cloud, hybrid cloud, and multi-cloud environments. Kubernetes is maintained by the Cloud Native Computing Foundation (CNCF) under the auspices of the Linux Foundation and supported by thousands of contributors, including top corporations such as Red Hat and IBM, as well as certified partners: experienced service providers, training providers, certified distributors, hosted platforms, and installers.
Kubernetes is used to manage microservices architectures and can be deployed in most cloud environments. The major public cloud platforms, including Google Cloud, AWS, and Microsoft Azure, all offer Kubernetes support, enabling IT to move applications to the cloud more easily. Kubernetes offers significant advantages to development teams, with capabilities including service discovery and load balancing, automated deployments and rollbacks, and auto-scaling based on traffic and server load.
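As a sketch of what automated deployment and rollback look like in practice, the following Deployment manifest declares a desired state of three replicas with a rolling-update strategy. All names and the image tag (web, nginx:1.25) are illustrative placeholders, not part of this article:

```yaml
# Hypothetical Deployment sketch: Kubernetes continuously reconciles the
# cluster toward this declared state.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate      # replace pods gradually when the spec changes
    rollingUpdate:
      maxUnavailable: 1      # keep at least two pods serving during a rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25  # changing this tag triggers an automated rollout
          ports:
            - containerPort: 80
```

Applying a manifest with a new image triggers an automated rolling deployment, and kubectl rollout undo reverts to the previous revision.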
As the containerization ecosystem matures and Kubernetes becomes the default operating system for the cloud, platform-as-a-service (PaaS) offerings built on Kubernetes and containers, such as Red Hat's OpenShift, enable developers to share source code and extensions. Developers also contribute code to the open source Kubernetes project on GitHub.
By abstracting infrastructure away from traditional servers, containerization helps DevOps teams develop cloud-native applications faster, keep long-running services always on, and manage new builds efficiently.
A Kubernetes cluster is the platform that underpins Kubernetes architecture. It brings together individual physical and virtual machines on a shared network and can be envisioned as a series of layers, each of which abstracts the layer below. If you use Kubernetes, you run a cluster, and its building blocks are the control plane, nodes, and pods.
The control plane runs on a server or, for fault tolerance and high availability, across a group of servers. Historically known as the master node, the control plane exposes the Kubernetes API and manages the worker nodes and pods in the cluster. It governs how Kubernetes interacts with your applications and is responsible for maintaining the desired state of the cluster, such as which applications are running and which container images they use. The control plane has four major components: the kube-apiserver, which serves the Kubernetes API; etcd, the consistent key-value store that holds all cluster data; the kube-scheduler, which assigns newly created pods to nodes; and the kube-controller-manager, which runs the controllers that drive the cluster toward its desired state.
Worker nodes perform tasks requested by the control plane. Three components run on every node to maintain running pods: the kubelet, an agent that ensures the containers described in pod specifications are running and healthy; kube-proxy, which maintains the network rules that route traffic to pods; and the container runtime, the software that actually runs the containers.
Pods are groups of one or more containers and the smallest deployable objects in Kubernetes architecture. The containers in a pod share the same compute resources and the same network. Each pod represents a single instance of an application in Kubernetes, and each is assigned a unique IP address, so applications can use ports without conflicts. Pods are created and destroyed on nodes as needed to match the specified desired state.
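As a concrete illustration, here is a minimal pod manifest; the name hello and the image nginx:1.25 are illustrative placeholders:

```yaml
# Minimal pod sketch: one container, one shared network identity.
apiVersion: v1
kind: Pod
metadata:
  name: hello
  labels:
    app: hello             # label used by the service example below
spec:
  containers:
    - name: hello
      image: nginx:1.25
      ports:
        - containerPort: 80   # unambiguous, because the pod has its own IP
```

In practice, pods are rarely created directly; higher-level objects such as Deployments create and replace them to maintain the desired state.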
In Kubernetes, a service is a component that groups functionally similar pods and effectively load balances across them. The service maintains a stable IP address and a single DNS name for a set of pods, so that as pods are created and destroyed, clients can keep connecting through the same address. As the Kubernetes documentation puts it, the set of pods that constitutes the back end of an application may change, but the front end shouldn't have to track it.
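A minimal sketch of such a service, assuming the pods carry the app: hello label from the pod example above:

```yaml
# Service sketch: groups all pods labeled app: hello behind one stable
# virtual IP and DNS name, load balancing across them.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  selector:
    app: hello          # any pod with this label becomes a backend
  ports:
    - port: 80          # stable port on the service's virtual IP
      targetPort: 80    # port the selected pods actually listen on
```

Inside the cluster, clients connect to the service's stable DNS name (for example, hello.default.svc.cluster.local under the default cluster domain), no matter which pods currently back it.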
Networking is central to running the distributed systems of nodes and pods that make up Kubernetes clusters. The core of Kubernetes networking is that every pod has a unique IP address that is shared by all the containers in the pod and is routable from all other pods, regardless of which node they are on. A specially designated sandbox container (often called the pause container) holds the network namespace shared by all containers in a pod, so the pod's IP doesn't change when an application container is destroyed and recreated. Having a single IP per pod enables communication among all pods in the cluster and ensures that two applications will not try to use the same ports. The IP-per-pod model is similar to the virtual machine model, making it easier to port applications from VMs to containers.
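The shared network namespace can be seen in a pod with two containers, where one reaches the other over localhost; the names and images below are illustrative:

```yaml
# Sketch: both containers share one pod IP and one network namespace,
# so the sidecar can reach the app container via localhost.
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
    - name: app
      image: nginx:1.25          # listens on port 80 inside the pod
    - name: sidecar
      image: curlimages/curl     # illustrative helper image
      command: ["sh", "-c", "sleep 5 && curl -s http://localhost:80"]
```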
Kubernetes relies on each pod having its own IP address to support basic load balancing of east-west traffic between microservice pods. The kube-proxy component can act as a basic load balancer by applying rule-based IP management: in its default iptables mode, it picks a backend pod from a service's list at random, while the alternative IPVS mode supports scheduling algorithms such as round-robin and least connections.
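For illustration, kube-proxy's mode and scheduling algorithm can be set in its configuration file; this sketch assumes the kubeproxy.config.k8s.io/v1alpha1 config API:

```yaml
# Sketch of a kube-proxy configuration selecting IPVS mode with a
# round-robin scheduler (the default mode, iptables, picks backends
# at random).
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
ipvs:
  scheduler: "rr"   # round-robin; "lc" would select least connections
```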
Because it lacks advanced features, including Layer 7 load balancing and observability, kube-proxy does not provide genuine load balancing. That is the role of Ingress, an API object that enables you to set up traffic routing rules for managing external access to the Kubernetes cluster. Ingress is just the first step, however: it specifies the traffic rules and the destination, but an additional component, an ingress controller, is required to actually grant external traffic access to the services.
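Here is a sketch of an Ingress that routes external HTTP traffic by host and path to the service from the earlier example; the hostname is an illustrative placeholder:

```yaml
# Ingress sketch: Layer 7 routing rules for external traffic. An ingress
# controller must be running in the cluster to enforce them.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello
spec:
  ingressClassName: nginx        # selects which ingress controller applies
  rules:
    - host: hello.example.com    # illustrative hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello      # the service defined earlier
                port:
                  number: 80
```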
Kubernetes ingress controllers manage inbound requests and implement the routing rules an Ingress specifies, each aligned with a specific proxy or load-balancing technology. A number of open source ingress controllers are available, and all of the major cloud providers maintain ingress controllers that are compatible with their load balancers and integrate natively with other cloud services. It is common to run multiple ingress controllers within a single Kubernetes cluster and select the appropriate one for each request.
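Selection is typically handled through IngressClass objects: each controller registers a class, and each Ingress resource names the class it wants via ingressClassName. A sketch, using the controller string published by the ingress-nginx project:

```yaml
# IngressClass sketch: registers a controller under the name "nginx",
# which the Ingress example above selects via ingressClassName.
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
```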
For most companies accelerating their journey to microservices, Kubernetes is the platform of choice, enabling faster deployments, cloud portability, and improved scalability and availability. Citrix enables you to choose from the broadest selection of Kubernetes and open source platforms and tools, with a flexible app delivery platform that lets you move to cloud native at your own pace. With Citrix ADC, you can: