API security is the protection of application programming interfaces (APIs) from cyberattacks. Along with web applications, APIs are the engines of digital transformation — but they are also highly vulnerable to attack. From SQL injections to server misconfigurations, there is no shortage of cybersecurity threats that can leave APIs exposed to harm, resulting in costly data breaches and sharp reductions in productivity. API security features such as API discovery and API abuse detection help mitigate these risks, working in tandem with other security mechanisms such as bot management and web application firewalls (WAFs) to holistically protect all operating environments.
As IT environments become more complex, so does securing all of the APIs that connect the essential components and facilitate client access.
Environments now span multiple clouds, numerous applications, and multiple application architectures; incorporate open source platforms such as Kubernetes; and serve more remote employees. The result is a new set of challenges in ensuring sufficient API security. In this context, a suitable API security solution can be cloud-delivered or on-premises, with functionality including but not limited to:
To deliver these key API management and protection features, a modern API security platform may harness the power of technologies such as AI and machine learning (ML) to continuously adapt to changing threats. Multiple points of presence may also be implemented to support reliable performance and redundancy for the solution worldwide.
Because they are automated by design, APIs are uniquely vulnerable to automated cyberattacks, such as ones that attempt to replay at scale credentials stolen during data breaches (credential stuffing). Attacks leveraging programmable bots, not to mention DDoS campaigns, are also constant concerns.
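As a minimal illustration of one common countermeasure to credential stuffing, the sketch below implements a sliding-window rate limiter that throttles repeated login attempts against the same account. All names and thresholds here are hypothetical; real API security platforms layer this kind of control with IP reputation, device fingerprinting, and breached-credential checks.

```python
import time
from collections import defaultdict, deque

class LoginRateLimiter:
    """Illustrative sliding-window limiter for login attempts per account."""

    def __init__(self, max_attempts=5, window_seconds=60):
        self.max_attempts = max_attempts
        self.window = window_seconds
        self.attempts = defaultdict(deque)  # account -> attempt timestamps

    def allow(self, account, now=None):
        now = time.monotonic() if now is None else now
        q = self.attempts[account]
        # Drop attempts that have aged out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_attempts:
            return False  # burst of attempts: likely automated credential stuffing
        q.append(now)
        return True
```

A sixth attempt within the same window is rejected, which blunts large-scale credential replay while leaving legitimate users unaffected.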
The sophistication of these types of threats has only increased in tandem with the complexity of operational and information-security environments. Essentially, companies are more reliant than ever upon:
Whereas the traditional approach to web app and API security often revolved around standalone WAFs and anti-DDoS measures installed in individual data centers, a newer strategy is required to better match this multi-cloud, more API-driven reality.
More specifically, it must not only enforce access control, authentication, and authorization to keep advanced threats at bay, but also ensure holistic protection in support of a consistent security posture across multi-cloud setups. API security solutions can now deliver this level of comprehensive, layered cybersecurity and more streamlined API management through a convenient cloud-delivered service with capabilities such as:
An API security solution can minimize operational and infrastructural complexity by offering dashboards that make it easy to configure, scale, and maintain robust application and API security. Securing critical API vulnerabilities may be done via a unified self-service portal for all security administration and enforcement — in other words, a single pane of glass for policy control.
With an API security platform, you may screen traffic to or from any connected application, whether it is hosted in a public or private cloud, hosted on-premises, or built on a monolithic or microservice-based architecture. So as your APIs evolve and support additional backend services and newly migrated applications, the API security platform can keep pace and apply the right protections to all of them.
The WAF within an API security architecture is designed to shield apps and APIs from even the most sophisticated threats. Signature scanning helps identify known attacks and API vulnerabilities, while a positive security model can combat zero-day threats by permitting only the services fundamentally required by the environment.
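The positive security model described above can be sketched very simply: rather than blocking known-bad patterns, the gateway permits only explicitly declared endpoints and parameters and rejects everything else. The allowlist below is entirely hypothetical, a minimal sketch rather than any vendor's implementation.

```python
# Hypothetical allowlist: (method, path) -> permitted query/body parameters.
ALLOWED_ENDPOINTS = {
    ("GET", "/api/v1/orders"): {"limit", "offset"},
    ("POST", "/api/v1/orders"): {"item_id", "quantity"},
}

def is_request_allowed(method, path, params):
    """Permit only declared endpoints and parameters (positive security model)."""
    allowed_params = ALLOWED_ENDPOINTS.get((method, path))
    if allowed_params is None:
        return False  # endpoint not explicitly permitted
    # Any undeclared parameter is rejected, which also blocks many
    # zero-day injection attempts by default.
    return set(params) <= allowed_params
```

Because anything not on the list is denied, this model can stop novel attacks that no signature yet describes, at the cost of having to keep the allowlist in sync with the API.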
DDoS attacks come in multiple forms, including variants that convincingly mimic the behavior of legitimate requests. API security may incorporate Layer 4–7 DDoS mitigation to stop both volumetric attacks and more advanced Layer 7 campaigns exploiting API security vulnerabilities. An always-on, high-capacity, global scrubbing network may provide further support for mitigation of DDoS attacks and ensure that only clean traffic is passed back to an organization’s infrastructure.
Their highly automated nature allows malicious bots to scrape information and overload APIs with junk requests. To keep bots in check, API security tools may implement real-time mitigation through signatures and device fingerprinting. Integration with SIEMs and collaboration platforms also allows for real-time dashboards and detailed reporting on bots and other API security threats.
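The signature-based portion of bot mitigation can be illustrated with a toy check that matches request metadata against known bot patterns. The patterns below are examples only; production bot managers combine many more signals, including device fingerprinting and behavioral analysis.

```python
import re

# Example signatures for common scripted clients (illustrative, not exhaustive).
BOT_SIGNATURES = [
    re.compile(r"python-requests", re.I),
    re.compile(r"curl/", re.I),
    re.compile(r"(?:scrapy|httpclient|headless)", re.I),
]

def looks_like_bot(user_agent):
    """Return True if the User-Agent matches a known bot signature."""
    return any(sig.search(user_agent or "") for sig in BOT_SIGNATURES)
```

Flagged requests could then be challenged, rate-limited, or logged to the dashboards and SIEM integrations mentioned above.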
Citrix Web App and API Protection delivers comprehensive, integrated, and multilayered API security. Plus, Citrix ADC and Citrix ADM investments can further strengthen API security through functionality such as API gateways with customizable parameters:
Overall, effective API security requires multiple tools working in concert. Citrix API management solutions can protect your most important assets from harm and ensure your workforce is productive from anywhere.
Kubernetes is used to manage microservices architectures and can be deployed in most cloud environments. Major public cloud platforms, including Google, AWS and Microsoft Azure, all offer Kubernetes support, enabling IT to move applications to the cloud more easily. Kubernetes offers significant advantages to development teams, with capabilities including service discovery and load balancing, automated deployment and rollback, and auto-scaling based on traffic and server load.
Containerized applications are the latest in the evolution of abstracting infrastructure away from traditional servers. Gartner predicts that by 2022, more than 75% of global organizations will be running containerized applications in production.1 That’s because, as organizations have adopted DevOps for more rapid application deployment, they have found that containerization helps them develop cloud-native applications faster, keep long-running services always-on and efficiently manage new builds.
In Kubernetes, a service is a component that groups functionally similar pods and effectively load balances across them. The service maintains a stable IP address and a single DNS name for a set of pods, so that as they are created and destroyed, the other pods can connect using the same IP address. According to the Kubernetes documentation, the pods that constitute the back-end of an application may change, but the front-end shouldn’t have to track it.
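A minimal Service manifest makes this concrete: the selector groups pods by label, and clients use the Service's stable name and port rather than tracking individual pod IPs. The names below are hypothetical.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-backend   # stable DNS name clients connect to (hypothetical)
spec:
  selector:
    app: orders          # groups all pods labeled app=orders
  ports:
    - protocol: TCP
      port: 80           # stable port exposed by the Service
      targetPort: 8080   # port the pods actually listen on
```

As pods labeled `app=orders` are created and destroyed, the Service endpoint stays the same, so the front end never has to track the changing back-end pods.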
Docker containers are an efficient way to distribute packaged applications. While Kubernetes is designed to coordinate and manage Docker containers, it also faces competition from Docker Swarm, a simple container orchestration engine with native clustering capabilities, and from Apache Mesos. Jenkins, by contrast, is a continuous integration server tool that is often used alongside these orchestration platforms rather than in place of them.
Networking is central to running the distributed systems of nodes and pods that make up Kubernetes clusters. The core of Kubernetes networking is that every pod has a unique IP that is shared by all the containers in the pod, and is routable from all other pods regardless of what node they are on. Specially designated sandbox containers reserve a network namespace that is shared by all containers in a pod so that, when a container is destroyed, a pod IP doesn’t change. Having a single IP per pod enables communication among every pod in the cluster and ensures that two applications will not try to use the same ports. The IP-per-pod model is similar to the virtual machine model, enabling easier porting of applications from VMs to containers.
Kubernetes relies on each pod having its own IP address to support basic load balancing of east-west traffic between microservices pods. A Kubernetes component called kube-proxy can act as a basic load balancer by applying rule-based IP management: in its default iptables mode it selects a pod from the IP list at random, while its older userspace mode used round-robin selection to distribute network traffic among pods.
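The two simple selection strategies kube-proxy has historically used can be sketched in a few lines; the pod IPs below are illustrative placeholders, not real cluster addresses.

```python
import itertools
import random

# Hypothetical pod IPs behind one Service.
POD_IPS = ["10.244.0.5", "10.244.1.7", "10.244.2.3"]

def pick_random(pods):
    """Random selection, as in kube-proxy's iptables mode."""
    return random.choice(pods)

# Round-robin selection, as in the older userspace mode: cycle
# through the pod list in order, wrapping around at the end.
_rr = itertools.cycle(POD_IPS)

def pick_round_robin():
    return next(_rr)
```

Both strategies are connection-based and oblivious to pod load, which is why more advanced Layer 7 balancing requires components beyond kube-proxy.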
Because it lacks advanced features, including Layer 7 load balancing and observability, kube-proxy does not provide the genuine load balancing of Ingress, an API object that enables you to set up traffic routing rules for managing external access to the Kubernetes cluster. Ingress is just the first step, however. Though it specifies the traffic rules and the destination, Ingress requires an additional component, an ingress controller, to actually fulfill those rules and route external traffic to the cluster's services.
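An Ingress resource expresses those routing rules declaratively; an ingress controller watching the cluster then fulfills them. The hostnames and service names below are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress        # hypothetical name
spec:
  ingressClassName: nginx      # selects which ingress controller fulfills this
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-backend   # Service that receives matching traffic
                port:
                  number: 80
```

On its own this object does nothing; only when a matching ingress controller is running does external traffic to `api.example.com/orders` actually reach the named Service.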
Kubernetes ingress controllers manage inbound requests and implement the routing rules that Ingress resources specify, each aligned with a particular load balancing technology. A number of open-source ingress controllers are available, and all of the major cloud providers maintain ingress controllers that are compatible with their load balancers and integrate natively with other cloud services. It is common to run multiple ingress controllers within a Kubernetes cluster, selecting the appropriate controller for each Ingress resource.
For most companies that are accelerating their journey to microservices, Kubernetes is the platform of choice, enabling faster deployments, cloud portability and improved scalability and availability. Citrix enables you to choose from the broadest selection of Kubernetes and open source platforms and tools with a flexible app delivery platform that lets you move to cloud-native at your own pace. With Citrix ADC, you can:
Explore the use cases and learn more about Citrix application delivery solutions for microservices and cloud-native applications.