The primary job of a load balancer is to distribute client traffic across a set of servers that can handle it. Compared to an architecture with a single server, this improves security, scalability, resilience, and availability.

[Image: lb3.png — load-balanced topology: clients on the left, load balancer in the middle, backend servers on the right]

The load balancer accepts (“terminates”) the connections from the clients and initiates new connections to the servers (“backends”). The part of the load balancer that accepts connections from clients is called the “lb vserver” in NetScaler terminology (in HAProxy, this is the “frontend” or “listener”; in Nginx, the “server”). The backend servers that accept the load from the load balancer are called “services” in NetScaler (“server” in HAProxy, “upstream” in Nginx).
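To make the terminology mapping concrete, here is a hypothetical HAProxy configuration fragment for the same kind of topology (the names and addresses here are illustrative, not taken from this example):

```
# "frontend" in HAProxy corresponds to the NetScaler "lb vserver":
# it accepts (terminates) client connections
frontend web_front
    bind *:80
    default_backend web_servers

# each "server" line corresponds to a NetScaler "service" (a backend)
backend web_servers
    server web_a 10.0.0.2:80
    server web_b 10.0.0.3:80
```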

To create the right-hand-side configuration in NetScaler, you

  1. Create an lb vserver with an IP (“VIP”) on 53.52.51.20 and port 80
  2. Create services for each of the backend servers
  3. Bind the services from step 2 to the lb vserver
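Sketched as NetScaler CLI commands, the three steps above look roughly like this (the service names and backend IPs are placeholders; the VIP and port are the ones from the diagram):

```
add lb vserver Web HTTP 53.52.51.20 80
add service Web_A <backend-a-ip> HTTP 80
add service Web_B <backend-b-ip> HTTP 80
bind lb vserver Web Web_A
bind lb vserver Web Web_B
```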

To recreate the topology on the right-hand side, we’ll use docker-compose with this compose file (you can find it in the ex1 folder of this git repository: https://github.com/chiradeep/cpxblog/):

$ cat docker-compose.yaml
version: '2'
services:
  web_a:
    image: httpd:alpine
    expose:
      - 80

  web_b:
    image: httpd:alpine
    expose:
      - 80
  web_c:
    image: httpd:alpine
    expose:
      - 80
  cpx:
    image: store/citrix/netscalercpx:11.1-53.11
    links:
      - web_a
      - web_b
      - web_c
    ports:
      - 22
      - 88
    tty: true
    privileged: true
    environment:
      - EULA=yes

The file specifies three identical containerized web servers, each running Apache httpd. The fourth container is the NetScaler CPX, with links to the three web servers. The ports declaration tells Docker to map host ports to the container ports 22 and 88. We’ll use port 88 as the frontend/lb vserver listening port (we can’t use 80, since the NetScaler reserves it). Get this topology running:
$ docker-compose up -d
$ docker-compose ps
   Name                  Command               State                                   Ports                                  
-----------------------------------------------------------------------------------------------------------------------------
ex1_cpx_1     /bin/sh -c bash -C '/var/n ...   Up      161/udp, 0.0.0.0:32855->22/tcp, 443/tcp, 80/tcp, 0.0.0.0:32854->88/tcp
ex1_web_a_1   httpd-foreground                 Up      80/tcp                                                                 
ex1_web_b_1   httpd-foreground                 Up      80/tcp                                                                 
ex1_web_c_1   httpd-foreground                 Up      80/tcp    

We can see that Docker has mapped ports 22 and 88 on the CPX to the host, but not port 80 on the httpd containers (or ports 80, 443, and 161/udp on the CPX). These ports are visible between the containers, but not to the outside world.
To configure the NetScaler CPX, we need the IP addresses of each of the containers:
$ for c in ex1_cpx_1  ex1_web_a_1 ex1_web_b_1 ex1_web_c_1 
> do
>   ip=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $c)
>   echo "$c : $ip"
> done
ex1_cpx_1 : 172.21.0.5
ex1_web_a_1 : 172.21.0.2
ex1_web_b_1 : 172.21.0.3
ex1_web_c_1 : 172.21.0.4

Armed with these IPs, we’ll configure the NetScaler as follows:

  1. Create ‘services’ for each httpd container
  2. Create an ‘lb vserver’ using the IP and port of the CPX
  3. Bind the services to the lb vserver

$ docker-compose port ex1_cpx_1 22
0.0.0.0:32855
$ ssh -p 32855 root@127.0.0.1
root@127.0.0.1's password: 
...
root@2d644f279d16:~# cli_script.sh 'add service Web_A 172.21.0.2 HTTP 80'
exec: add service Web_A 172.21.0.2 HTTP 80
Done
root@2d644f279d16:~# cli_script.sh 'add service Web_B 172.21.0.3 HTTP 80'
exec: add service Web_B 172.21.0.3 HTTP 80
Done
root@2d644f279d16:~# cli_script.sh 'add service Web_C 172.21.0.4 HTTP 80'
exec: add service Web_C 172.21.0.4 HTTP 80
Done
root@2d644f279d16:~# cli_script.sh 'add lb vserver Web HTTP 172.21.0.5 88'
exec: add lb vserver Web HTTP 172.21.0.5 88
Done
root@2d644f279d16:~# cli_script.sh 'bind lb vserver Web Web_A'
exec: bind lb vserver Web Web_A
Done
root@2d644f279d16:~# cli_script.sh 'bind lb vserver Web Web_B'
exec: bind lb vserver Web Web_B
Done
root@2d644f279d16:~# cli_script.sh 'bind lb vserver Web Web_C'
exec: bind lb vserver Web Web_C
Done
root@2d644f279d16:~# cli_script.sh 'show lb vserver Web'
exec: show lb vserver Web
exec: show lb vserver Web
    Web (172.21.0.5:88) - HTTP  Type: ADDRESS 
    State: UP
    Effective State: UP
    Client Idle Timeout: 180 sec
    Down state flush: ENABLED
    Disable Primary Vserver On Down : DISABLED
    Appflow logging: ENABLED
    Port Rewrite : DISABLED
    No. of Bound Services :  3 (Total)   3 (Active)
    Configured Method: LEASTCONNECTION
    Current Method: Round Robin, Reason: A new service is bound      BackupMethod: ROUNDROBIN
    Mode: IP
    Persistence: NONE
    <...truncated...>
2) Web_A (172.21.0.2: 80) - HTTP State: UP  Weight: 1
3) Web_B (172.21.0.3: 80) - HTTP State: UP  Weight: 1
4) Web_C (172.21.0.4: 80) - HTTP State: UP  Weight: 1
Done

We can see that the backends are in state UP, but they haven’t received any traffic yet:
root@2d644f279d16:~# cli_script.sh 'stat lb vserver Web'
exec: stat lb vserver Web
Virtual Server Summary
                      vsvrIP  port     Protocol        State   Health  actSvcs 
Web               172.21.0.5    88         HTTP           UP      100        3

           inactSvcs 
Web                0

Virtual Server Statistics
                                          Rate (/s)                Total 
Vserver hits                                       0                    0
Requests                                           0                    0
Responses                                          0                    0
Request bytes                                      0                    0
<...truncated...>
Web_A             172.21.0.2    80         HTTP           UP        0      0/s
Web_B             172.21.0.3    80         HTTP           UP        0      0/s
Web_C             172.21.0.4    80         HTTP           UP        0      0/s
Done

Let’s send some traffic to this topology!
At the Docker host (my Mac laptop in this case):
$ docker-compose port  ex1_cpx_1 88
0.0.0.0:32854
$  wget -q -O - http://localhost:32854/

It works!

We can send more traffic in a loop:

$ i=0; while [ $i -lt 100 ]; do wget -q http://localhost:32854/ -O /dev/null; let i=i+1; done;
$ ssh -p 32855 root@127.0.0.1 "/var/netscaler/bins/cli_script.sh 'stat lb vserver Web'"
root@127.0.0.1's password: 
exec: stat lb vserver Web

Virtual Server Summary
                      vsvrIP  port     Protocol        State   Health  actSvcs 
Web               172.21.0.5    88         HTTP           UP      100        3

           inactSvcs 
Web                0

Virtual Server Statistics
                                          Rate (/s)                Total 
Vserver hits                                       0                  101
Requests                                           0                  101
Responses                                          0                  101
Request bytes                                      0                11716
<...truncated...>
Web_A             172.21.0.2    80         HTTP           UP       34      0/s
Web_B             172.21.0.3    80         HTTP           UP       34      0/s
Web_C             172.21.0.4    80         HTTP           UP       33      0/s
Done


We can see that the 101 requests we sent were split almost evenly (34/34/33) among the three backend containers!
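The 34/34/33 split is exactly what round-robin predicts for 101 requests over 3 backends. A quick shell sketch of the arithmetic (no Docker involved; purely illustrative):

```shell
# Simulate round-robin assignment of 101 requests to 3 backends,
# then tally how many requests each backend would receive.
for i in $(seq 1 101); do
  echo "backend_$(( (i - 1) % 3 ))"
done | sort | uniq -c
# Two backends get 34 requests and one gets 33
```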
