Mirantis OpenStack is a popular OpenStack distribution, and Citrix has released an official XenServer Fuel plug-in based on Mirantis OpenStack 8.0 which, for the first time, integrates with Neutron.

You can download our plug-in from the Mirantis Fuel plug-in page.

In this post, I will focus on the networking part, since this is the first release of the XenServer Fuel plug-in to support Neutron. For an introduction to Mirantis OpenStack and the XenServer plug-in, please refer to a previous blog post.

1. Neutron in brief

Neutron is the OpenStack project that provides “networking as a service” (NaaS). It is a stand-alone service alongside other services such as Nova (compute), Glance (image), and Cinder (block storage). It provides high-level abstractions of network resources, such as networks, subnets, ports, and routers. Furthermore, unlike nova-network, it enables SDN by delegating the implementation of these abstractions to plug-ins.

The picture below, from the official OpenStack website, shows a typical deployment with Neutron.

  • Controller node: Provides management functions, such as the API servers and scheduling services for Nova, Neutron, Glance and Cinder. It is the central node where most standard OpenStack services and tools run.
  • Network node: Provides network services: it runs the networking plug-in, a layer 2 agent, and several layer 3 agents, and handles external connectivity for virtual machines.
    • Layer 2 services include provisioning of virtual networks and tunnels.
    • Layer 3 services include routing, NAT, and DHCP.
  • Compute node: Provides the compute service; it manages the hypervisors and virtual machines.

Note: With Mirantis OpenStack, the network node and the controller node are combined into a single controller node.

http://docs.openstack.org/security-guide/_images/1aa-network-domains-diagram.png

2. How Neutron works under XenServer

Back to XenServer and Neutron; let’s start with the networks involved.

2.1 Logical networks

With Mirantis OpenStack, there are several networks involved.

[[code]]
OpenStack Public network (br-ex)
OpenStack Private network (br-prv)
Internal network
    OpenStack Management network (br-mgmt)
    OpenStack Storage network (br-storage)
    Fuel Admin(PXE) network (br-fw-admin)
[[/code]]
  • OpenStack Public network (br-ex):

This network should be represented as a tagged or untagged, isolated L2 network segment. It serves external API access and provides VMs with connectivity to/from networks outside the cloud. Floating IPs are implemented by the L3 agent plus NAT rules on the controller nodes.

  • Private network (br-prv):

This network carries traffic to/from tenant VMs. Under XenServer we use Open vSwitch VLANs (802.1Q). OpenStack tenants can define their own L2 private networks, and overlapping IP ranges are allowed.
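To illustrate the IP-overlap point, here is a small sketch (the network names and VLAN IDs are hypothetical examples; only Python’s `ipaddress` module is used):

```python
import ipaddress

# Two tenant networks may use the identical CIDR; isolation comes from
# the per-network 802.1Q VLAN tag, not from the address space itself.
# Network names and VLAN IDs below are hypothetical.
tenant_nets = {
    "tenant-a-net": {"cidr": "192.168.30.0/24", "vlan": 1173},
    "tenant-b-net": {"cidr": "192.168.30.0/24", "vlan": 1174},
}

a = ipaddress.ip_network(tenant_nets["tenant-a-net"]["cidr"])
b = ipaddress.ip_network(tenant_nets["tenant-b-net"]["cidr"])

# The subnets overlap completely...
assert a.overlaps(b)
# ...but the traffic never mixes, because each network is carried
# on its own VLAN.
assert tenant_nets["tenant-a-net"]["vlan"] != tenant_nets["tenant-b-net"]["vlan"]
```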

  • Internal network:
    • OpenStack Management network (br-mgmt): This network is used for OpenStack management, i.e. to access the OpenStack services. It can be a tagged or untagged VLAN network.
    • OpenStack Storage network (br-storage): This network carries storage traffic, such as replication traffic from Ceph. It can be a tagged or untagged VLAN network.
    • Fuel Admin (PXE) network (br-fw-admin): This network is used for creating and booting new nodes. All controller and compute nodes boot from this PXE network and get their IP addresses from Fuel’s internal DHCP server.

mos-fuel-xs-networks

2.2 Traffic flow

In this section we will explain how traffic flows from a VM to the external network and between VMs, and look at the OVS rules that support this behavior.

2.2.1 Traffic from VM to external network

The major difference when using XenServer as the hypervisor under OpenStack is its privileged domain, dom0. When a VM boots, the VM’s NIC (virtual NIC) is actually the frontend; dom0 manages its backend, known as the VIF. So dom0 is naturally involved in the VM’s NIC and traffic. As you can see from the picture below, the neutron-ovs-agent runs in the compute node (the unprivileged domain, domU), but the OVS it controls actually resides in dom0.

neutron-vlan-v2.png

Let’s assume VM1 has fixed IP 192.168.30.4 and floating IP 10.71.17.81. Here is how the traffic flows when VM1 pings www.google.com.

  • In compute node:

Step-1. VM1 (eth1) sends the packet out through the tap port.

Step-2. Security group rules on the Linux bridge qbr handle firewalling and state tracking for the packets.
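As a rough sketch of what this stateful filtering does (the allow rules below are hypothetical; the real rules come from the tenant’s security groups):

```python
# Rough model of stateful security-group filtering on the qbr bridge.
# A flow is identified here by (src, dst, proto, dst_port).
# The inbound allow rules are hypothetical examples.
ALLOWED_INBOUND = {("tcp", 22), ("icmp", None)}
established = set()

def outbound(src, dst, proto, port):
    """Egress is allowed; remember the flow so replies can come back."""
    established.add((dst, src, proto, port))  # key as seen by the reply
    return True

def inbound(src, dst, proto, port):
    """Ingress: allow replies to tracked flows, else consult the rules."""
    if (src, dst, proto, port) in established:
        return True
    return (proto, port) in ALLOWED_INBOUND

# VM1 pings out; the echo reply is allowed because the flow is tracked.
outbound("192.168.30.4", "8.8.8.8", "icmp", None)
assert inbound("8.8.8.8", "192.168.30.4", "icmp", None)
# An unsolicited inbound connection must match an allow rule.
assert inbound("10.0.0.9", "192.168.30.4", "tcp", 22)
assert not inbound("10.0.0.9", "192.168.30.4", "tcp", 80)
```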

Step-3. VM1’s packets arrive at the qvo port, where internal VLAN tag 16 is added.

[[code]]
  Bridge br-int
    fail_mode: secure
    Port br-int
        Interface br-int
            type: internal
    Port "qvof5602d85-2e"
        tag: 16
        Interface "qvof5602d85-2e"
[[/code]]

Step-4. VM1’s packets arrive at port int-br-prv, triggering OpenFlow rules that rewrite internal tag 16 to physical VLAN 1173.

[[code]]
    cookie=0x0, duration=12104.028s, table=0, n_packets=257, n_bytes=27404, idle_age=88, priority=4,in_port=7,dl_vlan=16 actions=mod_vlan_vid:1173,NORMAL
[[/code]]
  • In network node:

Step-5. VM1’s packets travel over the physical VLAN network to the network node, arriving at bridge br-int via port int-br-prv and triggering OpenFlow rules that rewrite physical VLAN 1173 to internal tag 6.

[[code]]
  Bridge br-int
    Port int-br-prv
        Interface int-br-prv
            type: patch
            options: {peer=phy-br-prv}
[[/code]]

OpenFlow rules:

[[code]]
  ovs-ofctl dump-flows br-int
  NXST_FLOW reply (xid=0x4):
    cookie=0xbe6ba01de8808bce, duration=12594.481s, table=0, n_packets=253, n_bytes=29517, idle_age=131, priority=3,in_port=1,dl_vlan=1173 actions=mod_vlan_vid:6,NORMAL
[[/code]]
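Putting the two flow rules together (tag 16 → VLAN 1173 on the compute node, VLAN 1173 → tag 6 on the network node), the tag rewriting done by mod_vlan_vid can be sketched as:

```python
# A minimal model of the mod_vlan_vid OpenFlow actions shown above:
# each node keeps its own mapping between its local (internal) VLAN tag
# on br-int and the physical ("provider") VLAN used on the wire.
# Mappings taken from the flow dumps above.
COMPUTE_LOCAL_TO_PHYSICAL = {16: 1173}   # br-int tag -> wire VLAN
NETWORK_PHYSICAL_TO_LOCAL = {1173: 6}    # wire VLAN -> br-int tag

def send_from_compute(local_tag: int) -> int:
    """Tag rewrite applied when a packet leaves the compute node's br-int."""
    return COMPUTE_LOCAL_TO_PHYSICAL[local_tag]

def receive_on_network_node(wire_vlan: int) -> int:
    """Tag rewrite applied when the packet enters the network node's br-int."""
    return NETWORK_PHYSICAL_TO_LOCAL[wire_vlan]

wire = send_from_compute(16)           # tag 16 -> VLAN 1173
local = receive_on_network_node(wire)  # VLAN 1173 -> tag 6
assert (wire, local) == (1173, 6)
```

The internal tags are meaningful only within one host’s br-int; only the physical VLAN ID is shared across nodes.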

Step-6. VM1’s packets, now carrying internal tag 6, enter the virtual router via the qr port.

[[code]]
  Bridge br-int
    Port "tapb977f7c3-e3"
        tag: 6
        Interface "tapb977f7c3-e3"
            type: internal
    Port "qr-4742c3a4-a5"
        tag: 6
        Interface "qr-4742c3a4-a5"
            type: internal
[[/code]]

ip netns exec qrouter-0f23c70d-5302-422a-8862-f34486b37b5d route

[[code]]
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
    default         10.71.16.1      0.0.0.0         UG    0      0        0 qg-1270ddd4-bb
    10.10.0.0       *               255.255.255.0   U     0      0        0 qr-b747d7a6-ed
    10.71.16.0      *               255.255.254.0   U     0      0        0 qg-1270ddd4-bb
    192.168.30.0    *               255.255.255.0   U     0      0        0 qr-4742c3a4-a5
[[/code]]

The qr port lives in a Linux network namespace and is used for routing within the tenant’s private networks. VM1’s packets still carry the fixed IP 192.168.30.4 at this point, and from the routing table above we can see that the relevant port is qr-4742c3a4-a5.
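The kernel’s route selection inside the namespace is a longest-prefix match. A small sketch using the routes from the table above (only Python’s `ipaddress` module is used):

```python
import ipaddress

# Routes copied from the qrouter namespace table above
# ("default" written as 0.0.0.0/0, netmasks as prefix lengths).
routes = [
    ("0.0.0.0/0",       "qg-1270ddd4-bb"),
    ("10.10.0.0/24",    "qr-b747d7a6-ed"),
    ("10.71.16.0/23",   "qg-1270ddd4-bb"),
    ("192.168.30.0/24", "qr-4742c3a4-a5"),
]

def lookup(dst: str) -> str:
    """Longest-prefix match, as the kernel does inside the namespace."""
    dst_ip = ipaddress.ip_address(dst)
    best = max(
        (ipaddress.ip_network(net) for net, _ in routes
         if dst_ip in ipaddress.ip_network(net)),
        key=lambda n: n.prefixlen,
    )
    return next(iface for net, iface in routes
                if ipaddress.ip_network(net) == best)

# Replies toward VM1's fixed IP leave via the qr port of its network;
# anything else (e.g. www.google.com) falls through to the default route.
assert lookup("192.168.30.4") == "qr-4742c3a4-a5"
assert lookup("8.8.8.8") == "qg-1270ddd4-bb"
```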

Step-7. VM1’s packets are SNATed and leave via the gateway port qg within the namespace.

[[code]]
   -A neutron-l3-agent-PREROUTING -d 10.71.17.81/32 -j DNAT --to-destination 192.168.30.4
   -A neutron-l3-agent-float-snat -s 192.168.30.4/32 -j SNAT --to-source 10.71.17.81
[[/code]]
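The pair of iptables rules above implements a 1:1 NAT between the fixed IP and the floating IP. A minimal sketch of that mapping:

```python
# Minimal model of the two iptables rules above: a 1:1 mapping
# between VM1's fixed IP and its floating IP.
FLOATING_TO_FIXED = {"10.71.17.81": "192.168.30.4"}
FIXED_TO_FLOATING = {v: k for k, v in FLOATING_TO_FIXED.items()}

def snat(src: str) -> str:
    """Outbound: rewrite the fixed source IP to the floating IP."""
    return FIXED_TO_FLOATING.get(src, src)

def dnat(dst: str) -> str:
    """Inbound: rewrite the floating destination IP back to the fixed IP."""
    return FLOATING_TO_FIXED.get(dst, dst)

assert snat("192.168.30.4") == "10.71.17.81"   # -j SNAT --to-source
assert dnat("10.71.17.81") == "192.168.30.4"   # -j DNAT --to-destination
```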

ip netns exec qrouter-0f23c70d-5302-422a-8862-f34486b37b5d ifconfig

[[code]]
lo    Link encap:Local Loopback  
      inet addr:127.0.0.1  Mask:255.0.0.0
      inet6 addr: ::1/128 Scope:Host
      UP LOOPBACK RUNNING  MTU:65536  Metric:1
      RX packets:0 errors:0 dropped:0 overruns:0 frame:0
      TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0 
      RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
qg-1270ddd4-bb Link encap:Ethernet  HWaddr fa:16:3e:5b:36:8c  
      inet addr:10.71.17.8  Bcast:10.71.17.255  Mask:255.255.254.0
      inet6 addr: fe80::f816:3eff:fe5b:368c/64 Scope:Link
      UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
      RX packets:30644 errors:0 dropped:0 overruns:0 frame:0
      TX packets:127 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:0 
      RX bytes:2016118 (2.0 MB)  TX bytes:8982 (8.9 KB)
[[/code]]

Step-8. VM1’s packets finally go out through br-ex; see the physical routing table:

[[code]]
    0.0.0.0         10.71.16.1      0.0.0.0         UG    0      0        0 br-ex
    10.20.0.0       0.0.0.0         255.255.255.0   U     0      0        0 br-fw-admin
    10.71.16.0      0.0.0.0         255.255.254.0   U     0      0        0 br-ex
    192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 br-mgmt
    192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 br-storage
[[/code]]

For packets coming back from the external network to the VM, the reverse path applies.

2.2.2 Traffic between VMs

When talking about traffic between VMs, the actual packet paths differ a lot depending on where the VMs reside and whether they belong to the same tenant. My environment uses Neutron VLAN networking, which provides network isolation, so even VMs belonging to the same tenant cannot communicate directly if they are attached to different networks, unless those networks are connected to the same virtual router.

network-topy-1.PNG

neutron-east-west-pic.png

  • Scenario 1: VM1 and VM2 belong to the same tenant, are located on the same host, and are attached to the same tenant network

In this scenario, traffic from VM1 to VM2 only needs to go through the integration bridge br-int in Host1’s dom0.

  • Scenario 2: VM1 and VM3 belong to the same tenant, are located on different hosts, and are attached to the same tenant network

In this scenario, traffic from VM1 to VM3 goes from Host1 (dom0) over the physical VLAN network to Host2 (dom0); no network node is involved.

  • Scenario 3: VM1 and VM4 belong to the same tenant but are attached to different tenant networks
    • If the two networks are not attached to the same virtual router, VM1 and VM4 cannot reach each other
    • If the two networks are attached to the same virtual router, VM1 and VM4 can reach each other via the network node’s L3 service
  • Scenario 4: the VMs belong to different tenants

In this scenario, the VMs cannot communicate with each other via fixed IPs; they can only communicate via floating IPs.
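The four scenarios can be summarized as a small decision function (a sketch; the VM and network names are hypothetical):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class VM:
    host: str
    tenant: str
    network: str
    router: Optional[str]  # virtual router the VM's network is attached to, if any

def reachable_by_fixed_ip(a: VM, b: VM) -> bool:
    """Decision sketch for the four scenarios described above."""
    if a.tenant != b.tenant:
        return False      # Scenario 4: fixed IPs don't work across tenants
    if a.network == b.network:
        return True       # Scenarios 1 and 2: same L2 segment, host doesn't matter
    # Scenario 3: different networks need a common virtual router
    return a.router is not None and a.router == b.router

vm1 = VM("host1", "tenant1", "net-a", "router1")
vm2 = VM("host1", "tenant1", "net-a", "router1")
vm4 = VM("host2", "tenant1", "net-b", "router1")
vm5 = VM("host2", "tenant2", "net-c", "router2")

assert reachable_by_fixed_ip(vm1, vm2)      # Scenario 1
assert reachable_by_fixed_ip(vm1, vm4)      # Scenario 3, shared router
assert not reachable_by_fixed_ip(vm1, vm5)  # Scenario 4
```

Note that this only decides reachability via fixed IPs; where the traffic actually flows (br-int only, the physical VLAN network, or the network node) still depends on host placement as described above.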

3. Future

Currently the Neutron integration with XenServer requires VLANs to be allocated specifically for the Neutron networks. Neutron can instead use tunnels to remove the need for this VLAN allocation, and since XenServer ships a recent version of OVS in dom0, supporting VXLAN or GRE tunnels should be possible.

We’ll also be improving the Neutron integration to use the native OVS python libraries rather than ovs-vsctl commands, which should give a major performance boost to the control plane.

Watch out for more updates soon!
