Citrix XenServer is a great choice of hypervisor under OpenStack, but there is no native integration between it and Red Hat's RDO packages, which makes setting up an integrated XenServer/RDO environment harder than it should be. This blog post aims to fix that, describing a method for easily setting up CentOS to use XenServer as the hypervisor. The environment used throughout is:

        XenServer: 6.5
        CentOS: 7.0
        OpenStack: Liberty
        Network: Neutron, ML2 plugin, OVS, VLAN

1. Install XenServer

The XenServer integration with OpenStack has some optimizations, which means that only EXT3 storage is supported. When installing XenServer, make sure you select "Optimized for XenDesktop" when prompted. Use XenCenter to check that the SR type is EXT3, as fixing it after creating the VMs will require deleting the VMs and starting again.
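
You can also check from dom0; an EXT3 SR reports type "ext":

    xe sr-list params=name-label,type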

2. Install OpenStack VM

With XenServer, the Nova compute service must run in a virtual machine on the hypervisor that it will be controlling. As we're using CentOS 7.0 for this environment, create a VM using the CentOS 7.0 template in XenCenter. If you want to copy and paste the scripts from the rest of this post, use the name "CentOS_RDO" for this VM. Install CentOS 7.0 in the VM, but shut it down before installing RDO.

2.1 Create network for OpenStack VM

In a single-box environment we need three networks: "Integration network", "External network" and "VM network". If you already have appropriate networks (e.g. a network that gives you external access), simply rename the existing networks with the appropriate name-labels. Note that the helper script provided for some of the later steps in this post relies on these specific name-labels, so if you choose not to use them, please also update the helper script.

You can do this via XenCenter or run the following commands in dom0:

    xe network-create name-label=openstack-int-network
    xe network-create name-label=openstack-ext-network
    xe network-create name-label=openstack-vm-network

2.2 Create virtual network interfaces for OpenStack VM

This step requires the VM to be shut down, as it’s modifying the network setup and the PV tools have not been installed in the guest.

# Find the OpenStack VM created in section 2
vm_uuid=$(xe vm-list name-label=CentOS_RDO minimal=true)

# Add a VIF connected to the VM network
vm_net_uuid=$(xe network-list name-label=openstack-vm-network minimal=true)
next_device=$(xe vm-param-get uuid=$vm_uuid param-name=allowed-VIF-devices | cut -d';' -f1)
vm_vif_uuid=$(xe vif-create device=$next_device network-uuid=$vm_net_uuid vm-uuid=$vm_uuid)
xe vif-plug uuid=$vm_vif_uuid

# Add a VIF connected to the external network
ext_net_uuid=$(xe network-list name-label=openstack-ext-network minimal=true)
next_device=$(xe vm-param-get uuid=$vm_uuid param-name=allowed-VIF-devices | cut -d';' -f1)
ext_vif_uuid=$(xe vif-create device=$next_device network-uuid=$ext_net_uuid vm-uuid=$vm_uuid)
xe vif-plug uuid=$ext_vif_uuid

You can also use the helper script to perform these steps in dom0.

2.3 Configure OpenStack VM/hypervisor communications

Use the HIMN tool (a plugin for XenCenter) to add the host internal management network (HIMN) to the OpenStack VM. This effectively performs the following operations, which could also be performed manually in dom0 or via the helper script.

Note: If running the commands manually, they should be run while the OpenStack VM is shut down.
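
For reference, the manual equivalent follows the same pattern as section 2.2. A sketch, assuming the internal management network still has its default name-label; as the VM is shut down, the VIF does not need plugging and will come up on boot:

vm_uuid=$(xe vm-list name-label=CentOS_RDO minimal=true)
himn_uuid=$(xe network-list name-label="Host internal management network" minimal=true)
next_device=$(xe vm-param-get uuid=$vm_uuid param-name=allowed-VIF-devices | cut -d';' -f1)
himn_vif_uuid=$(xe vif-create device=$next_device network-uuid=$himn_uuid vm-uuid=$vm_uuid)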

Next, set up DHCP on the HIMN network for the OpenStack VM, allowing the OpenStack VM to access its own hypervisor on dom0's static HIMN address (169.254.0.1). To do this, run the helper script in domU.


3. Install RDO

3.1 The RDO Quickstart gives a detailed installation guide; please follow its instructions step by step. This post only points out the steps that need particular attention during the installation.

3.2 Step 3: Run Packstack to install OpenStack

Rather than running Packstack immediately, we need to generate an answer file so that we can tweak the configuration before installing.

Generate the answer file:

packstack --gen-answer-file=<ANSWER_FILE>

After editing the answer file as described below, install the OpenStack services:

packstack --answer-file=<ANSWER_FILE>

The following items in <ANSWER_FILE> should be changed according to your environment; a consolidated example follows the list.

<physnet1:1000:1050>: physnet1 is the physical network name for the VLAN provider and tenant networks. 1000:1050 is the range of VLAN tags on that physical network available for allocation to tenant networks.

<physnet1:br-eth1,phyext:br-ex>: br-eth1 is the OVS bridge for the VM network. br-ex is the OVS bridge for the external network; the neutron L3 agent uses it for external traffic.

<br-eth1:eth1,br-ex:eth2>: eth1 is the OpenStack VM's NIC connected to the VM network. eth2 is the OpenStack VM's NIC connected to the external network.
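
Putting these together, the corresponding entries in <ANSWER_FILE> would look something like this (key names as used by Liberty-era Packstack; double-check them against your generated file):

CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan
CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1000:1050
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1,phyext:br-ex
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1,br-ex:eth2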

4. Configure Nova and Neutron

4.1 Copy the Nova and Neutron plugins to the XenServer host.
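
Nova and Neutron each ship XAPI plugins that must live in dom0's /etc/xapi.d/plugins directory. A sketch of one way to do this, assuming dom0 is reachable at 169.254.0.1 over HIMN and that the plugin paths match the stable/liberty source trees (verify both in your environment):

# On the OpenStack VM: fetch the Liberty sources for nova and neutron
yum install -y git
git clone -b stable/liberty https://github.com/openstack/nova.git
git clone -b stable/liberty https://github.com/openstack/neutron.git

# Copy the dom0 plugins into XAPI's plugin directory and make them executable
scp nova/plugins/xenserver/xenapi/etc/xapi.d/plugins/* root@169.254.0.1:/etc/xapi.d/plugins/
scp neutron/neutron/plugins/ml2/drivers/openvswitch/agent/xenapi/etc/xapi.d/plugins/netwrap root@169.254.0.1:/etc/xapi.d/plugins/
ssh root@169.254.0.1 'chmod +x /etc/xapi.d/plugins/*'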


4.2 Edit /etc/nova/nova.conf to switch the compute driver to XenServer.

ovs_int_bridge=<integration network bridge>
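
For context, a minimal sketch of the relevant nova.conf sections, assuming Liberty option names; the connection settings point at dom0 over HIMN and the values are placeholders for your environment:

[DEFAULT]
compute_driver=xenapi.XenAPIDriver

[xenserver]
connection_url=http://169.254.0.1
connection_username=root
connection_password=<XenServer root password>
vif_driver=nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
ovs_int_bridge=<integration network bridge>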


The integration bridge above can be found from dom0:

xe network-list name-label=openstack-int-network params=bridge

169.254.0.1 is hypervisor dom0's address, which the OpenStack VM can reach via HIMN.

4.3 Install XenAPI Python XML RPC lightweight bindings.

yum install -y python-pip
pip install xenapi
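
To sanity-check that the bindings work and that dom0 is reachable, something like the following can be run on the OpenStack VM (169.254.0.1 and the password are assumptions for your environment):

python - <<'EOF'
import XenAPI  # the pip package "xenapi" installs the XenAPI module

# dom0's HIMN address; replace the password with your XenServer root password
session = XenAPI.Session('http://169.254.0.1')
session.xenapi.login_with_password('root', '<password>')
print(session.xenapi.host.get_all())
session.xenapi.session.logout()
EOF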

4.4 Configure Neutron

Edit /etc/neutron/rootwrap.conf to support using XenServer remotely:

# XenAPI configuration is only required by the L2 agent if it is to
# target a XenServer/XCP compute host's dom0.
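
The comment above comes from the [xenapi] section of rootwrap.conf; fill in the connection settings beneath it so rootwrap can reach dom0. A sketch, assuming HIMN and the XenServer root account:

[xenapi]
xenapi_connection_url=http://169.254.0.1
xenapi_connection_username=root
xenapi_connection_password=<XenServer root password>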

4.5 Restart Nova and Neutron Services
for svc in api cert conductor compute scheduler; do \
    service openstack-nova-$svc restart; \
done

service neutron-openvswitch-agent restart

5. Launch another neutron-openvswitch-agent to talk to Dom0

XenServer has a separation of Dom0 and DomU, and all instances' VIFs are actually managed by Dom0; their corresponding OVS ports are created in Dom0. Thus we must manually start another OVS agent which is in charge of these ports and talks to Dom0 (see the xenserver_neutron diagram).

5.1 Create another configuration file

cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.dom0

Edit openvswitch_agent.ini.dom0 so that the agent targets Dom0's bridges and uses the XenServer rootwrap; the options below live in the [ovs], [agent] and [securitygroup] sections of the Liberty ML2 OVS agent configuration:

[ovs]
integration_bridge = xapi3
bridge_mappings = physnet1:xapi2

[agent]
root_helper = neutron-rootwrap-xen-dom0 /etc/neutron/rootwrap.conf
root_helper_daemon =
minimize_polling = False

[securitygroup]
firewall_driver = neutron.agent.firewall.NoopFirewallDriver


Here xapi3 is the integration bridge (xapiX in the diagram) and xapi2 is the VM network bridge (xapiY in the diagram). The actual bridge names for your environment can be found from dom0:

xe network-list name-label=openstack-int-network params=bridge
xe network-list name-label=openstack-vm-network params=bridge

5.2 Launch neutron-openvswitch-agent


Run the second agent manually against the Dom0 configuration file:

/usr/bin/python2 /usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-dist.conf --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini.dom0 --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent --log-file /var/log/neutron/openvswitch-agent.log.dom0 &

6. Replace cirros guest with one set up to work for XenServer

    Delete the default cirros image, which will not boot under this XenServer configuration:

    nova image-delete cirros

    Then register a VHD-format image with vm_mode=xen so that instances boot as PV guests:

    glance image-create --name cirros --container-format ovf --disk-format vhd --property vm_mode=xen --visibility public --file cirros-0.3.4-x86_64-disk.vhd.tgz

7. Launch an instance and test its connectivity

    source keystonerc_demo

    [root@localhost ~(keystone_demo)]# glance image-list
    +--------------------------------------+--------+
    | ID                                   | Name   |
    +--------------------------------------+--------+
    | 5c227c8e-3cfa-4368-963c-6ebc2f846ee1 | cirros |
    +--------------------------------------+--------+

    [root@localhost ~(keystone_demo)]# neutron net-list
    +--------------------------------------+---------+--------------------------------------+
    | id                                   | name    | subnets                              |
    +--------------------------------------+---------+--------------------------------------+
    | 91c0f6ac-36f2-46fc-b075-6213a241fc2b | private | 3a4eebdc-6727-43e3-b5fe-8760d64c00fb |
    | 7ccf5c93-ca20-4962-b8bb-bff655e29788 | public  | 4e023f19-dfdd-4d00-94cc-dbea59b31698 |
    +--------------------------------------+---------+--------------------------------------+

    nova boot --flavor m1.tiny --image cirros --nic net-id=91c0f6ac-36f2-46fc-b075-6213a241fc2b demo-instance

    [root@localhost ~(keystone_demo)]# neutron floatingip-create public
    Created a new floatingip:
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | fixed_ip_address    |                                      |
    | floating_ip_address |                                      |
    | floating_network_id | 7ccf5c93-ca20-4962-b8bb-bff655e29788 |
    | id                  | 2f0e7c1e-07dc-4c7e-b9a6-64f312e7f693 |
    | port_id             |                                      |
    | router_id           |                                      |
    | status              | DOWN                                 |
    | tenant_id           | 838ec33967ff4f659b808e4a593e7085     |
    +---------------------+--------------------------------------+

    nova add-floating-ip demo-instance <floating-ip-address>

After the above steps, we have successfully booted an instance with a floating IP; running "nova list" will show the instances:
    [root@localhost ~(keystone_demo)]# nova list
    +--------------------------------------+---------------+--------+------------+-------------+-----------+
    | ID                                   | Name          | Status | Task State | Power State | Networks  |
    +--------------------------------------+---------------+--------+------------+-------------+-----------+
    | ac82fcc8-1609-4d34-a4a7-80e5985433f7 | demo-inst1    | ACTIVE | -          | Running     | private=, |
    | f302a03f-3761-48e6-a786-45b324182545 | demo-instance | ACTIVE | -          | Running     | private=, |
    +--------------------------------------+---------------+--------+------------+-------------+-----------+

Test connectivity by pinging the floating IP from the OpenStack VM; you will probably get output like:
    [root@localhost ~(keystone_demo)]# ping
    PING ( 56(84) bytes of data.
    64 bytes from icmp_seq=1 ttl=63 time=1.76 ms
    64 bytes from icmp_seq=2 ttl=63 time=0.666 ms
    64 bytes from icmp_seq=3 ttl=63 time=0.284 ms