Part 2 - How to install OpenStack Havana on Ubuntu - Neutron using the ML2 plugin + Open vSwitch agent configured to use GRE

This post is the second in a multi-part series describing how I installed and configured OpenStack Havana within a test environment. The first post detailed the environment setup and how to install/configure the prerequisites (database and MQ), Keystone, Nova, Glance, and Horizon services. This post will describe how to install and configure Neutron with the ML2 plugin and the Open vSwitch agent. If you haven't read the first post I recommend looking over it quickly, there's some good stuff that's relevant to this post.

Prerequisites


Here's the list of things I did before the OpenStack install:

Install Ubuntu 12.04.2

Personally I have had multiple issues with the 12.04.3 release, so I ended up going back to 12.04.2 for the installation media. Grab the 12.04.2 installation media and install Ubuntu. If you need help with the installation, follow these steps.

Install Ubuntu OpenStack package pre-reqs and update Ubuntu

# apt-get update && apt-get -y install python-software-properties && add-apt-repository -y cloud-archive:havana && apt-get update && apt-get -y upgrade && apt-get -y dist-upgrade && apt-get -y autoremove && reboot

Once the server reboots log back in via SSH or the console and elevate to superuser.

Configure the local networking

The local networking configuration for the havana-wfe controller was simple; the havana-network node's configuration is more complex.

At a minimum the network node requires two network interfaces (eth0 and eth1); if possible, use three (eth0, eth1, and eth2). Static IP addressing is used in the examples below and is recommended but not required. I'm going to walk you through both scenarios, so pick the one that suits you best.

Two-interface scenario

The two-interface scenario uses the first network interface to provide connectivity for OpenStack management and to host VM traffic (specifically the GRE tunnels). The second network interface is used to provide external connectivity to and from remote networks, such as the Internet.

# OpenStack management and VM intra-OpenStack cloud traffic  
auto eth0
iface eth0 inet static
address 192.168.1.111
netmask 255.255.255.0
gateway 192.168.1.1

# VM external access via the L3 agent
auto eth1
iface eth1 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

Three-interface scenario

The three-interface scenario uses the first network interface to provide connectivity for OpenStack management, the second hosts VM traffic (specifically the GRE tunnels), and the third provides external connectivity to and from remote networks, such as the Internet.

# OpenStack management  
auto eth0
iface eth0 inet static
address 192.168.1.111
netmask 255.255.255.0
gateway 192.168.1.1

# VM intra-OpenStack cloud traffic
auto eth1
iface eth1 inet static
address 172.16.0.10
netmask 255.255.255.0

# VM external access via the L3 agent
auto eth2
iface eth2 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

I chose the three-interface scenario. Here's what my /etc/network/interfaces configuration file looks like:

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet static
address 192.168.1.111
netmask 255.255.255.0
gateway 192.168.1.1

# Hosts the VM GRE tunnels
auto eth1
iface eth1 inet static
address 172.16.0.10
netmask 255.255.255.0

# Provides external cloud connectivity via L3 agent
auto eth2
iface eth2 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down

There are a couple of options to sync the changes: you can restart networking...

# /etc/init.d/networking restart

...or restart the server.

# reboot
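
Before moving on, it's worth a quick check that the interfaces came up the way you expect. A minimal sanity check, assuming the three-interface layout above (adjust the interface names for the two-interface scenario): eth0 and eth1 should show their static addresses, and eth2 should be up with no IP address and the PROMISC flag set.

# ip addr show eth0
# ip addr show eth1
# ip addr show eth2
# ip route show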

Install the primary and supporting OpenStack Ubuntu packages

# apt-get install -y neutron-server neutron-plugin-openvswitch-agent neutron-dhcp-agent neutron-l3-agent neutron-metadata-agent openvswitch-switch openvswitch-datapath-dkms ntp python-mysqldb

Configure the supporting services


NTP configuration

All the OpenStack infrastructure VMs should point to the same NTP server. Update the /etc/ntp.conf file to point to the IP address or DNS A record of the primary NTP source and save the file. In my case I'm running NTP on the havana-wfe VM, which uses the IP address 192.168.1.110. My /etc/ntp.conf file looks like this; note that I commented out the NTP pool servers and changed the fallback server to the IP of havana-wfe.

...
# Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
# on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
# more information.
#server 0.ubuntu.pool.ntp.org
#server 1.ubuntu.pool.ntp.org
#server 2.ubuntu.pool.ntp.org
#server 3.ubuntu.pool.ntp.org

# Use Ubuntu's ntp server as a fallback.
server 192.168.1.110
...

Save the file and restart the NTP service to sync the changes.

# service ntp restart
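
To confirm this node is actually syncing against havana-wfe, query the NTP peers; the 192.168.1.110 entry should appear in the list (it can take a few minutes before the reach counter climbs and an asterisk marks it as the selected source).

# ntpq -p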

Enable IP forwarding and disable packet destination filtering

We need to enable IP forwarding (if you want more information on Linux IP forwarding go here). To do that we need to do two things: update the /etc/sysctl.conf file and, so we don't have to reboot, apply the setting to the running kernel immediately.

# sed -i 's/#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf && sysctl net.ipv4.ip_forward=1

We also need to disable packet destination filtering so let's update the /etc/sysctl.conf file again and run sysctl to sync the changes immediately.

# sed -i 's/#net.ipv4.conf.default.rp_filter=1/net.ipv4.conf.default.rp_filter=0/' /etc/sysctl.conf && sysctl net.ipv4.conf.default.rp_filter=0 && sed -i 's/#net.ipv4.conf.all.rp_filter=1/net.ipv4.conf.all.rp_filter=0/' /etc/sysctl.conf && sysctl net.ipv4.conf.all.rp_filter=0
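
You can confirm the running kernel picked up all three settings; ip_forward should report 1 and both rp_filter values should report 0.

# sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
net.ipv4.ip_forward = 1
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.default.rp_filter = 0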

Create an OpenStack credentials file

Just as we did in the first post, you will need an OpenStack credentials file. The easiest way is to copy/SFTP the file from your controller over to this node; in case you don't want to, here are the directions again. We have already created the OpenStack admin user, so make sure the correct credentials are used.

export OS_AUTH_URL=http://192.168.1.110:5000/v2.0  
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=password

Source the file to load the values into your shell's environment.

# source creds_file_name
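
A quick way to confirm the credentials are loaded and valid is to check the environment and request a token from Keystone. This assumes the keystone CLI (python-keystoneclient) is available on this node; if it isn't, run the same check from the controller.

# env | grep OS_
# keystone token-get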

Create the Neutron MySQL database (if you haven't already done so)

If you followed the first post you have already created the database and can skip down to the Neutron configuration section. If you didn't follow the first post (no biggie), you need to do this now. SSH into whatever node is running MySQL and log into MySQL as the root user or another privileged MySQL user.

# mysql -u root -p

Create the database, then create the new user "neutron", set the password, and assign privileges for the new user to the neutron database.

mysql> CREATE DATABASE neutron;  
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'password';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'password';
mysql> FLUSH PRIVILEGES;
mysql> QUIT;
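
If you want to confirm the grants work from the network node, try logging in remotely as the neutron user. This assumes MySQL on 192.168.1.110 is reachable over the network and the 'neutron'@'%' grant above is in place; the neutron database will stay empty until the Neutron services build the schema.

# mysql -u neutron -p -h 192.168.1.110 -e "SHOW DATABASES;"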

Configure Keystone (if you haven't already done so)

If you haven't yet configured Keystone to support Neutron then read on; otherwise head down to the Neutron configuration section. First we need to list the tenant IDs and role IDs; they are needed to create the new neutron user. We care about the service tenant ID and the admin role ID.

# keystone tenant-list && keystone role-list  
+----------------------------------+---------+---------+
| id | name | enabled |
+----------------------------------+---------+---------+
| 62178df3e23040d286a86059216cbfb6 | admin | True |
| 5b4d9fae6e5d4776b8400d6bb1af17a1 | service | True |
+----------------------------------+---------+---------+
+----------------------------------+----------------------+
| id | name |
+----------------------------------+----------------------+
| 3b545aa30a4d4965b76777fb0def3b8d | KeystoneAdmin |
| 31f1ddb2dbad4a1b9560fe5bbce2fe5e | KeystoneServiceAdmin |
| ffe5c15294cb4be99dfd2d41055603f3 | Member |
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ |
| 24b384d1a7164a8dbb60747b7fb42d68 | admin |
+----------------------------------+----------------------+

Now that we have the service tenant ID we can create the neutron user.

# keystone user-create --name=neutron --pass=password --tenant-id=5b4d9fae6e5d4776b8400d6bb1af17a1 --email=neutron@revolutionlabs.net  
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| email | neutron@revolutionlabs.net |
| enabled | True |
| id | e1e5326833684ab3bfefdf5f805cf22a |
| name | neutron |
| tenantId | 5b4d9fae6e5d4776b8400d6bb1af17a1 |
+----------+----------------------------------+

Copy the new neutron user ID from the last step and add the admin role to the neutron user. We also verify that the user was created.

# keystone user-role-add --tenant-id=5b4d9fae6e5d4776b8400d6bb1af17a1 --user-id=e1e5326833684ab3bfefdf5f805cf22a --role-id=24b384d1a7164a8dbb60747b7fb42d68  
# keystone user-role-list --tenant-id=5b4d9fae6e5d4776b8400d6bb1af17a1 --user-id=e1e5326833684ab3bfefdf5f805cf22a
+----------------------------------+----------+----------------------------------+----------------------------------+
| id | name | user_id | tenant_id |
+----------------------------------+----------+----------------------------------+----------------------------------+
| 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | e1e5326833684ab3bfefdf5f805cf22a | 5b4d9fae6e5d4776b8400d6bb1af17a1 |
| 24b384d1a7164a8dbb60747b7fb42d68 | admin | e1e5326833684ab3bfefdf5f805cf22a | 5b4d9fae6e5d4776b8400d6bb1af17a1 |
+----------------------------------+----------+----------------------------------+----------------------------------+

The neutron user has now been created and the admin role has been assigned. Two more things to do: create the Neutron service, then use the new service ID to assign a set of endpoints to the service. Since this is a lab I'm only using the default region. Replace the IP address with the Keystone service API IP address or the DNS A record of the same host.

# keystone service-create --name=neutron --type=network  
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | |
| id | 8f9256fb735c4fd8a7584ea0cbbcaa84 |
| name | neutron |
| type | network |
+-------------+----------------------------------+

# keystone endpoint-create --region=RegionOne --service-id=8f9256fb735c4fd8a7584ea0cbbcaa84 --publicurl=http://192.168.1.110:9696 --internalurl=http://192.168.1.110:9696 --adminurl=http://192.168.1.110:9696  
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://192.168.1.110:9696 |
| id | 858b43c66ed84b409d0149dea3994d71 |
| internalurl | http://192.168.1.110:9696 |
| publicurl | http://192.168.1.110:9696 |
| region | RegionOne |
| service_id | 8f9256fb735c4fd8a7584ea0cbbcaa84 |
+-------------+----------------------------------+
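
If you want to double-check the registrations before moving on, list the services and endpoints back out; the network service and its three URLs should be there.

# keystone service-list
# keystone endpoint-list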

Neutron configuration


Neutron uses several configuration files to run all of its services (neutron-server, neutron-plugin-openvswitch-agent, neutron-dhcp-agent, neutron-l3-agent, neutron-metadata-agent). Here is a breakdown of which files are used by which services; note that we aren't changing the default command-line arguments for the DHCP, L3, or Metadata agents.

neutron-server:
  /etc/neutron/neutron.conf
  /etc/neutron/api-paste.ini
  /etc/neutron/plugins/ml2/ml2_conf.ini
  /etc/default/neutron-server

neutron-plugin-openvswitch-agent:
  /etc/neutron/neutron.conf
  /etc/neutron/api-paste.ini
  /etc/neutron/plugins/ml2/ml2_conf.ini
  /etc/init/neutron-plugin-openvswitch-agent.conf

neutron-dhcp-agent:
  /etc/neutron/neutron.conf
  /etc/neutron/api-paste.ini
  /etc/neutron/dhcp_agent.ini

neutron-l3-agent:
  /etc/neutron/neutron.conf
  /etc/neutron/api-paste.ini
  /etc/neutron/l3_agent.ini

neutron-metadata-agent:
  /etc/neutron/neutron.conf
  /etc/neutron/api-paste.ini
  /etc/neutron/metadata_agent.ini

We will need to make updates to all of the files for the Neutron services to work and to enable the ML2 plugin with the Open vSwitch agent.

neutron.conf

Open up the /etc/neutron/neutron.conf file with your favorite text editor and add or update the following items. Don't save the file until you complete all of them.

Core plugin


Even though the OVSNeutronPlugin is deprecated, it is still listed as the default core plugin. We need to modify the core_plugin property value.

...
core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
...

Advanced service modules


The advanced services (router, load balancers, FWaaS, or VPNaaS) need to be explicitly called out for their modules to be enabled. There are two places where they are defined, but I'm only going to use the router (l3-agent) service for now. Find the service_plugins property and add the following text.

...
service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
...

At the very end of the file there should be a property called service_provider under the [service_providers] section; comment it out.

...
[service_providers]
...
# service_provider=LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

api_paste_config location


Verify that the api_paste_config property is pointing to the correct file location. If the property is uncommented, make sure the full path to the file is there.

...
# Paste configuration file
api_paste_config = /etc/neutron/api-paste.ini
...

Allow overlapping IPs


Locate the allow_overlapping_ips property, uncomment it and change the value to True.

...
allow_overlapping_ips = True
...

RabbitMQ


Find the rabbit_host property, uncomment, then change the value to the IP of your RabbitMQ server.

...
rabbit_host = 192.168.1.110
...

Keystone authentication


Next find the [keystone_authtoken] section and update the properties to point to your Keystone server.

...
[keystone_authtoken]
auth_host = 192.168.1.110
auth_port = 35357
auth_protocol = http
admin_tenant_name = service
admin_user = neutron
admin_password = password
signing_dir = $state_path/keystone-signing
...

Disable the database connection string


The ML2 plugin file takes care of the database connection string now, so either remove the existing connection string or comment it out.

...
[database]
...
# connection = sqlite:////var/lib/neutron/neutron.sqlite
...

Save the /etc/neutron/neutron.conf file and move on to the /etc/neutron/api-paste.ini file.

api-paste.ini

Open the /etc/neutron/api-paste.ini file with your favorite text editor. We need to update the [filter:authtoken] section with the Keystone info and then save the file.

...
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
auth_host = 192.168.1.110
admin_tenant_name = service
admin_user = neutron
admin_password = password
...
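
Before moving on to the ML2 plugin it doesn't hurt to confirm the edits took. A quick grep against both files works (these are just the property names used above; your values will differ if you used different IPs or passwords).

# grep -E "^(core_plugin|service_plugins|api_paste_config|allow_overlapping_ips|rabbit_host)" /etc/neutron/neutron.conf
# grep -E "^(auth_host|admin_tenant_name|admin_user)" /etc/neutron/api-paste.ini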

ML2 configuration

This is where things can get dicey. The ML2 plugin is installed by the neutron-server package and its files live at /usr/share/pyshared/neutron/plugins/ml2. What I wasn't able to find, though, was a configuration file for ML2, so I used devstack to reverse-engineer one. We will have to create the directory structure and the ml2_conf.ini file from scratch, then populate the file with entries.

First create the directory to hold the file.

# mkdir /etc/neutron/plugins/ml2

Next create the configuration file using your favorite text editor and paste in the following text. Make sure you update the local_ip property to the IP address that will host the GRE tunnels and the sql_connection property to point to your MySQL server. We will also change the group owner of the file once it is saved.

# nano /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]
# (ListOpt) List of network type driver entrypoints to be loaded from
# the neutron.ml2.type_drivers namespace.
#
# Example: type_drivers = flat,vlan,gre,vxlan
type_drivers = gre

# (ListOpt) Ordered list of network_types to allocate as tenant
# networks. The default value 'local' is useful for single-box testing
# but provides no connectivity between hosts.
#
# Example: tenant_network_types = vlan,gre,vxlan
tenant_network_types = gre

# (ListOpt) Ordered list of networking mechanism driver entrypoints
# to be loaded from the neutron.ml2.mechanism_drivers namespace.
# Example: mechanism_drivers = arista
# Example: mechanism_drivers = cisco,logger
mechanism_drivers = openvswitch,linuxbridge

[ml2_type_flat]
# (ListOpt) List of physical_network names with which flat networks
# can be created. Use * to allow flat networks with arbitrary
# physical_network names.
#
# flat_networks =
# Example:flat_networks = physnet1,physnet2
# Example:flat_networks = *

[ml2_type_vlan]
# (ListOpt) List of <physical_network>[:<vlan_min>:<vlan_max>] tuples
# specifying physical_network names usable for VLAN provider and
# tenant networks, as well as ranges of VLAN tags on each
# physical_network available for allocation as tenant networks.
#
# network_vlan_ranges =
# Example: network_vlan_ranges = physnet1:1000:2999,physnet2

[ml2_type_gre]
# (ListOpt) Comma-separated list of <tun_min>:<tun_max> tuples enumerating ranges of GRE tunnel IDs that are available for tenant network allocation
tunnel_id_ranges = 1:1000

[ml2_type_vxlan]
# (ListOpt) Comma-separated list of <vni_min>:<vni_max> tuples enumerating
# ranges of VXLAN VNI IDs that are available for tenant network allocation.
#
# vni_ranges =

# (StrOpt) Multicast group for the VXLAN interface. When configured, will
# enable sending all broadcast traffic to this multicast group. When left
# unconfigured, will disable multicast VXLAN mode.
#
# vxlan_group =
# Example: vxlan_group = 239.1.1.1

[database]
sql_connection = mysql://neutron:password@192.168.1.110/neutron

[ovs]
enable_tunneling = True
local_ip = 172.16.0.10

[agent]
tunnel_types = gre
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

Now save the file and let's update the new directory and the new file's group owner.

# chgrp -R neutron /etc/neutron/plugins
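
A quick listing confirms the group ownership change took; the directory and file should now show the neutron group (the exact modes may differ on your system).

# ls -ld /etc/neutron/plugins/ml2
# ls -l /etc/neutron/plugins/ml2/ml2_conf.ini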

The ML2 plugin has been configured; we now need to update the services to use it.

neutron-server startup configuration

Update the /etc/default/neutron-server file to point to the ML2 plugin ini file and save the file.

# defaults for neutron-server
# path to config file corresponding to the core_plugin specified in
# neutron.conf
NEUTRON_PLUGIN_CONFIG="/etc/neutron/plugins/ml2/ml2_conf.ini"

Neutron Open vSwitch agent


The neutron-plugin-openvswitch-agent service also uses the configuration provided by the ML2 plugin ini file. Open its upstart configuration file (/etc/init/neutron-plugin-openvswitch-agent.conf), update the second --config-file argument value to use the /etc/neutron/plugins/ml2/ml2_conf.ini file, and save the file.

...
exec start-stop-daemon --start --chuid neutron --exec /usr/bin/neutron-openvswitch-agent -- --config-file=/etc/neutron/neutron.conf --config-file=/etc/neutron/plugins/ml2/ml2_conf.ini --log-file=/var/log/neutron/openvswitch-agent.log

Both the neutron-server and neutron-plugin-openvswitch-agent services are configured. We now need to configure the unique files for each of the remaining services.

DHCP agent


Open the /etc/neutron/dhcp_agent.ini file with your favorite text editor, verify that the following properties are set correctly, update the file where required, and save it.

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
...
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
...
use_namespaces = True
...

L3 agent


Open the /etc/neutron/l3_agent.ini file with your favorite editor, verify that the following properties are set correctly, update the file where required, and save it.

[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
...
use_namespaces = True
...

Metadata agent


Open the /etc/neutron/metadata_agent.ini file with your favorite editor, verify that the following properties are set correctly, update the file where required, and save it. Make sure that the metadata_proxy_shared_secret value is the same as the neutron_metadata_proxy_shared_secret property's value in the /etc/nova/nova.conf file. The metadata agent uses this value as a shared password when talking to the Nova metadata service.

[DEFAULT]
...
auth_url = http://192.168.1.110:5000/v2.0
auth_region = RegionOne
admin_tenant_name = service
admin_user = neutron
admin_password = password
...
nova_metadata_ip = 192.168.1.110
...
nova_metadata_port = 8775
...
metadata_proxy_shared_secret = helloOpenStack
...
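
The shared secret is easy to get wrong, so it's worth comparing both sides. On whichever node runs nova-api (havana-wfe in my lab), grep the Nova configuration and make sure the value matches the one above.

# grep neutron_metadata_proxy_shared_secret /etc/nova/nova.conf
neutron_metadata_proxy_shared_secret = helloOpenStack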

Create the Open vSwitch bridges


The neutron-plugin-openvswitch-agent expects the integration bridge (br-int) to exist, and the L3 agent needs an external bridge (br-ex). Make sure you add the correct network interface to the br-ex bridge; you want to use whichever interface is configured for promiscuous mode (eth1 in the two-interface scenario, eth2 in the three-interface scenario I chose).

# ovs-vsctl add-br br-int
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth2
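
You can verify the bridges and the port assignment with ovs-vsctl; br-int should have no ports yet and br-ex should list the external interface. The Open vSwitch agent will create the tunnel bridge (br-tun) on its own once it starts.

# ovs-vsctl show
# ovs-vsctl list-ports br-ex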

Restart the services and validate that everything works


At this point all of the Neutron services have been configured and we need to restart the services.

# restart neutron-server && restart neutron-plugin-openvswitch-agent && restart neutron-dhcp-agent && restart neutron-l3-agent && restart neutron-metadata-agent

To validate that all of the Neutron services are running correctly we can use the neutron python client. As with nova-manage service list, look for the :-) faces.

# neutron agent-list
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
| id | agent_type | host | alive | admin_state_up |
+--------------------------------------+--------------------+---------------------------------+-------+----------------+
| 3e14909e-6efd-4047-8685-2926b78d8a58 | DHCP agent | havana-network | :-) | True |
| 42504987-f47f-491d-bbb9-e9e9ec9026b8 | Open vSwitch agent | havana-network | :-) | True |
| 66bc6757-5f42-446e-a158-511e6c271b0c | Open vSwitch agent | nova-compute | :-) | True |
| c6e5f3b5-8a16-409d-a005-eafa1cbb6bf8 | L3 agent | havana-network | :-) | True |
+--------------------------------------+--------------------+---------------------------------+-------+----------------+

Once the Neutron services have been restarted you should be able to log into Horizon. Open a web browser and point it to http://192.168.1.110/horizon, replacing the IP address with the havana-wfe IP or hostname. Hopefully Horizon will log you in and present the dashboard in all of its glory.
