
OpenStack Liberty Neutron Deployment (Part 3 Neutron Config)

In a distributed Neutron setup, the Neutron configuration files must be kept in sync on all nodes running Neutron components. Linux network namespaces must be enabled on these nodes.

neutron.conf

The base configuration file on the control node contains many options. Some of the options used in this deployment (not all!) are shown below:
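
The post does not reproduce the full file; the sketch below shows typical Liberty options for such a control node. The message queue and database values are placeholders (assumptions, not taken from the post):

    [DEFAULT]
    # ML2 is the core plugin; the router service plugin enables Neutron L3 routers
    core_plugin = ml2
    service_plugins = router
    allow_overlapping_ips = True
    auth_strategy = keystone
    rpc_backend = rabbit
    # notify nova about port changes (credentials go in the [nova] section)
    notify_nova_on_port_status_changes = True
    notify_nova_on_port_data_changes = True

    [oslo_messaging_rabbit]
    rabbit_host = <rabbit-host>        # placeholder

    [database]
    connection = <database-url>        # placeholder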

ml2_conf.ini

The ml2 config file can be deployed on all network, compute and control nodes. The only difference is that "local_ip" must be set on the network and compute nodes to the IP address of the local "l3vxlan" interface (see part 2).

The VXLAN VNI range is set to 65537:69999, which is not the default. I prefer this setting because VXLAN VNI values are then easy to distinguish from VLAN IDs, which are in the range 0-4095. This avoids confusion when an error occurs and OpenFlow entries must be debugged.
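
A minimal sketch of how these ml2_conf.ini sections could look, assuming the standard Liberty openvswitch driver names; the l3vxlan address is a per-node placeholder:

    [ml2]
    type_drivers = vlan,vxlan
    tenant_network_types = vxlan
    mechanism_drivers = openvswitch

    [ml2_type_vxlan]
    # non-default range, see above: VNIs do not overlap with VLAN IDs (0-4095)
    vni_ranges = 65537:69999

    [ovs]
    # network and compute nodes only: IP of the local l3vxlan interface (see part 2)
    local_ip = <ip-of-l3vxlan-interface>

    [agent]
    tunnel_types = vxlan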

The vlan type driver requires only one configuration line.

  • vlannet is the internal OpenStack name of the VLAN range; it is also used in the OpenStack API
  • the VLAN range in this deployment is 100:299

To keep the configuration simple, we do not create two configuration lines here, although we only use VLANs 100-101 and 200-201 and would strictly need two mappings; the single line sketched below covers the whole range.
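
As a sketch, the resulting line in the vlan type driver section of ml2_conf.ini, with the stricter two-mapping variant shown as a comment:

    [ml2_type_vlan]
    network_vlan_ranges = vlannet:100:299
    # stricter alternative with two mappings:
    # network_vlan_ranges = vlannet:100:101,vlannet:200:201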

In the ovs section for the openvswitch mechanism driver, the internal OpenStack name vlannet is mapped to an Open vSwitch bridge using one configuration item:

  • vlannet (the OpenStack name) is mapped to br-vlan (by the openvswitch mechanism driver)
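
As a sketch, this is a single item in the ovs section of ml2_conf.ini:

    [ovs]
    bridge_mappings = vlannet:br-vlan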

The l2population mechanism driver is not used; the benefit of using it is too small compared to the risk of breaking the network.

l3_agent.ini

The L3 agent on the network nodes requires two entries, which must be set to an empty string.

By setting gateway_external_network_id and external_network_bridge to an empty string, the L3 agent searches for networks defined as "external" by the admin, and VLANs can be used to connect the routers. No extra bridges are needed.

As we deploy nova-metadata on the network node, metadata_ip is set to 127.0.0.1.
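
A minimal sketch of the resulting l3_agent.ini; the interface driver line is an assumption based on the Open vSwitch setup and is not shown in the post:

    [DEFAULT]
    # empty values: serve all networks marked as external, no dedicated external bridge
    gateway_external_network_id =
    external_network_bridge =
    # assumption: OVS interface driver, matching the Open vSwitch deployment
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    # nova-metadata runs locally on the network node (as described above)
    metadata_ip = 127.0.0.1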

dhcp_agent.ini

The DHCP agent on the network node requires at least:
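
The post does not show the exact lines; a minimal sketch, assuming the standard Liberty Open vSwitch and dnsmasq drivers:

    [DEFAULT]
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
    # serve metadata also on isolated networks without a router
    enable_isolated_metadata = True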

metadata_agent.ini

The neutron metadata agent on the network node requires at least:

As we install nova-metadata on the network node, the listener is on the same node and nova-metadata is reachable at 127.0.0.1.
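
A minimal sketch of metadata_agent.ini; the shared secret is a placeholder and must match the one configured on the nova side:

    [DEFAULT]
    # nova-metadata runs locally on the network node
    nova_metadata_ip = 127.0.0.1
    nova_metadata_port = 8775
    metadata_proxy_shared_secret = <shared-secret>   # placeholder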

nova-metadata.conf

For nova-metadata we enable only the metadata service and listen only on 127.0.0.1.
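
A minimal sketch of the relevant nova options; the shared secret placeholder must match the one in metadata_agent.ini:

    [DEFAULT]
    # run only the metadata API, bound to the loopback interface
    enabled_apis = metadata
    metadata_listen = 127.0.0.1
    metadata_listen_port = 8775

    [neutron]
    service_metadata_proxy = True
    metadata_proxy_shared_secret = <shared-secret>   # placeholder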

Continue reading (part 4)
