OpenStack Liberty Neutron Deployment (Part 3 Neutron Config)

When a distributed Neutron setup is used, many Neutron configuration files must be kept in sync across all nodes running Neutron components. Linux network namespaces must be enabled.


The base configuration file on the Control node contains many options. Only some of them (not all!) are shown below for this deployment:

# does not contain all options !
max_l3_agents_per_router = 2
l3_ha = False
allow_automatic_l3agent_failover = True
allow_overlapping_ips = true
core_plugin = ml2
service_plugins = router,lbaas,vpnaas,metering,qos
force_gateway_on_subnet = true
dhcp_options_enabled = False
dhcp_agents_per_network = 1
router_distributed = False
router_delete_namespaces = True
check_child_processes = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_ipset = True
enable_security_group = True

enable_distributed_routing = False
dont_fragment = True
arp_responder = False

enabled = true
driver =



The ml2 config file can be deployed on all Network, Compute and Control nodes. The only difference is that "local_ip" must be set on Network and Compute nodes to the IP address of the local "l3vxlan" interface (see part 2).

type_drivers = vxlan,local,vlan,flat,geneve
tenant_network_types = vxlan
mechanism_drivers = openvswitch
extension_drivers = port_security

vni_ranges = 65537:69999

network_vlan_ranges = vlannet:100:299

flat_networks = *

# the ovs section is only needed on compute and network nodes
# where the openvswitch and the L2 agent are running
# but it does not hurt, if it is also included on the control node
bridge_mappings = vlannet:br-vlan
tunnel_type = vxlan
tunnel_bridge = br-tun
integration_bridge = br-int
tunnel_id_ranges = 65537:69999
enable_tunneling = True
tenant_network_type = vxlan
###>>>>>>>>> local_ip is only used on compute and network nodes ### 
# local_ip = <ip address of the l3vxlan interface>

tunnel_types = vxlan
l2_population = False

The vxlan vni range is set to 65537:69999, which is not the default value. I prefer this setting because it makes vxlan vni values easy to distinguish from vlan IDs, which are in the range 0-4095. This avoids any confusion when an error occurs and OpenFlow entries must be debugged.
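With the ranges chosen above, any segmentation ID seen while debugging a flow table can be classified unambiguously. A small illustrative sketch (the function name and ranges are taken from this deployment, not from Neutron itself):

```python
# Illustrative sketch: classify a segmentation ID seen while debugging
# OpenFlow entries, using the ranges chosen in this deployment.
VLAN_RANGE = range(0, 4096)        # 12-bit VLAN ID space
VNI_RANGE = range(65537, 70000)    # vni_ranges = 65537:69999

def classify_segment_id(seg_id):
    """Return 'vlan', 'vxlan' or 'unknown' for a segmentation ID."""
    if seg_id in VLAN_RANGE:
        return "vlan"
    if seg_id in VNI_RANGE:
        return "vxlan"
    return "unknown"

print(classify_segment_id(200))    # a vlan from the 100:299 range
print(classify_segment_id(65540))  # a vxlan vni
```

With the default vni ranges, vni values below 4096 are possible and this distinction would be lost.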

The vlan type driver requires only one configuration line.

  • vlannet is the internal Openstack name of the vlan range used also in the Openstack API
  • the vlan range in this deployment is 100:299

Strictly speaking, only the vlans 100-101 and 200-201 are used in this deployment, so two separate ranges could be configured. To keep the configuration simple, we use one range instead of two configuration lines.
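The network_vlan_ranges syntax used above follows the pattern physical_network[:vlan_min:vlan_max]. A small parser sketch may make the format clearer (this helper is illustrative, not Neutron's own code):

```python
def parse_vlan_range(entry):
    """Parse one network_vlan_ranges entry of the form
    <physical_network>[:<vlan_min>:<vlan_max>]."""
    parts = entry.split(":")
    if len(parts) == 1:
        return parts[0], None            # whole vlan space on this physnet
    physnet, vmin, vmax = parts
    vmin, vmax = int(vmin), int(vmax)
    if not (0 < vmin <= vmax < 4095):
        raise ValueError("invalid vlan range: %s" % entry)
    return physnet, (vmin, vmax)

# The single entry used in this deployment:
print(parse_vlan_range("vlannet:100:299"))   # ('vlannet', (100, 299))
```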

In the ovs section for the openvswitch mechanism driver, the internal Openstack name vlannet is mapped to an Open vSwitch bridge using one configuration item:

  • vlannet (the Openstack name) is mapped to br-vlan (by the openvswitch mechanism driver)

The mechanism driver l2population is not used; its benefit is too small compared to the risk of breaking the network.


The L3 agent on the network nodes requires two entries, which must be set to an empty string.

# does not show all necessary options
# very important - set the two following entries to an empty string
# do not leave the default values
gateway_external_network_id =  
external_network_bridge =  
# we use the legacy mode - HA and DVR are broken in Juno and should not be used in production environments
agent_mode = legacy
# nova metadata is deployed only on the network node(s) and listens on
metadata_port = 8775
metadata_ip =
enable_metadata_proxy = True
handle_internal_only_routers = true
router_delete_namespaces = True
# veths should be avoided
ovs_use_veth = false
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = true

By setting gateway_external_network_id and external_network_bridge to empty strings, the L3 agent is no longer tied to one fixed external network or bridge; it serves any network defined as "external" by the admin, and vlans can be used to connect the routers to the outside. No extra bridges are needed.

As we deploy nova_metadata on the network node, the metadata_ip is set to


The dhcp agent on the network node requires at least:

# does not show all necessary options
dhcp_delete_namespaces = True
enable_metadata_network = false
enable_isolated_metadata = true
use_namespaces = true
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
ovs_use_veth = false
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_agent_manager = neutron.agent.dhcp_agent.DhcpAgentWithStateReport


The neutron metadata agent on the network node requires at least:

# does not show all necessary options
metadata_proxy_shared_secret = METADATAsecret
nova_metadata_ip =
verbose = true

As we install nova metadata on the network node, the listener is on the same node and we find nova_metadata on


For nova-metadata we enable only the metadata service and we listen only to

# does not show all necessary options
metadata_host =
metadata_listen =
use_forwarded_for = true
metadata_workers = 4
metadata_listen_port = 8775
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = METADATAsecret
neutron_ovs_bridge = br-int
neutron_use_dhcp = True
linuxnet_interface_driver =
security_group_api = neutron
network_api_class =
# enable only the metadata service
enabled_apis = metadata
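The shared secret ties the Neutron metadata agent and nova-metadata together: the agent signs the instance ID with an HMAC-SHA256 keyed with metadata_proxy_shared_secret and sends it in the X-Instance-ID-Signature header, and nova-metadata recomputes the signature with its own copy of the secret. A sketch of that signing step (the helper name is illustrative):

```python
import hmac
import hashlib

def sign_instance_id(shared_secret, instance_id):
    """Compute the X-Instance-ID-Signature value the Neutron metadata
    agent sends to nova-metadata: HMAC-SHA256 over the instance ID,
    keyed with metadata_proxy_shared_secret."""
    return hmac.new(shared_secret.encode("utf-8"),
                    instance_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

sig = sign_instance_id("METADATAsecret", "some-instance-uuid")
# nova-metadata recomputes the HMAC and compares it to the header value
print(len(sig))  # 64 hex characters
```

This is why the two secrets in the metadata agent and nova config must match exactly; a mismatch results in 403 errors from the metadata service.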

Continue reading (part 4)

Updated: 17/01/2021 — 14:35