When a distributed setup of Neutron is used, several Neutron configuration files must be kept in sync on all nodes running Neutron components. Linux network namespaces must be enabled on these nodes.
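A quick way to verify that network namespaces work on a node is to create and remove a throwaway namespace (the name check-ns is just an example):

# create a test namespace, list it and remove it again
# if all three commands succeed, network namespaces are usable on this node
ip netns add check-ns
ip netns list
ip netns delete check-ns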
neutron.conf
The base configuration file on the Control node contains many options. Only the options relevant for this deployment (not all of them!) are shown below:
#
# does not contain all options !
#
[DEFAULT]
max_l3_agents_per_router = 2
l3_ha = False
allow_automatic_l3agent_failover = True
allow_overlapping_ips = true
#
core_plugin = ml2
service_plugins = router,lbaas,vpnaas,metering,qos
#
force_gateway_on_subnet = true
dhcp_options_enabled = False
dhcp_agents_per_network = 1
#
router_distributed = False
router_delete_namespaces = True
check_child_processes = True

[securitygroup]
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_ipset = True
enable_security_group = True

[agent]
enable_distributed_routing = False
dont_fragment = True
arp_responder = False

[fwaas]
enabled = true
driver = neutron.services.firewall.drivers.linux.iptables_fwaas.IptablesFwaasDriver

[service_providers]
service_provider=LOADBALANCER:Haproxy:neutron_lbaas.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default
service_provider=VPN:openswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default
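After neutron-server and the agents have been restarted with this configuration, a simple sanity check is to list the registered agents (this assumes the admin credentials are sourced in the shell):

# all agents (L3, dhcp, metadata, openvswitch) should be listed and alive (alive column shows ":-)")
neutron agent-list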
ml2_conf.ini
The ml2 config file can be deployed identically on all Network, Compute and Control Nodes. The only difference is that local_ip must be set on the Network and Compute Nodes to the IP address of the local l3vxlan interface (see part 2).
[ml2]
type_drivers = vxlan,local,vlan,flat,geneve
tenant_network_types = vxlan
mechanism_drivers = openvswitch
extension_drivers = port_security

[ml2_type_vxlan]
vni_ranges = 65537:69999

[ml2_type_vlan]
network_vlan_ranges = vlannet:100:299

[ml2_type_flat]
flat_networks = *

# the ovs section is only needed on compute and network nodes
# where the openvswitch and the L2 agent are running
# but it does not hurt, if it is also included on the control node
[ovs]
bridge_mappings = vlannet:br-vlan
tunnel_type = vxlan
tunnel_bridge = br-tun
integration_bridge = br-int
tunnel_id_ranges = 65537:69999
enable_tunneling = True
tenant_network_type = vxlan
###>>>>>>>>> local_ip is only used on compute and network nodes ###
# local_ip = <ip address of the l3vxlan interface>

[agent]
tunnel_types = vxlan
l2_population = False
The vxlan vni range is set to 65537:69999, which is not the default value. I prefer this setting because it makes vxlan vni values easy to distinguish from vlan ids, which are in the range 0 – 4095. This avoids any confusion when an error occurs and Openflow entries must be debugged.
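With this range in place, any segmentation id above 65536 seen while debugging must belong to a vxlan network. As admin, the vni assigned to a tenant network can be checked, for example, like this (the network name is a placeholder):

# provider:network_type shows "vxlan" and provider:segmentation_id shows the assigned vni
neutron net-show <name or id of the tenant network>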
The vlan type driver requires only one configuration line:
- vlannet is the internal Openstack name of the vlan range; this name is also used in the Openstack API
- the vlan range in this deployment is 100:299
To keep the configuration simple, only one range entry is used here. Strictly speaking, since only the vlans 100-101 and 200-201 are used in this deployment, two range entries (vlannet:100:101,vlannet:200:201) would be the more precise mapping.
In the [ovs] section for the openvswitch mechanism driver, the internal Openstack name vlannet is mapped to an Openvswitch bridge using one configuration item (bridge_mappings):
- vlannet (the Openstack name) is mapped to the bridge br-vlan (by the Openvswitch mechanism driver); an example network using this mapping is shown below
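Using this mapping, a provider vlan network could for example be created as admin like this (the name vlan100 and the vlan id 100 are just examples from the range defined above):

# create a shared provider network on vlan 100 of the physical network "vlannet"
neutron net-create vlan100 --shared \
  --provider:network_type vlan \
  --provider:physical_network vlannet \
  --provider:segmentation_id 100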
The l2population mechanism driver is not used; the benefit of using it is too low compared to the risk of breaking the network.
l3_agent.ini
The L3 agent on the network nodes requires two entries, which must be set to an empty string.
#
# does not show all necessary options
#
[DEFAULT]
#
# very important - set the two following entries to an empty string
# do not leave the default values
gateway_external_network_id =
external_network_bridge =
#
# we use the legacy mode - HA and DVR are broken in Juno and should not be used in production environments
agent_mode = legacy
#
# nova metadata is deployed only on the network node(s) and listens on 127.0.0.1
metadata_port = 8775
metadata_ip = 127.0.0.1
enable_metadata_proxy = True
#
handle_internal_only_routers = true
router_delete_namespaces = True
#
# veths should be avoided
ovs_use_veth = false
#
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = true
By setting gateway_external_network_id and external_network_bridge to an empty string, the L3 agent looks up the networks the admin has defined as "external", and vlans can be used as the networks the routers connect to. No extra bridges are needed.
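An external network on one of the vlans of the vlannet range could then, for example, be created like this (vlan 200 and the addresses are just examples):

# create an external network on vlan 200 of the physical network "vlannet"
neutron net-create ext-vlan200 --router:external True \
  --provider:network_type vlan \
  --provider:physical_network vlannet \
  --provider:segmentation_id 200
# add a subnet without dhcp - routers use it for their gateway ports and floating ips
neutron subnet-create ext-vlan200 198.51.100.0/24 --name ext-vlan200-sub \
  --gateway 198.51.100.1 --disable-dhcp \
  --allocation-pool start=198.51.100.10,end=198.51.100.200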
As we deploy nova_metadata on the network node, the metadata_ip is set to 127.0.0.1.
dhcp_agent.ini
The dhcp agent on the network node requires at least:
#
# does not show all necessary options
#
[DEFAULT]
dhcp_delete_namespaces = True
enable_metadata_network = false
enable_isolated_metadata = true
use_namespaces = true
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
ovs_use_veth = false
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_agent_manager = neutron.agent.dhcp_agent.DhcpAgentWithStateReport
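Once the dhcp agent serves a network, it creates one namespace per network on the network node; this can be checked with:

# one qdhcp-<network uuid> namespace per network served by this dhcp agent
ip netns | grep qdhcp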
metadata_agent.ini
The neutron metadata agent on the network node requires at least:
#
# does not show all necessary options
#
[DEFAULT]
metadata_proxy_shared_secret = METADATAsecret
nova_metadata_ip = 127.0.0.1
verbose = true
As we install nova metadata on the network node, the listener is on the same node and nova metadata is reached at 127.0.0.1.
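Whether the complete metadata chain (neutron metadata agent, metadata proxy and nova metadata) works is best tested from inside a running instance, for example:

# run inside an instance: the request to the well known metadata address
# is proxied by neutron to nova metadata on the network node
curl http://169.254.169.254/latest/meta-data/instance-id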
nova-metadata.conf
For nova-metadata we enable only the metadata service and let it listen only on 127.0.0.1:
#
# does not show all necessary options
#
[DEFAULT]
metadata_host = 127.0.0.1
metadata_listen = 127.0.0.1
use_forwarded_for = true
#
metadata_workers = 4
metadata_listen_port = 8775
service_neutron_metadata_proxy = true
neutron_metadata_proxy_shared_secret = METADATAsecret
#
neutron_ovs_bridge = br-int
neutron_use_dhcp = True
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
security_group_api = neutron
network_api_class = nova.network.neutronv2.api.API
#
# enable only the metadata service
#
enabled_apis = metadata
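Whether nova metadata is really bound only to the loopback address can be verified on the network node, e.g.:

# the metadata api must listen on 127.0.0.1:8775 only, not on 0.0.0.0
netstat -ltpn | grep 8775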