Openstack Networking (Type driver vlan and Openvswitch)

The most challenging task when starting to work with Openstack is setting up the networking. There is the physical network infrastructure used by the hardware components of the cloud, and the virtual network infrastructure used by the tenants of the cloud. The virtual network infrastructure must run on, i.e. be mapped to, the physical network infrastructure.

When using the Openvswitch to attach virtual network interfaces to the node-internal switching layer, the mechanism driver "openvswitch" must be used. The Openstack documentation shows a few use cases for setting up the ovs bridges on compute and network nodes. Depending on the network type (vlan, vxlan, flat, …), different setups are suggested in the Openstack documentation. Many people take these setups as the only supported ones, but this is not true. Openstack networking using the Openvswitch can be set up with only one interface for all customer traffic, even when vxlan, vlan and flat networking are used on the same compute or network node at the same time.

I recommend setting up the basic OS-layer networking in the same way on both compute and network nodes.

The following applies to the Openstack release Icehouse; Juno still needs to be checked.

The bridge br-int

br-int is the default name for the core bridge used on compute and network nodes. On the compute nodes, all VMs are connected directly or indirectly to br-int. DHCP servers are also connected to br-int. Router uplinks from L3 agents may be connected to br-int when no external bridge is configured. Router links to customer networks are always connected to br-int.

If tunneling is used, the tunneling bridge br-tun is connected to br-int. This connection is managed by Openstack neutron and uses an Openvswitch patch port.

If Vlan or Flat network types are used, another bridge, e.g. br-eth1, is connected to br-int. This connection is also managed by Openstack neutron and uses a Linux veth pair, which is a serious performance bottleneck.
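Both connections can be inspected on a running node. The port names below (patch-tun, int-br-eth1, phy-br-eth1) are the defaults created by the Icehouse Openvswitch agent and may differ in other setups; this is an inspection sketch, not part of the required configuration:

```shell
# list the ports neutron attached to br-int
ovs-vsctl list-ports br-int
# patch-tun is the ovs patch port towards br-tun
ovs-vsctl get interface patch-tun type
# int-br-eth1 is the br-int end of the veth pair towards br-eth1;
# its peer phy-br-eth1 is attached to br-eth1
ip link show int-br-eth1
```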

Type driver Vlan

When vlans in the physical infrastructure should be available in Openstack, the type driver vlan must be used. The Openstack default setup is shown in the following figure:

Infrastructure setup using the type driver vlan and one physical interface

On the operating system layer, the following commands must be executed before starting any Openstack component.

#
# the integration bridge
#
ovs-vsctl add-br br-int
#
# the bridge to attach the vlans
#
ovs-vsctl add-br br-eth1
#
# add eth1 to br-eth1.
# By default, a port on ovs is a dot1q trunk port and
# all vlans are allowed. This is OK, if the physical switch restricts the vlan range.
ovs-vsctl add-port br-eth1 eth1
#

In Ubuntu, these commands can be integrated into /etc/network/interfaces (-> article Boot integration of the Openvswitch in Ubuntu).
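A minimal sketch of such an integration, assuming the ifupdown helpers shipped with the Ubuntu openvswitch-switch package are installed (syntax as described in that package's documentation; details may differ between releases):

```
# /etc/network/interfaces (fragment)
allow-ovs br-int
iface br-int inet manual
    ovs_type OVSBridge

allow-ovs br-eth1
iface br-eth1 inet manual
    ovs_type OVSBridge
    ovs_ports eth1

allow-br-eth1 eth1
iface eth1 inet manual
    ovs_bridge br-eth1
    ovs_type OVSPort
```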

Assume that the physical switch provides the vlan range 100-199 to be used by Openstack. The physical switchport must then be dot1q tagged, and the valid range of vlans must be restricted to 100-199 on the switch. Why? In the case of an Openstack neutron failure or error, the Openflow rules mapping the local vlans to the global vlans on br-int might be wrong. The result would be a mixing of local and global vlan IDs.
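Whether these mapping rules are in place can be checked with ovs-ofctl; the vlan rewriting shows up as mod_vlan_vid actions in the flow tables (an inspection sketch; the exact output format varies by release):

```shell
# flows on br-int: global vlans are rewritten to node-local vlans
ovs-ofctl dump-flows br-int
# flows on the vlan bridge: node-local vlans are rewritten back to global vlans
ovs-ofctl dump-flows br-eth1
```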
The ml2 config may look like:

[ml2]
type_drivers=vxlan,local,vlan
mechanism_drivers=openvswitch
#
[ml2_type_vlan]
# this tells Openstack that the internal name "physnet1" provides the vlan range 100-199
network_vlan_ranges = physnet1:100:199
#
[ovs]
# this tells Openstack 
#   * to use the openvswitch bridge br-eth1
#   * that br-eth1 is mapped to the internal name "physnet1" - the allowed vlan range is taken from the section ml2_type_vlan
bridge_mappings = physnet1:br-eth1
#
# "physnet1" is referenced by the neutron command line, when creating neutron provider networks 
# or the tenant network type is vlan
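As a sketch, a provider network on vlan 101 of physnet1 could be created like this (the network name and subnet range are made up for the example; the Icehouse-era neutron CLI is assumed):

```shell
# create a provider network mapped to vlan 101 on physnet1
neutron net-create vlan101-net \
  --provider:network_type vlan \
  --provider:physical_network physnet1 \
  --provider:segmentation_id 101
# attach a subnet so VMs can get addresses
neutron subnet-create vlan101-net 192.0.2.0/24 --name vlan101-subnet
```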

This configuration tells Openstack and the Openvswitch mechanism driver that

  • br-eth1 is provided by the admin to Openstack as a bridge to access Vlans
  • br-eth1 is mapped to the Openstack internal name "physnet1"
  • the Vlan range 100-199 is accessible via br-eth1
  • Openflow rules are deployed to br-eth1 to map between the node local vlans on br-int and the global vlans on br-eth1
  • the Openvswitch mechanism driver creates the dot1q tagged connection between br-int and br-eth1 during startup

Openstack does not make any assumptions on how br-eth1 is connected to the physical switch. The connection configuration shown in the Openstack documentation is just the simple case: one dot1q tagged link (eth1) between br-eth1 and the physical switch.

Type driver vlan and two uplinks

The following figure shows a setup using two tagged links on the bridge attaching the physical links. In this example, the bridge is named br-vlan, because there is no naming dependency between the bridge and a physical interface. A name like br-vlan, which points to the vlan type driver, should be preferred in any configuration over a name like br-eth1, which points to the interface eth1.

Infrastructure setup using the type driver vlan and two physical interfaces and two switches

We assume that the vlans 100-199 should be transported on eth1 and the vlans 200-299 on eth2. On the operating system layer, the following commands must be executed before starting any Openstack component.

#
# the integration bridge
#
ovs-vsctl add-br br-int
#
# the bridge to attach the vlans
#
ovs-vsctl add-br br-vlan
#
# add eth1 and eth2 to br-vlan.
# By default, a port on ovs is a dot1q trunk port and all vlans are
# allowed on eth1 and eth2. This is OK, if the physical switch restricts the vlan range.
ovs-vsctl add-port br-vlan eth1
ovs-vsctl add-port br-vlan eth2
#
# the vlan range on eth1 or eth2 can be restricted on the ovs...
# but all vlans must be listed - ranges are not possible - very ugly
#
# ovs-vsctl add-port br-vlan eth1 trunks=100,101,102,103,104,....
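Since ranges are not accepted, the comma-separated list can at least be generated instead of typed out, e.g. with GNU seq (a sketch; the ovs-vsctl lines restrict the trunks on the ports added above):

```shell
# build "100,101,...,199" for eth1 and "200,201,...,299" for eth2
TRUNKS_ETH1=$(seq -s, 100 199)
TRUNKS_ETH2=$(seq -s, 200 299)
# restrict the allowed vlans on the existing ports
# ovs-vsctl set port eth1 trunks="$TRUNKS_ETH1"
# ovs-vsctl set port eth2 trunks="$TRUNKS_ETH2"
```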

The ml2 config may look like:

[ml2]
type_drivers=vxlan,local,vlan
mechanism_drivers=openvswitch
#
[ml2_type_vlan]
# this tells Openstack that the internal name "physnet1" provides the vlan range 100-299
network_vlan_ranges = physnet1:100:299
#
[ovs]
# this tells Openstack
#   * to use the openvswitch bridge br-vlan
#   * that br-vlan is mapped to the internal name "physnet1" - the allowed vlan
#     range is taken from the section ml2_type_vlan
bridge_mappings = physnet1:br-vlan
# 
# "physnet1" is referenced by the neutron command line, when creating neutron provider networks 
# or the tenant network type is vlan

One can see in the ML2 configuration that there is NO reference to the physical ethernet interfaces eth1 and eth2. Openstack does not care how the external connection is set up on the bridge providing vlans for the vlan type driver.

Untagged Link

The link between br-vlan and the physical switch need not be tagged, as shown in the next figure. This is not a very useful setup in practice, but it demonstrates another possibility.

Infrastructure setup using the type driver vlan, one physical interface and an untagged link

eth1 is not tagged. On br-vlan, one dummy vlan (888 in our example) is used to map eth1 to a vlan which can be referenced by Openstack. It is a good idea to use the same vlan ID on br-vlan as on the external switch, if the switch is using vlans. On the operating system layer, the following commands must be executed before starting any Openstack component.

#
# the integration bridge
#
ovs-vsctl add-br br-int
#
# the bridge to attach the vlans
#
ovs-vsctl add-br br-vlan
#
# add eth1 to br-vlan. 
ovs-vsctl add-port br-vlan eth1 tag=888
# tag=888 does not create a dot1q tagged link -- it creates an access port and
# assigns this port to vlan 888. eth1 is untagged !

Openstack requires a vlan on br-vlan when using the type driver vlan - we use 888. The ml2 config then allows only this one vlan, and the vlan ID 888 must be used when creating a network in Openstack. This example only makes sense if a single (shared) provider network is used to attach the VMs.
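Creating that single shared provider network could look like this (a sketch; the network name is made up and the Icehouse-era neutron CLI is assumed):

```shell
# one shared provider network on the dummy vlan 888
neutron net-create shared-net --shared \
  --provider:network_type vlan \
  --provider:physical_network physnet1 \
  --provider:segmentation_id 888
```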

Conclusion

There is no need to use just one network interface when using the vlan type driver. More than one interface might be used on the uplink side of br-vlan. These interfaces might be tagged or untagged.

The fact that Openstack does not care how br-vlan is connected to the external network leads to another interesting solution for connecting br-vlan to the outside world. More to come…

Updated: 17/01/2021 — 13:17