Switching Performance – Chaining OVS bridges

Open vSwitch (OVS) on a Linux system can be configured with multiple bridges. Multiple OVS bridges behave like independent local switches; this is how OVS provides switch virtualization.

It is possible to chain multiple OVS bridges on one system. OpenStack Neutron chains bridges this way in its default networking setup.

If Neutron is configured to use VXLAN or GRE tunnels, the integration bridge br-int is connected to the tunnel bridge br-tun using two Open vSwitch patch ports. If Neutron is configured for flat or VLAN networking, br-int is connected to the bridge holding the physical NIC (br-eth1 is used here as an example) using a Linux veth pair. This setup is shown in the following drawing.

(Drawing: Openstack-Node, br-int connected to br-tun and br-eth1)
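
As a minimal sketch (not taken from the Neutron code, just an illustration of the same wiring), the patch port connection between br-int and br-tun can be created like this; the port names patch-tun and patch-int follow Neutron's usual naming:

# create the two bridges
ovs-vsctl --may-exist add-br br-int
ovs-vsctl --may-exist add-br br-tun
# create one patch port on each bridge and point them at each other
ovs-vsctl add-port br-int patch-tun -- set Interface patch-tun type=patch options:peer=patch-int
ovs-vsctl add-port br-tun patch-int -- set Interface patch-int type=patch options:peer=patch-tun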

The question now is: what is the performance loss when multiple OVS bridges are chained using OVS patch ports or Linux veth pairs?

Testing method

In a previous article (Switching Performance – Connecting Linux Network Namespaces) I showed the performance of different methods to interconnect Linux network namespaces. The same procedure is used here to check the performance of chained Open vSwitch bridges.

Two Linux network namespaces with attached IP interfaces are used as the source and destination of the traffic. The test tool is iperf, run with a varying number of threads. TSO and the other offloading features of the virtual NICs are switched on and off. A setup with three OVS bridges is shown in the drawing below.

(Drawing: OVS-Chaining test setup)

The number of chained bridges ranges from 1 to 9.
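
The following sketch shows how such a test can be wired up for a chain of three bridges; the interface names, addresses and iperf options are my own choices, not the original test scripts (the chain itself is built with the functions shown below, here with the bridge prefix ovsbr):

# two namespaces acting as traffic source and destination
ip netns add ns1
ip netns add ns2

# attach each namespace to one end of the bridge chain with a veth pair
ip link add veth-ns1 type veth peer name veth-ns1-br
ip link add veth-ns2 type veth peer name veth-ns2-br
ip link set veth-ns1 netns ns1
ip link set veth-ns2 netns ns2
ovs-vsctl add-port ovsbr-1 veth-ns1-br
ovs-vsctl add-port ovsbr-3 veth-ns2-br
ip link set dev veth-ns1-br up
ip link set dev veth-ns2-br up

# assign addresses and bring the namespace interfaces up
ip netns exec ns1 ip addr add 10.1.1.1/24 dev veth-ns1
ip netns exec ns2 ip addr add 10.1.1.2/24 dev veth-ns2
ip netns exec ns1 ip link set dev veth-ns1 up
ip netns exec ns2 ip link set dev veth-ns2 up

# switch offloading features (TSO/GSO/GRO) off or on for a test run
ip netns exec ns1 ethtool -K veth-ns1 tso off gso off gro off
ip netns exec ns2 ethtool -K veth-ns2 tso off gso off gro off

# run iperf, e.g. with four parallel threads for 30 seconds
ip netns exec ns2 iperf -s &
ip netns exec ns1 iperf -c 10.1.1.2 -P 4 -t 30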

The OVS patch ports between the bridges are created using the following commands:

function f_create_ovs_chain_patch {
    local BRIDGEPREFIX=$1
    local NUMOFOVS=$2
    #
    # create the switches
    for I in $(seq 1 $NUMOFOVS)
    do
      BNAMEI="$BRIDGEPREFIX-$I"
      ovs-vsctl add-br $BNAMEI
      if [ $I -gt 1 ]
      then
        let K=I-1
        BNAMEK="$BRIDGEPREFIX-$K"
        PNAMEI="patch-CONNECT-$I$K"
        PNAMEK="patch-CONNECT-$K$I"
        # need to connect this bridge to the previous bridge
        ovs-vsctl add-port $BNAMEI $PNAMEI -- set Interface $PNAMEI type=patch options:peer=$PNAMEK
        ovs-vsctl add-port $BNAMEK $PNAMEK -- set Interface $PNAMEK type=patch options:peer=$PNAMEI
      fi
    done
}
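
A usage example, assuming the bridge name prefix ovsbr (my own choice):

# create a chain of three bridges ovsbr-1, ovsbr-2, ovsbr-3
f_create_ovs_chain_patch ovsbr 3
# verify the bridges and their patch ports
ovs-vsctl show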

The Linux veth pairs between the bridges are created using the following commands:

function f_create_ovs_chain_veth {
    local BRIDGEPREFIX=$1
    local NUMOFOVS=$2
    local VETHPREFIX="ovschain"
    echo "*** Creating interfaces"
    #
    # create the switches
    local I
    for I in $(seq 1 $NUMOFOVS)
    do
      BNAMEI="$BRIDGEPREFIX-$I"
      ovs-vsctl add-br $BNAMEI
      if [ $I -gt 1 ]
      then
        let K=I-1
        BNAMEK="$BRIDGEPREFIX-$K"
        PNAMEI="$VETHPREFIX-$I$K"
        PNAMEK="$VETHPREFIX-$K$I"
        # need to connect this bridge to the previous bridge
        ip link add $PNAMEI type veth peer name $PNAMEK
        ovs-vsctl add-port $BNAMEI $PNAMEI
        ovs-vsctl add-port $BNAMEK $PNAMEK
        ip link set dev $PNAMEI up
        ip link set dev $PNAMEK up
      fi
    done
}
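
The veth variant is used the same way; the following teardown sketch (my own addition, not part of the original scripts) removes a chain of three bridges again:

# create a chain of three bridges connected with veth pairs
f_create_ovs_chain_veth ovsbr 3
ovs-vsctl show

# teardown: deleting a bridge removes its ports from OVS, and deleting
# one side of a veth pair removes the peer interface as well
for I in 1 2 3; do ovs-vsctl --if-exists del-br ovsbr-$I; done
for P in 21 32; do ip link del ovschain-$P 2>/dev/null; done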

It must be ensured that no iptables rules are active when running tests using veth pairs.
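
A quick way to check this (and, on a dedicated test machine, to clear the rules) could look like this:

# list the currently active rules in the filter and nat tables
iptables -S
iptables -t nat -S
# on a dedicated test machine the filter table can simply be flushed
# iptables -F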

Results

The chart below shows the test results on my Linux desktop (i5-2550 CPU @ 3.30 GHz, 32 GB RAM, kernel 3.13, OVS 2.0.1, Ubuntu 14.04). The values measured for one OVS bridge are used as the baseline for the comparison.

(Chart: OVS-perf-1c-4t)

The chart shows that chaining OVS bridges with OVS patch ports behaves well. When one iperf thread is running in each namespace, the test consumes 1.8 CPU cores: 1.0 on the sender side and 0.8 on the receiver side. With four or more threads, all four CPU cores run at 100% usage. The throughput is, within the measurement precision, independent of the number of chained OVS bridges, at least up to the eight tested.

Connecting OVS bridges using veth pairs behaves quite differently. The numbers for two bridges show a performance loss of around 10% when running with one or two iperf threads. With more threads, the veth pairs perform very badly. This might partly be a result of not having enough CPU cores, but compared to the OVS patch ports these numbers are poor.
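
The CPU figures quoted above can be watched during a run with the standard sysstat tools; this is a sketch of one way to do it, the exact measurement method used for the numbers is an assumption:

# per-core utilization, sampled every second
mpstat -P ALL 1
# CPU usage of the running iperf sender and receiver processes
pidstat -p $(pgrep -d, iperf) 1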

I am looking for servers with more CPU cores and sockets to run the tests on single and dual socket Xeons (16 or 20 cores per socket, including hyperthreading), but I have no access to such systems.

Conclusion

My conclusions from these tests are:

  • use OVS patch ports (in OpenStack Juno the connection between br-int and the bridges for VLANs no longer uses veths, which is very good!)
  • do not use Linux veth pairs

OpenStack Neutron Juno still uses the veth method for the hybrid driver.
