After the operating system and the OpenStack software have been deployed successfully, the OpenStack admin must allocate the external networks and the shared flat networks.
Allocate the floating pools (external networks)
The neutron commands to create the two external networks for floating pools are:
# run with admin credentials
#
# "vlannet" is defined in ml2_conf.ini and provides
# the mapping information needed by the admin when creating the networks
NETNAME=vlannet
# we use google as the DNS server 8.8.8.8
#
##################
#
# Pool 1 198.18.0.1/20 on vlan 100
EXTNET=floating-198-18-0
VLAN=100
# Allocate the broadcast domain
neutron net-create --provider:network_type vlan --provider:physical_network=$NETNAME \
  --router:external true --provider:segmentation_id $VLAN ${EXTNET}
#
# get the id of the created network
NID=$(neutron net-external-list -f csv | grep $EXTNET | cut -d ',' -f 1 | sed 's/"//g')
# Assign the IP pool to the broadcast domain
neutron subnet-create --allocation-pool start=198.18.1.0,end=198.18.14.255 --ip-version 4 --gateway 198.18.0.1 \
  --disable-dhcp --name $EXTNET --dns-nameserver 8.8.8.8 $NID 198.18.0.0/20
#
###################
#
# Pool 2 198.18.16.1/20 on vlan 101
EXTNET=floating-198-18-16
VLAN=101
neutron net-create --provider:network_type vlan --provider:physical_network=$NETNAME \
  --router:external true --provider:segmentation_id $VLAN ${EXTNET}
#
# get the id of the created network
NID=$(neutron net-external-list -f csv | grep $EXTNET | cut -d ',' -f 1 | sed 's/"//g')
# Assign the IP pool to the broadcast domain
neutron subnet-create --allocation-pool start=198.18.17.0,end=198.18.30.255 --ip-version 4 --gateway 198.18.16.1 \
  --disable-dhcp --name $EXTNET --dns-nameserver 8.8.8.8 $NID 198.18.16.0/20
#
# list the external networks
#
neutron net-external-list
+--------------------------------------+---------------------+-----------------------------------------------------+
| id                                   | name                | subnets                                             |
+--------------------------------------+---------------------+-----------------------------------------------------+
| f79385f6-e878-4450-9ed9-e906f6985149 | floating-198-18-0   | 7b9a75c2-fbbc-455b-9aa7-1a1bf286571e 198.18.0.0/20   |
| 97d1c4c7-c5a2-4399-9d12-cf9bf6bef739 | floating-198-18-16  | 3af17d23-8229-4022-a49f-f8b41939adc9 198.18.16.0/20  |
+--------------------------------------+---------------------+-----------------------------------------------------+
The net-create commands use the following arguments:
- --provider:network_type vlan maps the network to the ML2 type driver vlan
- --provider:physical_network=$NETNAME (NETNAME=vlannet) tells the vlan mechanism driver (openvswitch) to use "vlannet", which is mapped to the bridge br-vlan
- --provider:segmentation_id $VLAN maps the network to VLAN $VLAN on br-vlan
- --router:external true tells OpenStack Neutron that this is a floating pool. This allows non-admin users to attach routers to this network and allows the use of floating IP addresses. It prohibits non-admin tenants from attaching VMs to this network.
- the last argument is the name of the network
The subnet-create commands use the following arguments:
- --allocation-pool start=198.18.x.x,end=198.18.y.y is the range of IP addresses that may be used by routers and floating IP addresses
- --ip-version 4 tells OpenStack that this is an IPv4 subnet
- --gateway 198.18.a.a is the IP address of the physical router on the corresponding VLAN in the physical infrastructure
- --disable-dhcp disables DHCP on this network; it would be useless, as no VMs are deployed on this network
- --dns-nameserver 8.8.8.8 sets the nameserver
- $NID is the ID of the network previously created with the neutron net-create command
- the last argument is the CIDR of the created subnet
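As a side note, newer OpenStack releases ship the unified openstack client, which can create the same objects. The following is only a sketch of an equivalent for Pool 1; the option names are taken from recent python-openstackclient releases and may differ between versions:
# sketch only: the same floating pool created with the unified openstack client
# (assumption: option names as in recent python-openstackclient releases)
openstack network create floating-198-18-0 \
    --external \
    --provider-network-type vlan \
    --provider-physical-network vlannet \
    --provider-segment 100
openstack subnet create floating-198-18-0 \
    --network floating-198-18-0 \
    --subnet-range 198.18.0.0/20 \
    --allocation-pool start=198.18.1.0,end=198.18.14.255 \
    --gateway 198.18.0.1 \
    --no-dhcp \
    --dns-nameserver 8.8.8.8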
Allocate the flat pools (to attach VMs without NAT)
Two flat pools should be provided. These two shared networks can be used by all tenants:
#
# run with admin credentials
#
# "vlannet" is defined in ml2_conf.ini and provides
# the mapping information needed by the admin when creating the networks
NETNAME=vlannet
# we use google as the DNS server 8.8.8.8
#
####################
#
# Pool 1 198.19.1.1/24 on vlan 200
EXTNET=flat-198-19-1
VLAN=200
# Allocate the broadcast domain
neutron net-create ${EXTNET} --provider:network_type vlan --provider:physical_network=$NETNAME \
  --router:external false --provider:segmentation_id $VLAN --shared
#
# get the id of the created network
NID=$(neutron net-list -f csv | grep $EXTNET | cut -d ',' -f 1 | sed 's/"//g')
# Assign the IP pool to the broadcast domain
neutron subnet-create --allocation-pool start=198.19.1.100,end=198.19.1.199 --ip-version 4 \
  --no-gateway --host-route destination=0.0.0.0/0,nexthop=198.19.1.1 \
  --enable-dhcp --name $EXTNET --dns-nameserver 8.8.8.8 $NID 198.19.1.0/24
#
#####################
#
# Pool 2 198.19.2.1/24 on vlan 201
EXTNET=flat-198-19-2
VLAN=201
# Allocate the broadcast domain
neutron net-create ${EXTNET} --provider:network_type vlan --provider:physical_network=$NETNAME \
  --router:external false --provider:segmentation_id $VLAN --shared
#
# get the id of the created network
NID=$(neutron net-list -f csv | grep $EXTNET | cut -d ',' -f 1 | sed 's/"//g')
# Assign the IP pool to the broadcast domain
neutron subnet-create --allocation-pool start=198.19.2.100,end=198.19.2.199 --ip-version 4 \
  --no-gateway --host-route destination=0.0.0.0/0,nexthop=198.19.2.1 \
  --enable-dhcp --name $EXTNET --dns-nameserver 8.8.8.8 $NID 198.19.2.0/24
#
# list all networks
neutron net-list
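Before going through the individual options, it is worth verifying that the host routes, the DHCP setting and the allocation pools ended up as intended; a quick check (output omitted here):
# quick check of the flat subnet settings (host_routes, enable_dhcp, allocation_pools)
neutron subnet-show flat-198-19-1
neutron subnet-show flat-198-19-2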
The net-create commands use the following arguments:
- --provider:network_type vlan maps the network to the ML2 type driver vlan
- --provider:physical_network=$NETNAME (NETNAME=vlannet) tells the vlan mechanism driver (openvswitch) to use "vlannet", which is mapped to the bridge br-vlan
- --provider:segmentation_id $VLAN maps the network to VLAN $VLAN on br-vlan
- --router:external false (the default setting) tells OpenStack Neutron that this is not a floating pool. VMs may be attached to this network.
- --shared sets the shared flag, so all tenants may use this network
- the last argument is the name of the network
The subnet-create commands use the following arguments:
- --allocation-pool start=198.19.x.x,end=198.19.y.y is the range of IP addresses that may be used on this subnet
- --ip-version 4 tells OpenStack that this is an IPv4 subnet
- --no-gateway disables the default route for this subnet. This is necessary because the gateway of this network is a physical router, and the physical router cannot provide a metadata service. By default, Neutron provides the metadata service on a router created by the L3 agent. The option --no-gateway has the side effect that a static route to 169.254.0.0/16 is inserted by the DHCP agent when it provides the network configuration for VMs. This makes the metadata service available on the flat network via the DHCP agent, which runs a metadata service in the DHCP network namespace.
- --host-route destination=0.0.0.0/0,nexthop=198.19.a.a establishes a default route that is distributed to the VMs on this subnet. Since we want a default route and the standard "--gateway" option cannot be used, we insert the default route with the host-route option.
- --enable-dhcp enables DHCP on this network
- --dns-nameserver 8.8.8.8 sets the nameserver
- $NID is the ID of the network previously created with the neutron net-create command
- the last argument is the CIDR of the created subnet
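Since these networks are shared, any tenant can now attach a VM directly to them and reach it without NAT. A minimal sketch (the image and flavor names are placeholders):
# sketch: boot a VM directly on the shared flat network
# "cirros" and "m1.tiny" are placeholder image/flavor names
NID=$(neutron net-list -f csv | grep flat-198-19-1 | cut -d ',' -f 1 | sed 's/"//g')
nova boot --image cirros --flavor m1.tiny --nic net-id=$NID vm-on-flat
On the network node, ip netns should then show a qdhcp-<network-id> namespace for each flat network; this namespace hosts the DHCP server and the metadata service mentioned above.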
Open vSwitch commands
The Open vSwitch userland tools provide the commands needed to inspect the configuration and the flows:
ovs-appctl dpif/show: show the interfaces and their port numbers on all bridges
ovs-ofctl dump-flows <bridge name>: show all OpenFlow rules on the given bridge
ovs-dpctl dump-flows: show all datapath flows
ovsdb-tool show-log: show all configuration commands that have been sent to Open vSwitch by the userland tools
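To get a quick overview of all three bridges used in this deployment, the commands above can be wrapped in a small loop; this is just a convenience sketch:
# convenience sketch: show all ports, then dump the OpenFlow rules of every bridge in this setup
ovs-appctl dpif/show
for BR in br-vlan br-int br-tun; do
    echo "=== OpenFlow rules on ${BR} ==="
    ovs-ofctl dump-flows ${BR}
done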
Current state
Now the infrastructure has been deployed. Let's take a look at the network node and the compute node.
The network configuration on the network node and the compute node is nearly the same. The bridge br-uplink is not used by OpenStack.
br-vlan
br-vlan is used by OpenStack Neutron to implement the mapping between node-local VLAN IDs and global VLAN IDs. Two OpenFlow rules show up on br-vlan [ovs-ofctl dump-flows br-vlan]:
cookie=0x0, duration=16s, table=0, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=2,in_port=2 actions=drop
cookie=0x0, duration=16s, table=0, n_packets=0, n_bytes=0, idle_age=2, hard_age=65534, priority=1 actions=NORMAL
What is connected to port 2 on br-vlan? ovs-appctl dpif/show tells us:
<a few lines have been dropped>
    br-vlan:
        br-vlan 65534/8: (internal)
        patch-to-uplink 1/none: (patch: peer=patch-to-vlan)
        phy-br-vlan 2/none: (patch: peer=int-br-vlan)
<a few lines have been dropped>
Port 2 is the patch port between br-int and br-vlan (2/none), which is set up by OpenStack. Any traffic entering br-vlan from br-int via the patch port is therefore dropped; all other traffic is forwarded as on a switch with MAC learning.
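To cross-check which name belongs to OpenFlow port 2, ovs-ofctl show lists every port of a bridge together with its OpenFlow port number (a verification sketch; the exact output format depends on the Open vSwitch version):
# verification sketch: list the ports of br-vlan with their OpenFlow port numbers
ovs-ofctl show br-vlan
# list the attached ports by name only
ovs-vsctl list-ports br-vlan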
br-int
br-int is the integration bridge. In this example deployment, all network ports of VMs, router namespaces, DHCP namespaces, LBaaS namespaces, etc. are connected to this bridge. The following OpenFlow rules show up on br-int:
cookie=0x0, duration=16s, table=0, n_packets=51, n_bytes=1433, idle_age=2, hard_age=65534, priority=2,in_port=1 actions=drop
cookie=0x0, duration=16s, table=0, n_packets=29, n_bytes=572, idle_age=23, hard_age=65534, priority=1 actions=NORMAL
cookie=0x0, duration=16s, table=23, n_packets=0, n_bytes=0, idle_age=65534, hard_age=65534, priority=0 actions=drop
What is connected to port 1 on br-int?
<a few lines have been dropped>
    br-int:
        br-int 65534/7: (internal)
        int-br-vlan 1/none: (patch: peer=phy-br-vlan)
        patch-tun 2/none: (patch: peer=patch-int)
<a few lines have been dropped>
Port 1 is the patch port between br-int and br-vlan (1/none), which is set up by OpenStack. Any traffic entering br-int from br-vlan via the patch port is therefore dropped; all other traffic is forwarded as on a switch with MAC learning.
br-tun
The article OpenStack Neutron using VXLAN discusses the OpenFlow rules on br-tun. OpenStack Liberty/Juno introduced a few changes (e.g. other or additional OpenFlow tables), but these differ only slightly from OpenStack Havana. The article will be updated when OpenStack Mitaka has been released.