OpenStack Liberty Neutron Deployment (Part 1: Overview)

Deploying OpenStack is a complex task. The most important task at the beginning is to figure out what you (or the company you are working for) want to implement, with OpenStack as the tool that provides infrastructure resources in a data center. Will the OpenStack based cloud be used by your internal customers (the tenants) through a web or CLI based API, or will the cloud stay invisible to the tenants? In the latter case, the cloud is only used by the IT department to make their working life easier. But this "easy" has a high price: the added complexity.

If you are looking at OpenStack from the perspective of an enterprise or a service provider, you must be aware that OpenStack does not fully support classic data center infrastructures provisioned by the tenants. There are large feature gaps in OpenStack's virtual networking. If you do not want Amazon-style networking with NAT in your internal network, but instead want to keep a non-NAT internal network with unique IP addresses, be aware that OpenStack Liberty does not support this model when tenants create their own networks. OpenStack has (as of Feb/2016) no classic (enterprise or service provider grade) IP network management; it only has IP address management for networks created by the cloud admin or the tenants. If your internal tenants do not have API access to the cloud, your IT admins can mitigate this by using admin rights to create a network infrastructure with unique IP addresses.

In this article I show a deployment of OpenStack Liberty using only free software. The focus is OpenStack's networking service, Neutron. HA for the OpenStack management components will not be covered, and no block or object storage will be used.

I assume that a reasonably recent operating system (e.g. Ubuntu 14.04) is used. Network namespaces must be supported by the kernel, and the Open vSwitch version should be >= 2.4.
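
A quick sanity check on each node might look like this (a minimal sketch; the commands assume a Ubuntu-like system with iproute2 and Open vSwitch installed):

    # verify that the kernel supports network namespaces
    ip netns add ns-selftest && ip netns delete ns-selftest && echo "namespaces OK"

    # verify the Open vSwitch version (should be >= 2.4)
    ovs-vsctl --version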

Minimal OpenStack Installation

There are many example deployments for OpenStack on the web, but there are other ways to deploy the networking of OpenStack (Juno/Kilo and Liberty) than the ones usually shown.

OpenStack requires control software. This control software is deployed on one node (oscontrol, the OpenStack controller, physical or virtual). Many OpenStack deployments provide the API for the tenants on the control node as well; this is a very bad idea. Tenants must never talk to the control node directly! The API used by the tenants will be provided by a second node (apihost, physical or virtual). On the other hand, the admins of the cloud should not use the public API endpoint to manage the cloud; the API endpoint for the admins can run on the control node.
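
Keystone's service catalog can reflect this split. As a hedged sketch (the host names oscontrol and apihost are the placeholders used in this article, the ports are the Keystone defaults), the Identity endpoints might be registered like this:

    openstack endpoint create --region RegionOne \
      identity public http://apihost:5000/v2.0
    openstack endpoint create --region RegionOne \
      identity internal http://oscontrol:5000/v2.0
    openstack endpoint create --region RegionOne \
      identity admin http://oscontrol:35357/v2.0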

OpenStack also requires compute nodes to provide virtual compute resources for the tenants. And finally there are the network nodes, which provide network support functions. Compute and network nodes must be deployed on physical hardware. OpenStack itself does not require network nodes to build a cloud; whether to use network nodes is a decision everyone has to make before deploying a cloud. The decision might be influenced by the number of physical machines available to build your cloud, by the security requirements of your company, by the structure of your IT organization, and more.

The deployment shown in this article gives you the freedom to use dedicated compute and network nodes or combined nodes serving both roles. Using network nodes helps to assign operational responsibilities to different operations teams, and at the beginning of your OpenStack journey it makes it easier to understand which OpenStack component is doing what.

The following figure shows which software component is installed on which node.

[Figure: OpenStack minimal setup]

As noted above, I will not cover HA for the management components.

OpenStack requires at least one management network for internal communication between the nodes. This may be the same network that is used for system management of the infrastructure.

Customers/tenants must never use the internal management network to access any systems. It is good practice to use a separate network for access to the OpenStack dashboard and the public API endpoints.

One network is used to carry the overlay traffic; in this example VXLAN is used.

In our example, we use multiple networks to attach tenant VMs or tenant routers to the public/external side.

Many OpenStack deployments are missing a clear mapping of functions to networks. This leads to confusion, because such a setup connects "everything to everything" and packets will somehow find their way. A per-node interface layout following the separation described above is sketched below.
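
As a hedged sketch (interface names and addresses are invented for illustration, not taken from this deployment), the separation might map to the physical interfaces of a node on Ubuntu 14.04 like this:

    # /etc/network/interfaces (excerpt), one NIC per function
    # management network (internal OpenStack communication)
    auto eth0
    iface eth0 inet static
        address 10.0.0.11
        netmask 255.255.255.0

    # overlay network (VXLAN tunnel endpoint)
    auto eth1
    iface eth1 inet static
        address 10.0.1.11
        netmask 255.255.255.0

    # external/public network: attached to an OVS bridge, no IP address on the node itself
    auto eth2
    iface eth2 inet manual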

The OpenStack Controller

The OpenStack controller (control node) runs all software components required to manage the cloud. The control node must not be accessible by any tenant; it is the central administration point! These components are:

  • mariadb, the database
  • rabbitmq, the message bus
  • memcached, used by the other software deployed on the host
  • Keystone to handle AAA (at least two of the three A's): tokens, users, groups, roles, endpoints, …
  • apache2 to provide the frontend for Keystone. Since OpenStack Liberty, Keystone needs a web server to provide the API endpoint.
  • the Glance services to handle the base boot images for VMs. If you have a large cloud deployment, it is not a good idea to deploy the Glance storage backend on the control node.
  • the Cinder services to provide block storage (ignored here). Cinder requires a storage backend, which is usually not deployed on the control node.
  • Heat to provide orchestration (ignored here)
  • several Designate services to provide DNSaaS. Only the management components run on the control node. The bind backend to be used by tenants must not run on the control node.
  • several Nova services: the API endpoint as well as nova-cert, nova-scheduler, nova-consoleauth and nova-conductor. The metadata service is not provided on the control node.
  • Neutron, or to be exact, only the neutron-server service

The Neutron server on the control node does not forward any packets from or to VMs. It is the central component that steers all other Neutron components in the cloud, and the communication between the central Neutron server and the other components runs over the RabbitMQ message bus.
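
As a hedged sketch of how this looks in configuration (host names, passwords and the pymysql driver are assumptions; the option names are standard Liberty settings), the core of /etc/neutron/neutron.conf on the control node might contain:

    [DEFAULT]
    core_plugin = ml2
    service_plugins = router
    rpc_backend = rabbit
    auth_strategy = keystone

    [database]
    connection = mysql+pymysql://neutron:NEUTRON_DBPASS@oscontrol/neutron

    [oslo_messaging_rabbit]
    rabbit_host = oscontrol
    rabbit_userid = openstack
    rabbit_password = RABBIT_PASS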

A secure cloud should authenticate and encrypt all internal communication. This is possible for all OpenStack components talking to RabbitMQ and works well. Connections to MariaDB might also be encrypted (not tested), and the KVM console access (originating from nova-spiceproxy) might be encrypted as well (this failed in my tests).
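
For the RabbitMQ side, a hedged sketch of the relevant oslo.messaging options in each service's configuration file (the certificate paths are assumptions) might be:

    [oslo_messaging_rabbit]
    rabbit_use_ssl = True
    rabbit_port = 5671
    kombu_ssl_ca_certs = /etc/ssl/certs/internal-ca.pem
    kombu_ssl_certfile = /etc/ssl/certs/neutron-client.pem
    kombu_ssl_keyfile = /etc/ssl/private/neutron-client.key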

I do not cover Ceilometer here. Do not deploy Ceilometer and its database on the control node; both services have high CPU and disk I/O requirements!

The API Node

The API node is the host that provides the API access for the tenants to interact with the cloud management components. The following components are used:

  • apache2, the web server
  • memcached, used by the other software deployed on the host
  • bind9, used by Designate as the backend. The bind server is also used by tenants to resolve DNS queries.
  • the dashboard (Horizon), which provides users a graphical interface to access and provision cloud resources
  • the nova-spiceproxy, which is the frontend for the graphical console of VMs on compute nodes (you may also use the VNC proxy)
  • in a secure setup, the tenants should not be able to contact the API endpoints on the control node directly. A proxy component should handle the connections between the tenants and the API endpoints; an Apache reverse proxy setup might be used (see the sketch after this list).
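
Such a reverse proxy might look like the following hedged sketch (host names and the Neutron API port 9696 are taken as examples for this deployment; the directives are standard Apache mod_proxy configuration):

    # /etc/apache2/sites-available/neutron-api-proxy.conf (sketch, requires mod_proxy and mod_proxy_http)
    <VirtualHost *:9696>
        ProxyPreserveHost On
        ProxyPass        / http://oscontrol:9696/
        ProxyPassReverse / http://oscontrol:9696/
    </VirtualHost>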

A Compute Node

The number of services needed on a compute node is low:

  • The virtual switch openvswitch
  • The Linux virtualization software kvm (qemu)
  • The Nova component nova-compute to manage kvm
  • The Neutron component neutron-openvswitch-agent to manage the openvswitch

The basic networking setup of a compute node is discussed below.
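
As a hedged sketch (the IP address is invented for illustration; the option names are standard Liberty ML2/OVS settings), the Open vSwitch agent on a compute node might be configured for VXLAN like this in its agent configuration file (ml2_conf.ini or openvswitch_agent.ini, depending on the packaging):

    [ovs]
    # IP of this node on the overlay network, used as the VXLAN tunnel endpoint
    local_ip = 10.0.1.11

    [agent]
    tunnel_types = vxlan

    [securitygroup]
    enable_security_group = True
    firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver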

A Network Node

The setup of a network node requires the following services:

  • The virtual switch openvswitch
  • The Neutron component neutron-openvswitch-agent to manage the openvswitch
  • The Neutron component neutron-l3-agent, which provides the L3 tenant routers. Using Linux network namespaces, many independent routers can be deployed on one network node
  • The Neutron component neutron-dhcp-agent provides the DHCP servers for all networks
  • The Neutron component neutron-metadata-agent provides the web service proxy used by VMs to get VM metadata from the nova metadata service.
  • The Nova component nova-api-metadata, which provides access to the Nova metadata service for the Neutron metadata proxy. Default installations run the Nova metadata service on the control node. Ensure that mariadb has enough TCP sessions available for database connections; the default of 256 is quite low…
  • The load balancer haproxy used for lbaas
  • The Neutron component neutron-lbaas-agent to manage haproxy

The basic networking setup of a network node is discussed below.
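
A hedged sketch of the corresponding agent configuration files (secrets are placeholders; the options are standard Liberty settings, and nova-api-metadata is assumed to run locally on the network node as described above):

    # /etc/neutron/l3_agent.ini
    [DEFAULT]
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

    # /etc/neutron/dhcp_agent.ini
    [DEFAULT]
    interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

    # /etc/neutron/metadata_agent.ini
    [DEFAULT]
    nova_metadata_ip = 127.0.0.1
    metadata_proxy_shared_secret = METADATA_SECRET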

The Cloud Network Offerings

For this deployment we assume that the cloud provides the following network offerings for tenants:

  • per-tenant routing (the tenant attaches a router to one of the floating pools and connects his internal networks to the router)
  • multiple (two in this example) floating pools to attach tenant routers; the gateway of each floating pool is a physical router
  • multiple (two in this example) flat networks (shared by all tenants), where the router for these networks is a physical router. Only VMs can be attached to these networks

We assume that the tenant networks are encapsulated in VXLAN.
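
To make these offerings concrete, here is a hedged sketch using the Liberty neutron CLI (network names, provider labels and IP ranges are invented for illustration): the admin creates the external networks, and the tenant creates his own network and router.

    # admin: one of the floating pools (external network, gateway is a physical router)
    neutron net-create public1 --router:external \
      --provider:network_type flat --provider:physical_network extnet1
    neutron subnet-create public1 198.51.100.0/24 --name public1-subnet \
      --disable-dhcp --gateway 198.51.100.1

    # admin: one of the shared flat networks for direct VM attachment
    neutron net-create shared1 --shared \
      --provider:network_type flat --provider:physical_network extnet2
    neutron subnet-create shared1 203.0.113.0/24 --name shared1-subnet \
      --gateway 203.0.113.1

    # tenant: own VXLAN network plus a router attached to a floating pool
    neutron net-create mynet
    neutron subnet-create mynet 10.10.0.0/24 --name mynet-subnet
    neutron router-create myrouter
    neutron router-gateway-set myrouter public1
    neutron router-interface-add myrouter mynet-subnet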

Continue reading (part 2)
