Learn OpenStack Networking concepts, architecture, and basic and advanced neutron and nova command-line interface (CLI) commands.
The Networking service, code-named neutron, provides an API that lets you define network connectivity and addressing in the cloud. The Networking service enables operators to leverage different networking technologies to power their cloud networking. The Networking service also provides an API to configure and manage a variety of network services ranging from L3 forwarding and NAT to load balancing, edge firewalls, and IPsec VPN.
For a detailed description of the Networking API abstractions and their attributes, see the OpenStack Networking API v2.0 Reference.
If you use the Networking service, do not run the Compute nova-network service (like you do in traditional Compute deployments). When you configure networking, see the Compute-related topics in this Networking section.
Networking is a virtual network service that provides a powerful API to define the network connectivity and IP addressing that devices from other services, such as Compute, use.
The Compute API has a virtual server abstraction to describe computing resources. Similarly, the Networking API has virtual network, subnet, and port abstractions to describe networking resources.
Resource | Description
---|---
Network | An isolated L2 segment, analogous to VLAN in the physical networking world.
Subnet | A block of v4 or v6 IP addresses and associated configuration state.
Port | A connection point for attaching a single device, such as the NIC of a virtual server, to a virtual network. Also describes the associated network configuration, such as the MAC and IP addresses to be used on that port.

Networking resources
To configure rich network topologies, you can create and configure networks and subnets and instruct other OpenStack services like Compute to attach virtual devices to ports on these networks.
In particular, Networking supports each project having multiple private networks and enables projects to choose their own IP addressing scheme, even if those IP addresses overlap with those that other projects use.
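For example, a minimal sequence of openstack CLI commands that creates a network, a subnet on it, and a port (the names and CIDR are illustrative) might look like this:
$ openstack network create net1
$ openstack subnet create --network net1 --subnet-range 10.0.0.0/24 subnet1
$ openstack port create --network net1 port1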
The Networking service:
Enables advanced cloud networking use cases, such as building multi-tiered web applications and enabling migration of applications to the cloud without changing IP addresses.
Offers flexibility for administrators to customize network offerings.
Enables developers to extend the Networking API. Over time, the extended functionality becomes part of the core Networking API.
OpenStack Networking supports SSL for the Networking API server. By default, SSL is disabled but you can enable it in the neutron.conf file.
Set these options to configure SSL:
use_ssl = True
Enables SSL on the networking API server.
ssl_cert_file = PATH_TO_CERTFILE
Certificate file that is used when you securely start the Networking API server.
ssl_key_file = PATH_TO_KEYFILE
Private key file that is used when you securely start the Networking API server.
ssl_ca_file = PATH_TO_CAFILE
Optional. CA certificate file that is used when you securely start the Networking API server. This file verifies connecting clients. Set this option when API clients must authenticate to the API server by using SSL certificates that are signed by a trusted CA.
tcp_keepidle = 600
The value of TCP_KEEPIDLE, in seconds, for each server socket when starting the API server. Not supported on OS X.
retry_until_window = 30
Number of seconds to keep retrying to listen.
backlog = 4096
Number of backlog requests with which to configure the socket.
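Taken together, a neutron.conf fragment that enables SSL might look like the following sketch; the certificate and key paths are placeholders for your own files:
[DEFAULT]
use_ssl = True
ssl_cert_file = /etc/neutron/ssl/server.crt
ssl_key_file = /etc/neutron/ssl/server.key
ssl_ca_file = /etc/neutron/ssl/ca.crt
tcp_keepidle = 600
retry_until_window = 30
backlog = 4096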
Load-Balancer-as-a-Service (LBaaS) enables Networking to distribute incoming requests evenly among designated instances. This distribution ensures that the workload is shared predictably among instances and enables more effective use of system resources. Use one of these load-balancing methods to distribute incoming requests:

Round robin
Rotates requests evenly between multiple instances.

Source IP
Requests from a unique source IP address are consistently directed to the same instance.

Least connections
Allocates requests to the instance with the least number of active connections.
Feature | Description
---|---
Monitors | LBaaS provides availability monitoring of pool members through health monitors.
Management | LBaaS is managed using a variety of tool sets. The REST API is available for programmatic administration and scripting. Users perform administrative management of load balancers through either the CLI or the dashboard.
Connection limits | Ingress traffic can be shaped with connection limits. This feature allows workload control, and can also assist with mitigating DoS (Denial of Service) attacks.
Session persistence | LBaaS supports session persistence by ensuring incoming requests are routed to the same instance within a pool of multiple instances. LBaaS supports routing decisions based on cookies and source IP address.
For information on Firewall-as-a-Service (FWaaS), please consult the Networking Guide.
Allowed-address-pairs enables you to specify mac_address and ip_address(cidr) pairs that pass through a port regardless of subnet. This enables the use of protocols such as VRRP, which floats an IP address between two instances to enable fast data plane failover.
Currently, only the ML2, Open vSwitch, and VMware NSX plug-ins support the allowed-address-pairs extension.
Basic allowed-address-pairs operations.
Create a port with a specified allowed address pair:
$ neutron port-create net1 --allowed-address-pairs type=dict \
  list=true mac_address=MAC_ADDRESS,ip_address=IP_CIDR
Update a port by adding allowed address pairs:
$ neutron port-update PORT_UUID --allowed-address-pairs type=dict \
  list=true mac_address=MAC_ADDRESS,ip_address=IP_CIDR
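As a concrete illustration, suppose two instances share a VRRP virtual IP of 10.0.0.200 (the address is made up). You could allow that address through an existing port as follows; if mac_address is omitted, the port's own MAC address is used:
$ neutron port-update PORT_UUID --allowed-address-pairs type=dict \
  list=true ip_address=10.0.0.200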
The VPNaaS extension enables OpenStack projects to extend private networks across the internet.
A VPN service is a parent object that associates a VPN with a specific subnet and router. Only one VPN service object can be created for each router and each subnet. However, each VPN service object can have any number of IP security connections.
The Internet Key Exchange (IKE) policy specifies the authentication and encryption algorithms to use during phase one and two negotiation of a VPN connection. The IP security policy specifies the authentication and encryption algorithm and encapsulation mode to use for the established VPN connection. Note that you cannot update the IKE and IPSec parameters for live tunnels.
You can set parameters for site-to-site IPsec connections, including peer CIDRs, MTU, authentication mode, peer address, DPD settings, and status.
The current implementation of the VPNaaS extension provides:
Site-to-site VPN that connects two private networks.
Multiple VPN connections per project.
IKEv1 policy support with 3des, aes-128, aes-256, or aes-192 encryption.
IPSec policy support with 3des, aes-128, aes-192, or aes-256 encryption, sha1 authentication, ESP, AH, or AH-ESP transform protocol, and tunnel or transport mode encapsulation.
Dead Peer Detection (DPD) with hold, clear, restart, disabled, or restart-by-peer actions.
The VPNaaS driver plugin can be configured in the neutron configuration file. You can then enable the service.
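A minimal sketch of the relevant neutron.conf entries for the reference IPsec driver follows; the exact driver class path and provider name vary by release, so treat these values as illustrative:
service_plugins = vpnaas

[service_providers]
service_provider = VPN:openswan:neutron_vpnaas.services.vpn.service_drivers.ipsec.IPsecVPNDriver:default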
Before you deploy Networking, it is useful to understand the Networking services and how they interact with the OpenStack components.
Networking is a standalone component in the OpenStack modular architecture. It is positioned alongside OpenStack components such as Compute, Image service, Identity, or Dashboard. Like those components, a deployment of Networking often involves deploying several services to a variety of hosts.
The Networking server uses the neutron-server daemon to expose the Networking API and enable administration of the configured Networking plug-in. Typically, the plug-in requires access to a database for persistent storage (also similar to other OpenStack services).
If your deployment uses a controller host to run centralized Compute components, you can deploy the Networking server to that same host. However, Networking is entirely standalone and can be deployed to a dedicated host. Depending on your configuration, Networking can also include the following agents:
Agent | Description
---|---
plug-in agent (neutron-*-agent) | Runs on each hypervisor to perform local vSwitch configuration. The agent that runs depends on the plug-in that you use. Certain plug-ins do not require an agent.
dhcp agent (neutron-dhcp-agent) | Provides DHCP services to project networks. Required by certain plug-ins.
l3 agent (neutron-l3-agent) | Provides L3/NAT forwarding to provide external network access for VMs on project networks. Required by certain plug-ins.
metering agent (neutron-metering-agent) | Provides L3 traffic metering for project networks.
These agents interact with the main neutron process through RPC (for example, RabbitMQ or Qpid) or through the standard Networking API. In addition, Networking integrates with OpenStack components in a number of ways:
Networking relies on the Identity service (keystone) for the authentication and authorization of all API requests.
Compute (nova) interacts with Networking through calls to its standard API. As part of creating a VM, the nova-compute service communicates with the Networking API to plug each virtual NIC on the VM into a particular network.
The dashboard (horizon) integrates with the Networking API, enabling administrators and project users to create and manage network services through a web-based GUI.
OpenStack Networking uses the NSX plug-in to integrate with an existing VMware vCenter deployment. When installed on the network nodes, the NSX plug-in enables an NSX controller to centrally manage configuration settings and push them to managed network nodes. Network nodes are considered managed when they are added as hypervisors to the NSX controller.
The diagrams below depict some VMware NSX deployment examples. The first diagram illustrates the traffic flow between VMs on separate Compute nodes, and the second diagram between two VMs on a single compute node. Note the placement of the VMware NSX plug-in and the neutron-server service on the network node. The green arrow indicates the management relationship between the NSX controller and the network node.
For configuration options, see Networking configuration options in the Configuration Reference. These sections explain how to configure specific plug-ins.
Edit the /etc/neutron/neutron.conf file and add this line:
core_plugin = bigswitch
In the /etc/neutron/neutron.conf file, set the service_plugins option:
service_plugins = neutron.plugins.bigswitch.l3_router_plugin.L3RestProxy
Edit the /etc/neutron/plugins/bigswitch/restproxy.ini file for the plug-in and specify a comma-separated list of controller_ip:port pairs:
server = CONTROLLER_IP:PORT
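For example, with two Big Switch controllers (the addresses are illustrative):
server = 10.10.10.21:80,10.10.10.22:80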
For database configuration, see Install Networking Services in the Installation Tutorials and Guides. (The link defaults to the Ubuntu version.)
Restart the neutron-server service to apply the settings:
# service neutron-server restart
Install the Brocade-modified Python netconf client (ncclient) library, which is available at https://github.com/brocade/ncclient:
$ git clone https://github.com/brocade/ncclient
As root, run this command:
# cd ncclient;python setup.py install
Edit the /etc/neutron/neutron.conf file and set the following option:
core_plugin = brocade
Edit the /etc/neutron/plugins/brocade/brocade.ini file for the Brocade plug-in and specify the admin user name, password, and IP address of the Brocade switch:
[SWITCH]
username = ADMIN
password = PASSWORD
address = SWITCH_MGMT_IP_ADDRESS
ostype = NOS
For database configuration, see Install Networking Services in any of the Installation Tutorials and Guides in the OpenStack Documentation index. (The link defaults to the Ubuntu version.)
Restart the neutron-server service to apply the settings:
# service neutron-server restart
The instructions in this section refer to the VMware NSX-mh platform, formerly known as Nicira NVP.
Install the NSX plug-in:
# apt-get install neutron-plugin-vmware
Edit the /etc/neutron/neutron.conf file and set this line:
core_plugin = vmware
Example neutron.conf file for NSX-mh integration:
core_plugin = vmware
rabbit_host = 192.168.203.10
allow_overlapping_ips = True
To configure the NSX-mh controller cluster for OpenStack Networking, locate the [default] section in the /etc/neutron/plugins/vmware/nsx.ini file and add the following entries:
To establish and configure the connection with the controller cluster, you must set some parameters, including the NSX-mh API endpoints and access credentials. You can optionally specify settings for HTTP timeouts, redirects, and retries in case of connection failures:
nsx_user = ADMIN_USER_NAME
nsx_password = NSX_USER_PASSWORD
http_timeout = HTTP_REQUEST_TIMEOUT # (seconds) default 75 seconds
retries = HTTP_REQUEST_RETRIES # default 2
redirects = HTTP_REQUEST_MAX_REDIRECTS # default 2
nsx_controllers = API_ENDPOINT_LIST # comma-separated list
To ensure correct operations, the nsx_user user must have administrator credentials on the NSX-mh platform.
A controller API endpoint consists of the IP address and port for the controller; if you omit the port, port 443 is used. If multiple API endpoints are specified, it is up to the user to ensure that all these endpoints belong to the same controller cluster. The OpenStack Networking VMware NSX-mh plug-in does not perform this check, and results might be unpredictable.
When you specify multiple API endpoints, the plug-in takes care of load balancing requests on the various API endpoints.
The UUID of the NSX-mh transport zone that should be used by default when a project creates a network. You can get this value from the Transport Zones page of the NSX-mh manager. Alternatively, the transport zone identifier can be retrieved by querying the NSX-mh API: /ws.v1/transport-zone
default_tz_uuid = TRANSPORT_ZONE_UUID
default_l3_gw_service_uuid = GATEWAY_SERVICE_UUID
Ubuntu packaging currently does not update the neutron init script to point to the NSX-mh configuration file. Instead, you must manually update /etc/default/neutron-server to add this line:
NEUTRON_PLUGIN_CONFIG=/etc/neutron/plugins/vmware/nsx.ini
For database configuration, see Install Networking Services in the Installation Tutorials and Guides.
Restart neutron-server to apply settings:
# service neutron-server restart
The neutron NSX-mh plug-in does not implement initial re-synchronization of Neutron resources. Therefore resources that might already exist in the database when Neutron is switched to the NSX-mh plug-in will not be created on the NSX-mh backend upon restart.
Example nsx.ini file:
[DEFAULT]
default_tz_uuid = d3afb164-b263-4aaa-a3e4-48e0e09bb33c
default_l3_gw_service_uuid=5c8622cc-240a-40a1-9693-e6a5fca4e3cf
nsx_user=admin
nsx_password=changeme
nsx_controllers=10.127.0.100,10.127.0.200:8888
To debug nsx.ini configuration issues, run this command from the host that runs neutron-server:
# neutron-check-nsx-config PATH_TO_NSX.INI
This command tests whether neutron-server can log into all of the NSX-mh controllers and the SQL server, and whether all UUID values are correct.
Edit the /etc/neutron/neutron.conf file and set this line:
core_plugin = plumgrid
Edit the [PLUMgridDirector] section in the /etc/neutron/plugins/plumgrid/plumgrid.ini file and specify the IP address, port, admin user name, and password of the PLUMgrid Director:
[PLUMgridDirector]
director_server = "PLUMgrid-director-ip-address"
director_server_port = "PLUMgrid-director-port"
username = "PLUMgrid-director-admin-username"
password = "PLUMgrid-director-admin-password"
For database configuration, see Install Networking Services in the Installation Tutorials and Guides.
Restart the neutron-server service to apply the settings:
# service neutron-server restart
Plug-ins typically have requirements for particular software that must be run on each node that handles data packets. This includes any node that runs nova-compute and nodes that run dedicated OpenStack Networking service agents such as neutron-dhcp-agent, neutron-l3-agent, neutron-metering-agent, or neutron-lbaasv2-agent.
A data-forwarding node typically has a network interface with an IP address on the management network and another interface on the data network.
This section shows you how to install and configure a subset of the available plug-ins, which might include the installation of switching software (for example, Open vSwitch) as well as agents used to communicate with the neutron-server process running elsewhere in the data center.
If you use the NSX plug-in, you must also install Open vSwitch on each data-forwarding node. However, you do not need to install an additional agent on each node.
It is critical that you run an Open vSwitch version that is compatible with the current version of the NSX Controller software. Do not use the Open vSwitch version that is installed by default on Ubuntu. Instead, use the Open vSwitch version that is provided on the VMware support portal for your NSX Controller version.
To set up each node for the NSX plug-in
Ensure that each data-forwarding node has an IP address on the management network, and an IP address on the data network that is used for tunneling data traffic. For full details on configuring your forwarding node, see the NSX Administration Guide.
Use the NSX Administrator Guide to add the node as a Hypervisor by using the NSX Manager GUI. Even if your forwarding node has no VMs and is only used for services agents like neutron-dhcp-agent or neutron-lbaas-agent, it should still be added to NSX as a Hypervisor.
After following the NSX Administrator Guide, use the page for this Hypervisor in the NSX Manager GUI to confirm that the node is properly connected to the NSX Controller Cluster and that the NSX Controller Cluster can see the br-int integration bridge.
The DHCP service agent is compatible with all existing plug-ins and is required for all deployments where VMs should automatically receive IP addresses through DHCP.
To install and configure the DHCP agent
You must configure the host running the neutron-dhcp-agent as a data forwarding node according to the requirements for your plug-in.
Install the DHCP agent:
# apt-get install neutron-dhcp-agent
Update any options in the /etc/neutron/dhcp_agent.ini file that depend on the plug-in in use. See the sub-sections.
If you reboot a node that runs the DHCP agent, you must run the neutron-ovs-cleanup command before the neutron-dhcp-agent service starts.

On Red Hat, SUSE, and Ubuntu based systems, the neutron-ovs-cleanup service runs the neutron-ovs-cleanup command automatically. However, on Debian-based systems, you must manually run this command or write your own system script that runs on boot before the neutron-dhcp-agent service starts.
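On Debian-based systems, one possible approach is a small systemd unit that runs the cleanup before the DHCP agent starts. This is only a sketch; the unit names and binary path may differ on your system:
# /etc/systemd/system/neutron-ovs-cleanup.service (illustrative)
[Unit]
Description=Run neutron-ovs-cleanup before the DHCP agent
Before=neutron-dhcp-agent.service

[Service]
Type=oneshot
ExecStart=/usr/bin/neutron-ovs-cleanup
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target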
The Networking DHCP agent can use the dnsmasq driver, which supports stateful and stateless DHCPv6 for subnets created with --ipv6_address_mode set to dhcpv6-stateful or dhcpv6-stateless.
For example:
$ openstack subnet create --ip-version 6 --ipv6-ra-mode dhcpv6-stateful \
  --ipv6-address-mode dhcpv6-stateful --network NETWORK --subnet-range \
  CIDR SUBNET_NAME
$ openstack subnet create --ip-version 6 --ipv6-ra-mode dhcpv6-stateless \
  --ipv6-address-mode dhcpv6-stateless --network NETWORK --subnet-range \
  CIDR SUBNET_NAME
If no dnsmasq process is running for the subnet's network, Networking launches a new one on the subnet's DHCP port in the qdhcp-XXX namespace. If a dnsmasq process is already running, it is restarted with a new configuration.

Networking updates the dnsmasq process and restarts it when the subnet is updated.
For the DHCP agent to operate in IPv6 mode, use dnsmasq v2.63 or later.
After a configured timeframe, networks uncouple from DHCP agents when the agents are no longer in use. You can configure the DHCP agent to automatically detach from a network when the agent is out of service or no longer needed.
This feature applies to all plug-ins that support DHCP scaling. For more information, see the DHCP agent configuration options listed in the OpenStack Configuration Reference.
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the OVS plug-in:
[DEFAULT]
enable_isolated_metadata = True
interface_driver = openvswitch
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the NSX plug-in:
[DEFAULT]
enable_metadata_network = True
enable_isolated_metadata = True
interface_driver = openvswitch
These DHCP agent options are required in the /etc/neutron/dhcp_agent.ini file for the Linux-bridge plug-in:
[DEFAULT]
enable_isolated_metadata = True
interface_driver = linuxbridge
The OpenStack Networking service has a widely used API extension to allow administrators and projects to create routers to interconnect L2 networks, and floating IPs to make ports on private networks publicly accessible.
Many plug-ins rely on the L3 service agent to implement the L3 functionality. However, the following plug-ins already have built-in L3 capabilities:
Big Switch/Floodlight plug-in, which supports both the open source Floodlight controller and the proprietary Big Switch controller.
Only the proprietary BigSwitch controller implements L3 functionality. When using Floodlight as your OpenFlow controller, L3 functionality is not available.
IBM SDN-VE plug-in
MidoNet plug-in
NSX plug-in
PLUMgrid plug-in
Do not configure or use neutron-l3-agent if you use one of these plug-ins.
To install the L3 agent for all other plug-ins
Install the neutron-l3-agent binary on the network node:
# apt-get install neutron-l3-agent
To uplink the node that runs neutron-l3-agent to the external network, create a bridge named br-ex and attach the NIC for the external network to this bridge.
For example, with Open vSwitch and NIC eth1 connected to the external network, run:
# ovs-vsctl add-br br-ex
# ovs-vsctl add-port br-ex eth1
When eth1 is added as a port on the br-ex bridge, external communication is interrupted. To avoid this, edit the /etc/network/interfaces file to contain the following information:
## External bridge
auto br-ex
iface br-ex inet static
address 192.27.117.101
netmask 255.255.240.0
gateway 192.27.127.254
dns-nameservers 8.8.8.8
## External network interface
auto eth1
iface eth1 inet manual
up ifconfig $IFACE 0.0.0.0 up
up ip link set $IFACE promisc on
down ip link set $IFACE promisc off
down ifconfig $IFACE down
The external bridge configuration address is the external IP address. This address and gateway should be configured in the /etc/network/interfaces file.
After editing the configuration, restart br-ex:
# ifdown br-ex && ifup br-ex
Do not manually configure an IP address on the NIC connected to the external network for the node running neutron-l3-agent. Rather, you must have a range of IP addresses from the external network that can be used by OpenStack Networking for routers that uplink to the external network. This range must be large enough to have an IP address for each router in the deployment, as well as each floating IP.
The neutron-l3-agent uses the Linux IP stack and iptables to perform L3 forwarding and NAT. In order to support multiple routers with potentially overlapping IP addresses, neutron-l3-agent defaults to using Linux network namespaces to provide isolated forwarding contexts. As a result, the IP addresses of routers are not visible simply by running the ip addr list or ifconfig command on the node. Similarly, you cannot directly ping fixed IPs.
To do either of these things, you must run the command within a particular network namespace for the router. The namespace has the name qrouter-ROUTER_UUID. These example commands run in the router namespace with UUID 47af3868-0fa8-4447-85f6-1304de32153b:
# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ip addr list
# ip netns exec qrouter-47af3868-0fa8-4447-85f6-1304de32153b ping FIXED_IP
If you reboot a node that runs the L3 agent, you must run the neutron-ovs-cleanup command before the neutron-l3-agent service starts.

On Red Hat, SUSE, and Ubuntu based systems, the neutron-ovs-cleanup service runs the neutron-ovs-cleanup command automatically. However, on Debian-based systems, you must manually run this command or write your own system script that runs on boot before the neutron-l3-agent service starts.
How routers are assigned to L3 agents

By default, a router is assigned to the L3 agent with the least number of routers (LeastRoutersScheduler). This can be changed by altering the router_scheduler_driver setting in the configuration file.
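For example, to switch to random assignment instead, you could set the following in neutron.conf; the class path shown is the upstream location of the scheduler and may vary by release:
[DEFAULT]
router_scheduler_driver = neutron.scheduler.l3_agent_scheduler.ChanceScheduler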
The Neutron Metering agent resides beside neutron-l3-agent.
To install the metering agent and configure the node
Install the agent by running:
# apt-get install neutron-metering-agent
If you use one of the following plug-ins, you need to configure the metering agent with these lines as well:
An OVS-based plug-in such as OVS, NSX, NEC, BigSwitch/Floodlight:
interface_driver = openvswitch
A plug-in that uses LinuxBridge:
interface_driver = linuxbridge
To use the reference implementation, you must set:
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
Set the service_plugins option in the /etc/neutron/neutron.conf file on the host that runs neutron-server:
service_plugins = metering
If this option is already defined, add metering to the list, using a comma as separator. For example:
service_plugins = router,metering
For the back end, use either Octavia or HAProxy. This example uses Octavia.
To configure LBaaS V2
Install Octavia using your distribution's package manager.
Edit the /etc/neutron/neutron_lbaas.conf file and change the service_provider parameter to enable Octavia:
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
Edit the /etc/neutron/neutron.conf file and add the service_plugins parameter to enable the load-balancing plug-in:
service_plugins = neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
If this option is already defined, add the load-balancing plug-in to the list using a comma as a separator. For example:
service_plugins = [already defined plugins],neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2
Create the required tables in the database:
# neutron-db-manage --subproject neutron-lbaas upgrade head
Restart the neutron-server service.
Enable load balancing in the Project section of the dashboard.
Horizon panels are enabled only for LBaaSV1. LBaaSV2 panels are still being developed.
By default, the enable_lb option is True in the local_settings.py file.
OPENSTACK_NEUTRON_NETWORK = {
'enable_lb': True,
...
}
Apply the settings by restarting the web server. You can now view the Load Balancer management options in the Project view in the dashboard.
Before you install the OpenStack Networking Hyper-V L2 agent on a Hyper-V compute node, ensure the compute node has been configured correctly using these instructions.
To install the OpenStack Networking Hyper-V agent and configure the node
Download the OpenStack Networking code from the repository:
> cd C:\OpenStack\
> git clone https://git.openstack.org/openstack/neutron
Install the OpenStack Networking Hyper-V Agent:
> cd C:\OpenStack\neutron\
> python setup.py install
Copy the policy.json file:
> xcopy C:\OpenStack\neutron\etc\policy.json C:\etc\
Create the C:\etc\neutron-hyperv-agent.conf file and add the proper configuration options and the Hyper-V related options. Here is a sample config file:
[DEFAULT]
control_exchange = neutron
policy_file = C:\etc\policy.json
rpc_backend = neutron.openstack.common.rpc.impl_kombu
rabbit_host = IP_ADDRESS
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = <password>
logdir = C:\OpenStack\Log
logfile = neutron-hyperv-agent.log
[AGENT]
polling_interval = 2
physical_network_vswitch_mappings = *:YOUR_BRIDGE_NAME
enable_metrics_collection = true
[SECURITYGROUP]
firewall_driver = hyperv.neutron.security_groups_driver.HyperVSecurityGroupsDriver
enable_security_group = true
Start the OpenStack Networking Hyper-V agent:
> C:\Python27\Scripts\neutron-hyperv-agent.exe --config-file C:\etc\neutron-hyperv-agent.conf
This table shows examples of Networking commands that enable you to complete basic operations on agents.
Operation |
Command |
---|---|
List all available agents. |
|
Show information of a given agent. |
|
Update the admin status and description for a specified agent. The
command can be used to enable and disable agents by using
|
|
Delete a given agent. Consider disabling the agent before deletion. |
|
Basic operations on Networking agents
See the OpenStack Command-Line Interface Reference for more information on Networking commands.
To configure the Identity service for use with Networking
Create the get_id() function

The get_id() function stores the ID of created objects, and removes the need to copy and paste object IDs in later steps:
Add the following function to your .bashrc file:
function get_id () {
echo `"$@" | awk '/ id / { print $4 }'`
}
Source the .bashrc file:
$ source .bashrc
Create the Networking service entry
Networking must be available in the Compute service catalog. Create the service:
$ NEUTRON_SERVICE_ID=$(get_id openstack service create network \
  --name neutron --description 'OpenStack Networking Service')
Create the Networking service endpoint entry
The way that you create a Networking endpoint entry depends on whether you are using the SQL or the template catalog driver:
If you are using the SQL driver, run the following command with the specified region ($REGION), IP address of the Networking server ($IP), and service ID ($NEUTRON_SERVICE_ID, obtained in the previous step).
$ openstack endpoint create $NEUTRON_SERVICE_ID --region $REGION \
  --publicurl 'http://$IP:9696/' --adminurl 'http://$IP:9696/' \
  --internalurl 'http://$IP:9696/'
For example:
$ openstack endpoint create $NEUTRON_SERVICE_ID --region myregion \
  --publicurl "http://10.211.55.17:9696/" \
  --adminurl "http://10.211.55.17:9696/" \
  --internalurl "http://10.211.55.17:9696/"
If you are using the template driver, specify the following parameters in your Compute catalog template file (default_catalog.templates), along with the region ($REGION) and IP address of the Networking server ($IP).
catalog.$REGION.network.publicURL = http://$IP:9696
catalog.$REGION.network.adminURL = http://$IP:9696
catalog.$REGION.network.internalURL = http://$IP:9696
catalog.$REGION.network.name = Network Service
For example:
catalog.$Region.network.publicURL = http://10.211.55.17:9696
catalog.$Region.network.adminURL = http://10.211.55.17:9696
catalog.$Region.network.internalURL = http://10.211.55.17:9696
catalog.$Region.network.name = Network Service
Create the Networking service user
You must provide admin user credentials that Compute and some internal Networking components can use to access the Networking API. Create a special service project and a neutron user within this project, and assign an admin role to this user.
Create the admin role:
$ ADMIN_ROLE=$(get_id openstack role create admin)
Create the neutron user:
$ NEUTRON_USER=$(get_id openstack user create neutron \
  --password "$NEUTRON_PASSWORD" --email demo@example.com \
  --project service)
Create the service project:
$ SERVICE_TENANT=$(get_id openstack project create service \
  --description "Services project" --domain default)
Establish the relationship among the project, user, and role:
$ openstack role add $ADMIN_ROLE --user $NEUTRON_USER \
  --project $SERVICE_TENANT
For information about how to create service entries and users, see the Newton Installation Tutorials and Guides for your distribution.
If you use Networking, do not run the Compute nova-network service (like you do in traditional Compute deployments). Instead, Compute delegates most network-related decisions to Networking.

Uninstall nova-network and reboot any physical nodes that have been running nova-network before using them to run Networking.

Inadvertently running the nova-network process while using Networking can cause problems, as can stale iptables rules pushed down by previously running nova-network.
Compute proxies project-facing API calls to manage security groups and floating IPs to Networking APIs. However, operator-facing tools such as nova-manage are not proxied and should not be used.
When you configure networking, you must use this guide. Do not rely on Compute networking documentation or past experience with Compute. If a nova command or configuration option related to networking is not mentioned in this guide, the command is probably not supported for use with Networking. In particular, you cannot use CLI tools like nova-manage and nova to manage networks or IP addressing, including both fixed and floating IPs, with Networking.
To ensure that Compute works properly with Networking (rather than the legacy nova-network mechanism), you must adjust settings in the nova.conf configuration file.

Each time you provision or de-provision a VM in Compute, nova-* services communicate with Networking using the standard API. For this to happen, you must configure the following items in the nova.conf file (used by each nova-compute and nova-api instance).
Attribute name |
Required |
---|---|
|
Modify from the default to |
|
Update to the host name/IP and port of the neutron-server instance for this deployment. |
|
Keep the default |
|
Update to the name of the service tenant created in the above section on Identity configuration. |
|
Update to the name of the user created in the above section on Identity configuration. |
|
Update to the password of the user created in the above section on Identity configuration. |
|
Update to the Identity server IP and port. This is the Identity (keystone) admin API server IP and port value, and not the Identity service API IP and port. |
The Networking service provides security group functionality using a mechanism that is more flexible and powerful than the security group capabilities built into Compute. Therefore, if you use Networking, you should always disable built-in security groups and proxy all security group calls to the Networking API. If you do not, security policies will conflict by being simultaneously applied by both services.
To proxy security groups to Networking, use the following configuration values in the nova.conf file:
nova.conf security group settings
Item |
Configuration |
---|---|
|
Update to |
The Compute service allows VMs to query metadata associated with a VM by making a web request to a special 169.254.169.254 address. Networking supports proxying those requests to nova-api, even when the requests are made from isolated networks, or from multiple networks that use overlapping IP addresses.
To enable proxying the requests, you must update the following fields in the [neutron] section in the nova.conf file.
nova.conf metadata settings
Item |
Configuration |
---|---|
|
Update to |
|
Update to a string "password" value.
You must also configure the same value in
the metadata agent configuration (metadata_agent.ini). The default value of an empty string in both files will allow metadata to function, but will not be secure if any non-trusted entities have access to the metadata APIs exposed by nova-api. |
As a precaution, even when using metadata_proxy_shared_secret, we recommend that you do not expose metadata using the same nova-api instances that are used for projects. Instead, you should run a dedicated set of nova-api instances for metadata that are available only on your management network. Whether a given nova-api instance exposes metadata APIs is determined by the value of enabled_apis in its nova.conf.
Example values for the above settings, assuming a cloud controller node running Compute and Networking with an IP address of 192.168.1.2:
[DEFAULT]
use_neutron = True
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[neutron]
url=http://192.168.1.2:9696
auth_strategy=keystone
admin_tenant_name=service
admin_username=neutron
admin_password=password
admin_auth_url=http://192.168.1.2:35357/v2.0
service_metadata_proxy=true
metadata_proxy_shared_secret=foo
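On the Networking side, the metadata agent must carry the matching secret. A minimal, illustrative /etc/neutron/metadata_agent.ini fragment for the same deployment (option names can vary slightly between releases):
[DEFAULT]
nova_metadata_ip = 192.168.1.2
metadata_proxy_shared_secret = foo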
This section describes advanced configuration options for various system components, for example, options whose defaults work but that you might want to customize. After installing from packages, $NEUTRON_CONF_DIR is /etc/neutron.
You can run an L3 metering agent that enables layer-3 traffic metering. In general, you should launch the metering agent on all nodes that run the L3 agent:
$ neutron-metering-agent --config-file NEUTRON_CONFIG_FILE \
  --config-file L3_METERING_CONFIG_FILE
You must configure a driver that matches the plug-in that runs on the service. The driver adds metering to the routing interface.
Option | Value
---|---
Open vSwitch: interface_driver ($NEUTRON_CONF_DIR/metering_agent.ini) | openvswitch
Linux Bridge: interface_driver ($NEUTRON_CONF_DIR/metering_agent.ini) | linuxbridge
You must configure any driver that implements the metering abstraction. Currently the only available implementation uses iptables for metering.
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver
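Putting the two settings together, a metering_agent.ini sketch for an Open vSwitch deployment might look like this:
[DEFAULT]
interface_driver = openvswitch
driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver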
To enable L3 metering, you must set the following option in the neutron.conf file on the host that runs neutron-server:
service_plugins = metering
This section is fully described in High availability for DHCP in the Networking Guide.
You can manage OpenStack Networking services by using the service command. For example:
# service neutron-server stop
# service neutron-server status
# service neutron-server start
# service neutron-server restart
Log files are in the /var/log/neutron directory.
Configuration files are in the /etc/neutron directory.
Administrators and projects can use OpenStack Networking to build rich network topologies. Administrators can create network connectivity on behalf of projects.
After installing and configuring Networking (neutron), projects and administrators can perform create-read-update-delete (CRUD) API networking operations. This is performed using the Networking API directly with either the neutron command-line interface (CLI) or the openstack CLI. The neutron CLI is a wrapper around the Networking API. Every Networking API call has a corresponding neutron command.

The openstack CLI is a common interface for all OpenStack projects; however, not every API operation has been implemented. For the list of available commands, see Command List.
The neutron CLI includes a number of options. For details, see Create and manage networks.

To learn about advanced capabilities available through the neutron command-line interface (CLI), read the networking section Create and manage networks in the OpenStack End User Guide.
This table shows example openstack commands that enable you to complete basic network operations:
Operation |
Command |
---|---|
Creates a network. |
|
Creates a subnet that is associated with net1. |
|
Lists ports for a specified project. |
|
Lists ports for a
specified project
and displays the |
|
Shows information for a specified port. |
|
Basic Networking operations
The device_owner field describes who owns the port. A port whose device_owner begins with:

network is created by Networking.

compute is created by Compute.
The administrator can run any openstack command on behalf of projects by specifying an Identity project in the command, as follows:
$ openstack network create --project PROJECT_ID NETWORK_NAME
For example:
$ openstack network create --project 5e4bbe24b67a4410bc4d9fae29ec394e net1
To view all project IDs in Identity, run the following command as an Identity service admin user:
$ openstack project list
This table shows example CLI commands that enable you to complete advanced network operations:
Operation |
Command |
---|---|
Creates a network that all projects can use. |
|
Creates a subnet with a specified gateway IP address. |
|
Creates a subnet that has no gateway IP address. |
|
Creates a subnet with DHCP disabled. |
|
Specifies a set of host routes |
|
Creates a subnet with a specified set of dns name servers. |
|
Displays all ports and IPs allocated on a network. |
|
Advanced Networking operations
This table shows example openstack commands that enable you to complete basic VM networking operations:
Action |
Command |
---|---|
Checks available networks. |
|
Boots a VM with a single NIC on a selected Networking network. |
|
Searches for ports with a
|
|
Searches for ports, but shows
only the |
|
Temporarily disables a port from sending traffic. |
|
Basic Compute and Networking operations
The device_id can also be a logical router ID.
When you boot a Compute VM, a port on the network that corresponds to the VM NIC is automatically created and associated with the default security group. You can configure security group rules to enable users to access the VM.
This table shows example openstack commands that enable you to complete advanced VM creation operations:
Operation |
Command |
---|---|
Boots a VM with multiple NICs. |
|
Boots a VM with a specific IP
address. Note that you cannot
use the |
|
Boots a VM that connects to all
networks that are accessible to the
project who submits the request
(without the |
|
Advanced VM creation operations
Cloud images that distribution vendors offer usually have only one active NIC configured. When you boot with multiple NICs, you must configure additional interfaces on the image or the NICs are not reachable.
The following Debian/Ubuntu-based example shows how to set up the interfaces within the instance in the /etc/network/interfaces file. You must apply this configuration to the image.
# The loopback network interface
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet dhcp
You must configure security group rules depending on the type of plug-in you are using. If you are using a plug-in that:
Implements Networking security groups, you can configure security group rules directly by using the openstack security group rule create command. This example enables ping and ssh access to your VMs.
$ openstack security group rule create --protocol icmp \
  --ingress SECURITY_GROUP
$ openstack security group rule create --protocol tcp --dst-port 22 \
  --ingress --description "Sample Security Group" SECURITY_GROUP
Does not implement Networking security groups, you can configure security group rules by using the openstack security group rule create or euca-authorize command. These openstack commands enable ping and ssh access to your VMs.
$ openstack security group rule create default --protocol icmp --dst-port -1:-1 --remote-ip 0.0.0.0/0
$ openstack security group rule create default --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
If your plug-in implements Networking security groups, you can also leverage Compute security groups by setting security_group_api = neutron in the nova.conf file. After you set this option, all Compute security group commands are proxied to Networking.
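In releases that still include this legacy option, it belongs in the [DEFAULT] section of nova.conf:
[DEFAULT]
security_group_api = neutron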
Several plug-ins implement API extensions that provide capabilities similar to what was available in nova-network. These plug-ins are likely to be of interest to the OpenStack community.
Networks can be categorized as either project networks or provider networks. Project networks are created by normal users and details about how they are physically realized are hidden from those users. Provider networks are created with administrative credentials, specifying the details of how the network is physically realized, usually to match some existing network in the data center.
Provider networks enable administrators to create networks that map directly to the physical networks in the data center. This is commonly used to give projects direct access to a public network that can be used to reach the Internet. It might also be used to integrate with VLANs in the network that already have a defined meaning (for example, enable a VM from the marketing department to be placed on the same VLAN as bare-metal marketing hosts in the same data center).
The provider extension allows administrators to explicitly manage the relationship between Networking virtual networks and underlying physical mechanisms such as VLANs and tunnels. When this extension is supported, Networking client users with administrative privileges see additional provider attributes on all virtual networks and are able to specify these attributes in order to create provider networks.
The provider extension is supported by the Open vSwitch and Linux Bridge plug-ins. Configuration of these plug-ins requires familiarity with this extension.
A number of terms are used in the provider extension and in the configuration of plug-ins supporting the provider extension:
Provider extension terminology

Term | Description
---|---
virtual network | A Networking L2 network (identified by a UUID and optional name) whose ports can be attached as vNICs to Compute instances and to various Networking agents. The Open vSwitch and Linux Bridge plug-ins each support several different mechanisms to realize virtual networks.
physical network | A network connecting virtualization hosts (such as compute nodes) with each other and with other network resources. Each physical network might support multiple virtual networks. The provider extension and the plug-in configurations identify physical networks using simple string names.
project network | A virtual network that a project or an administrator creates. The physical details of the network are not exposed to the project.
provider network | A virtual network administratively created to map to a specific network in the data center, typically to enable direct access to non-OpenStack resources on that network. Projects can be given access to provider networks.
VLAN network | A virtual network implemented as packets on a specific physical network containing IEEE 802.1Q headers with a specific VID field value. VLAN networks sharing the same physical network are isolated from each other at L2 and can even have overlapping IP address spaces. Each distinct physical network supporting VLAN networks is treated as a separate VLAN trunk, with a distinct space of VID values. Valid VID values are 1 through 4094.
flat network | A virtual network implemented as packets on a specific physical network containing no IEEE 802.1Q header. Each physical network can realize at most one flat network.
local network | A virtual network that allows communication within each host, but not across a network. Local networks are intended mainly for single-node test scenarios, but can have other uses.
GRE network | A virtual network implemented as network packets encapsulated using GRE. GRE networks are also referred to as tunnels. GRE tunnel packets are routed by the IP routing table for the host, so GRE networks are not associated by Networking with specific physical networks.
Virtual Extensible LAN (VXLAN) network | VXLAN is a proposed encapsulation protocol for running an overlay network on existing Layer 3 infrastructure. An overlay network is a virtual network that is built on top of existing network Layer 2 and Layer 3 technologies to support elastic compute architectures.
The ML2, Open vSwitch, and Linux Bridge plug-ins support VLAN networks, flat networks, and local networks. Only the ML2 and Open vSwitch plug-ins currently support GRE and VXLAN networks, provided that the required features exist in the host's Linux kernel, Open vSwitch, and iproute2 packages.
The provider extension extends the Networking network resource with these attributes:
Attribute name |
Type |
Default Value |
Description |
---|---|---|---|
provider: network_type |
String |
N/A |
The physical mechanism by which the virtual network is implemented.
Possible values are |
provider: physical_network |
String |
If a physical network named "default" has been configured and
if provider:network_type is |
The name of the physical network over which the virtual network
is implemented for flat and VLAN networks. Not applicable to the
|
provider:segmentation_id |
Integer |
N/A |
For VLAN networks, the VLAN VID on the physical network that
realizes the virtual network. Valid VLAN VIDs are 1 through 4094.
For GRE networks, the tunnel ID. Valid tunnel IDs are any 32 bit
unsigned integer. Not applicable to the |
To view or set provider extended attributes, a client must be authorized for the extension:provider_network:view and extension:provider_network:set actions in the Networking policy configuration. The default Networking configuration authorizes both actions for users with the admin role. An authorized client or an administrative user can view and set the provider extended attributes through Networking API calls. See Section 9.11, “Authentication and authorization” for details on policy configuration.
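For example, an administrator might create a shared VLAN provider network with the openstack CLI as follows; the physical network name and VLAN ID are illustrative and must match your plug-in configuration:
$ openstack network create provider-vlan --share \
  --provider-network-type vlan \
  --provider-physical-network physnet1 \
  --provider-segment 1010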
The Networking API provides abstract L2 network segments that are decoupled from the technology used to implement the L2 network. Networking includes an API extension that provides abstract L3 routers that API users can dynamically provision and configure. These Networking routers can connect multiple L2 Networking networks and can also provide a gateway that connects one or more private L2 networks to a shared external network. For example, a public network for access to the Internet. See the OpenStack Configuration Reference for details on common models of deploying Networking L3 routers.
The L3 router provides basic NAT capabilities on gateway ports that uplink the router to external networks. This router SNATs all traffic by default and supports floating IPs, which creates a static one-to-one mapping from a public IP on the external network to a private IP on one of the other subnets attached to the router. This allows a project to selectively expose VMs on private networks to other hosts on the external network (and often to all hosts on the Internet). You can allocate and map floating IPs from one port to another, as needed.
External networks are visible to all users. However, the default policy settings enable only administrative users to create, update, and delete external networks.
This table shows example neutron commands that enable you to complete basic L3 operations:
Operation |
Command |
---|---|
Creates external networks. |
$ openstack network create public --external $ openstack subnet create --network public --subnet-range 172.16.1.0/24 public-subnet |
Lists external networks. |
$ openstack network list --external |
Creates an internal-only router that connects to multiple L2 networks privately. |
$ openstack network create net1 $ openstack subnet create --network net1 --subnet-range 10.0.0.0/24 subnet1 $ openstack network create net2 $ openstack subnet create --network net2 --subnet-range 10.0.1.0/24 subnet2 $ openstack router create router1 $ openstack router add subnet router1 SUBNET1_UUID $ openstack router add subnet router1 SUBNET2_UUID An internal router port can have only one IPv4 subnet and multiple IPv6 subnets
that belong to the same network ID. When you call |
Connects a router to an external network, which enables that router to act as a NAT gateway for external connectivity. |
$ openstack router set router1 --external-gateway EXT_NET_ID The router obtains an interface with the gateway_ip address of the subnet and this interface is attached to a port on the L2 Networking network associated with the subnet. The router also gets a gateway interface to the specified external network. This provides SNAT connectivity to the external network as well as support for floating IPs allocated on that external networks. Commonly an external network maps to a network in the provider. |
Lists routers. |
$ openstack router list |
Shows information for a specified router. |
$ openstack router show ROUTER_ID |
Shows all internal interfaces for a router. |
$ openstack port list --router ROUTER_ID $ openstack port list --router ROUTER_NAME |
Identifies the PORT_ID that represents the VM NIC to which the floating IP should map. |
$ openstack port list -c ID -c "Fixed IP Addresses" --server INSTANCE_ID This port must be on a Networking subnet that is attached to a router uplinked to the external network used to create the floating IP. Conceptually, this is because the router must be able to perform the Destination NAT (DNAT) rewriting of packets from the floating IP address (chosen from a subnet on the external network) to the internal fixed IP (chosen from a private subnet that is behind the router). |
Creates a floating IP address and associates it with a port. |
$ openstack floating ip create EXT_NET_ID $ openstack floating ip add port FLOATING_IP_ID --port-id INTERNAL_VM_PORT_ID |
Creates a floating IP on a specific subnet in the external network. |
$ openstack floating ip create EXT_NET_ID --subnet SUBNET_ID If there are multiple subnets in the external network, you can choose a specific subnet based on quality and costs. |
Creates a floating IP address and associates it with a port, in a single step. |
$ openstack floating ip create --port INTERNAL_VM_PORT_ID EXT_NET_ID |
Lists floating IPs |
$ openstack floating ip list |
Finds floating IP for a specified VM port. |
$ openstack floating ip list --port INTERNAL_VM_PORT_ID |
Disassociates a floating IP address. |
$ openstack floating ip remove port FLOATING_IP_ID |
Deletes the floating IP address. |
$ openstack floating ip delete FLOATING_IP_ID |
Clears the gateway. |
$ openstack router unset --external-gateway router1 |
Removes the interfaces from the router. |
$ openstack router remove subnet router1 SUBNET_ID If this subnet ID is the last subnet on the port, this operation deletes the port itself. |
Deletes the router. |
$ openstack router delete router1 |
Security groups and security group rules allow administrators and projects to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a port. A security group is a container for security group rules.
When a port is created in Networking, it is associated with a security group. If a security group is not specified, the port is associated with a 'default' security group. By default, this group drops all ingress traffic and allows all egress. Rules can be added to this group in order to change the behavior.
To use the Compute security group APIs or use Compute to orchestrate the creation of ports for instances on specific security groups, you must complete additional configuration. You must configure the /etc/nova/nova.conf file and set the security_group_api=neutron option on every node that runs nova-compute and nova-api. After you make this change, restart nova-api and nova-compute to pick up this change. Then, you can use both the Compute and OpenStack Network security group APIs at the same time.
To use the Compute security group API with Networking, the Networking plug-in must implement the security group API. The following plug-ins currently implement this: ML2, Open vSwitch, Linux Bridge, NEC, and VMware NSX.
You must configure the correct firewall driver in the securitygroup section of the plug-in/agent configuration file. Some plug-ins and agents, such as Linux Bridge Agent and Open vSwitch Agent, use the no-operation driver as the default, which results in non-working security groups.
When using the security group API through Compute, security groups are applied to all ports on an instance. The reason for this is that Compute security group APIs are instance-based rather than port-based, as they are in Networking.
This table shows example neutron commands that enable you to complete basic security group operations:
Operation |
Command |
---|---|
Creates a security group for our web servers. |
$ openstack security group create webservers --description "security group for webservers" |
Lists security groups. |
$ openstack security group list |
Creates a security group rule to allow port 80 ingress. |
$ openstack security group rule create --ingress --protocol tcp --dst-port 80 SECURITY_GROUP_UUID |
Lists security group rules. |
$ openstack security group rule list |
Deletes a security group rule. |
$ openstack security group rule delete SECURITY_GROUP_RULE_UUID |
Deletes a security group. |
$ openstack security group delete SECURITY_GROUP_UUID |
Creates a port and associates two security groups. |
$ openstack port create port1 --security-group SECURITY_GROUP_ID1 --security-group SECURITY_GROUP_ID2 --network NETWORK_ID |
Removes security groups from a port. |
$ openstack port set --no-security-group PORT_ID |
The Load-Balancer-as-a-Service (LBaaS) API provisions and configures load balancers. The reference implementation is based on the HAProxy software load balancer.
This list shows example neutron commands that enable you to complete basic LBaaS operations:
Creates a load balancer pool by using a specific provider.
--provider
is an optional argument. If it is not used, the pool is
created with the default provider for the LBaaS service. You should configure
the default provider in the [service_providers]
section of the
neutron.conf
file (see the configuration sketch after this list). If no default provider
is specified for LBaaS, the --provider
parameter is required for pool creation.
$ neutron lb-pool-create --lb-method ROUND_ROBIN --name mypool \ --protocol HTTP --subnet-id SUBNET_UUID --provider PROVIDER_NAME
Associates two web servers with pool.
$ neutron lb-member-create --address WEBSERVER1_IP --protocol-port 80 mypool $ neutron lb-member-create --address WEBSERVER2_IP --protocol-port 80 mypool
Creates a health monitor that checks to make sure our instances are still running on the specified protocol-port.
$ neutron lb-healthmonitor-create --delay 3 --type HTTP --max-retries 3 \ --timeout 3
Associates a health monitor with pool.
$ neutron lb-healthmonitor-associate HEALTHMONITOR_UUID mypool
Creates a virtual IP (VIP) address that, when accessed through the load balancer, directs the requests to one of the pool members.
$ neutron lb-vip-create --name myvip --protocol-port 80 --protocol \ HTTP --subnet-id SUBNET_UUID mypool
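As referenced above, a minimal sketch of a default LBaaS provider entry in the [service_providers] section of neutron.conf, assuming the HAProxy reference driver (the exact driver path varies by release):
[service_providers]
service_provider = LOADBALANCER:Haproxy:neutron.services.loadbalancer.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default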
Each vendor can choose to implement additional API extensions to the core API. This section describes the extensions for each plug-in.
These sections explain NSX plug-in extensions.
The VMware NSX QoS extension rate-limits network ports to guarantee a
specific amount of bandwidth for each port. This extension, by default,
is only accessible by a project with an admin role but is configurable
through the policy.json
file. To use this extension, create a queue
and specify the min/max bandwidth rates (kbps) and optionally set the
QoS Marking and DSCP value (if your network fabric uses these values to
make forwarding decisions). Once created, you can associate a queue with
a network. Then, ports created on that network are automatically
associated with the queue that was associated with the network. Because
a single queue size for every port on a network might not be optimal, a
scaling factor from the nova flavor
rxtx_factor
is passed in from Compute when creating the port to scale
the queue.
Lastly, if you want to set a specific baseline QoS policy for the amount of bandwidth a single port can use (unless a queue is specified for the network a port is created on), you can create a default queue in Networking. Ports are then associated with a queue of that size times the rxtx scaling factor. Note that after a network or default queue is specified, queues are added to ports that are subsequently created but are not added to existing ports.
This table shows example neutron commands that enable you to complete basic queue operations:
Operation |
Command |
---|---|
Creates QoS queue (admin-only). |
$ neutron queue-create --min 10 --max 1000 myqueue |
Associates a queue with a network. |
$ neutron net-create network --queue_id QUEUE_ID |
Creates a default system queue. |
$ neutron queue-create --default True --min 10 --max 2000 default |
Lists QoS queues. |
$ neutron queue-list |
Deletes a QoS queue. |
$ neutron queue-delete QUEUE_ID_OR_NAME |
Provider networks can be implemented in different ways by the underlying NSX platform.
The FLAT and VLAN network types use bridged transport connectors.
These network types enable the attachment of a large number of ports. To
handle the increased scale, the NSX plug-in can back a single OpenStack
network with a chain of NSX logical switches. You can specify the
maximum number of ports on each logical switch in this chain with the
max_lp_per_bridged_ls
parameter, which has a default value of 5,000.
The recommended value for this parameter varies with the NSX version running in the back-end, as shown in the following table.
Recommended values for max_lp_per_bridged_ls
NSX version |
Recommended Value |
---|---|
2.x |
64 |
3.0.x |
5,000 |
3.1.x |
5,000 |
3.2.x |
10,000 |
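For instance, a minimal nsx.ini sketch that applies the NSX 3.2.x recommendation from the table above, assuming the option lives in the [nsx] section of the plug-in configuration:
[nsx]
max_lp_per_bridged_ls = 10000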
In addition to these network types, the NSX plug-in also supports a special l3_ext network type, which maps external networks to specific NSX gateway services as discussed in the next section.
NSX exposes its L3 capabilities through gateway services which are
usually configured out of band from OpenStack. To use NSX with L3
capabilities, first create an L3 gateway service in the NSX Manager.
Next, in /etc/neutron/plugins/vmware/nsx.ini
set
default_l3_gw_service_uuid
to this value. By default, routers are
mapped to this gateway service.
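A minimal sketch of that setting in /etc/neutron/plugins/vmware/nsx.ini, assuming it belongs in the [DEFAULT] section as in the sample configuration:
[DEFAULT]
default_l3_gw_service_uuid = L3_GATEWAY_SERVICE_UUID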
Create external network and map it to a specific NSX gateway service:
$ openstack network create public --external --provider-network-type l3_ext \ --provider-physical-network L3_GATEWAY_SERVICE_UUID
Terminate traffic on a specific VLAN from a NSX gateway service:
$ openstack network create public --external --provider-network-type l3_ext \ --provider-physical-network L3_GATEWAY_SERVICE_UUID --provider-segment VLAN_ID
Starting with the Havana release, the VMware NSX plug-in provides an asynchronous mechanism for retrieving the operational status for neutron resources from the NSX back-end; this applies to network, port, and router resources.
The back-end is polled periodically and the status for every resource is
retrieved; then the status in the Networking database is updated only
for the resources for which a status change occurred. As operational
status is now retrieved asynchronously, performance for GET
operations is consistently improved.
Data to retrieve from the back-end is divided into chunks in order to avoid expensive API requests; this is achieved by leveraging the paging capabilities of the NSX API. The minimum chunk size can be specified using a configuration option; the actual chunk size is then determined dynamically according to the total number of resources to retrieve, the interval between two synchronization task runs, and the minimum delay between two subsequent requests to the NSX back-end.
The operational status synchronization can be tuned or disabled using the configuration options reported in this table; it is however worth noting that the default values work fine in most cases.
Option name |
Group |
Default value |
Type and constraints |
Notes |
---|---|---|---|---|
state_sync_interval |
nsx_sync |
10 seconds |
Integer; no constraint. |
Interval in seconds between two runs of the synchronization task. If the synchronization task takes more than state_sync_interval seconds to execute, a new instance of the task is started as soon as the other is completed. Setting this option to 0 disables the synchronization task. |
max_random_sync_delay |
nsx_sync |
0 seconds |
Integer. Must not exceed min_sync_req_delay. |
When different from zero, a random delay between 0 and max_random_sync_delay is added before processing the next chunk. |
min_sync_req_delay |
nsx_sync |
1 second |
Integer. Must not exceed state_sync_interval. |
The value of this option can be tuned according to the observed load on the NSX controllers. Lower values will result in faster synchronization, but might increase the load on the controller cluster. |
min_chunk_size |
nsx_sync |
500 resources |
Integer; no constraint. |
Minimum number of resources to retrieve from the back-end for each
synchronization chunk. The expected number of synchronization chunks
is given by the ratio between state_sync_interval and min_sync_req_delay. |
always_read_status |
nsx_sync |
False |
Boolean; no constraint. |
When this option is enabled, the operational status is always
retrieved from the NSX back-end ad hoc for every GET request. In this case it is advisable to disable the synchronization task. |
When running multiple OpenStack Networking server instances, the status
synchronization task should not run on every node; doing so sends
unnecessary traffic to the NSX back-end and performs unnecessary DB
operations. Set the state_sync_interval
configuration option to a
non-zero value exclusively on a node designated for back-end status
synchronization.
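For example, a minimal sketch of this tuning, assuming the nsx_sync group shown in the table above:
[nsx_sync]
# On the node designated for back-end status synchronization:
state_sync_interval = 10
# On every other Networking server node, disable the task instead:
# state_sync_interval = 0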
The fields=status
parameter in Networking API requests always
triggers an explicit query to the NSX back end, even when you enable
asynchronous state synchronization. For example, GET
/v2.0/networks/NET_ID?fields=status&fields=name
.
This section explains the Big Switch neutron plug-in-specific extension.
Big Switch allows router rules to be added to each project router. These rules can be used to enforce routing policies such as denying traffic between subnets or traffic to external networks. By enforcing these at the router level, network segmentation policies can be enforced across many VMs that have differing security groups.
Each project router has a set of router rules associated with it. Each
router rule has the attributes in this table. Router rules and their
attributes can be set using the openstack router set
command,
through the horizon interface or the Networking API.
Attribute name |
Required |
Input type |
Description |
---|---|---|---|
source |
Yes |
A valid CIDR or one of the keywords 'any' or 'external' |
The network that a packet's source IP must match for the rule to be applied. |
destination |
Yes |
A valid CIDR or one of the keywords 'any' or 'external' |
The network that a packet's destination IP must match for the rule to be applied. |
action |
Yes |
'permit' or 'deny' |
Determines whether or not the matched packets will be allowed to cross the router. |
nexthop |
No |
A plus-separated (+) list of next-hop IP addresses. For example, 10.10.10.254+10.10.10.253. |
Overrides the default virtual router used to handle traffic for packets that match the rule. |
The order of router rules has no effect. Overlapping rules are evaluated using longest prefix matching on the source and destination fields. The source field is matched first so it always takes higher precedence over the destination field. In other words, longest prefix matching is used on the destination field only if there are multiple matching rules with the same source.
Router rules are configured with a router update operation in OpenStack Networking. The update overrides any previous rules so all rules must be provided at the same time.
Update a router with rules to permit traffic by default but block traffic from external networks to the 10.10.10.0/24 subnet:
$ neutron router-update ROUTER_UUID --router_rules type=dict list=true \ source=any,destination=any,action=permit \ source=external,destination=10.10.10.0/24,action=deny
Specify alternate next-hop addresses for a specific subnet:
$ neutron router-update ROUTER_UUID --router_rules type=dict list=true \ source=any,destination=any,action=permit \ source=10.10.10.0/24,destination=any,action=permit,nexthops=10.10.10.254+10.10.10.253
Block traffic between two subnets while allowing everything else:
$ neutron router-update ROUTER_UUID --router_rules type=dict list=true \ source=any,destination=any,action=permit \ source=10.10.10.0/24,destination=10.20.20.20/24,action=deny
The L3 metering API extension enables administrators to configure IP ranges and assign a specified label to them to be able to measure traffic that goes through a virtual router.
The L3 metering extension is decoupled from the technology that implements the measurement. Two abstractions have been added: One is the metering label that can contain metering rules. Because a metering label is associated with a project, all virtual routers in this project are associated with this label.
Only administrators can manage the L3 metering labels and rules.
This table shows example neutron
commands that enable you to
complete basic L3 metering operations:
Operation |
Command |
---|---|
Creates a metering label. |
$ openstack network meter label create LABEL1 \ --description "DESCRIPTION_LABEL1" |
Lists metering labels. |
$ openstack network meter label list |
Shows information for a specified label. |
$ openstack network meter label show LABEL_UUID $ openstack network meter label show LABEL1 |
Deletes a metering label. |
$ openstack network meter label delete LABEL_UUID $ openstack network meter label delete LABEL1 |
Creates a metering rule. |
$ openstack network meter label rule create LABEL_UUID \ --remote-ip-prefix CIDR \ --direction DIRECTION --exclude For example: $ openstack network meter label rule create label1 \ --remote-ip-prefix 10.0.0.0/24 --direction ingress $ openstack network meter label rule create label1 \ --remote-ip-prefix 20.0.0.0/24 --exclude |
Lists all metering label rules. |
$ openstack network meter label rule list |
Shows information for a specified label rule. |
$ openstack network meter label rule show RULE_UUID |
Deletes a metering label rule. |
$ openstack network meter label rule delete RULE_UUID |
Lists the value of created metering label rules. |
$ ceilometer sample-list -m bandwidth -q resource=LABEL_UUID |
Networking components use the Python logging module for logging. Logging
configuration can be provided in neutron.conf
or as command-line
options. Command-line options override the settings in neutron.conf
.
To configure logging for Networking components, use one of these methods:
Provide logging settings in a logging configuration file.
See Python logging how-to to learn more about logging.
Provide logging settings in neutron.conf
.
[DEFAULT]
# Default log level is WARNING
# Show debugging output in logs (sets DEBUG log level output)
# debug = False
# log_date_format = %Y-%m-%d %H:%M:%S
# use_syslog = False
# syslog_log_facility = LOG_USER
# if use_syslog is False, we can set log_file and log_dir.
# if use_syslog is False and we do not set log_file,
# the log will be printed to stdout.
# log_file =
# log_dir =
Notifications can be sent when Networking resources such as network, subnet and port are created, updated or deleted.
To support the DHCP agent, the rpc_notifier
driver must be set. To set up notifications,
edit the notification options in neutron.conf
:
# Driver or drivers to handle sending notifications. (multi
# valued)
# notification_driver=messagingv2
# AMQP topic used for OpenStack notifications. (list value)
# Deprecated group/name - [rpc_notifier2]/topics
notification_topics = notifications
These options configure the Networking server to send notifications
through logging and RPC. The logging options are described in the OpenStack
Configuration Reference. RPC notifications go to the notifications.info
queue bound to a topic exchange defined by control_exchange
in
neutron.conf
.
Notification System Options
A notification can be sent when a network, subnet, or port is created, updated or deleted. The notification system options are:
notification_driver
Defines the driver or drivers to handle the sending of a notification. The six available options are:
messaging
Send notifications using the 1.0 message format.
messagingv2
Send notifications using the 2.0 message format (with a message envelope).
routing
Configurable routing notifier (by priority or event_type).
log
Publish notifications using Python logging infrastructure.
test
Store notifications in memory for test verification.
noop
Disable sending notifications entirely.
default_notification_level
Is used to form topic names or to set a logging level.
default_publisher_id
Is a part of the notification payload.
notification_topics
AMQP topic used for OpenStack notifications. It can contain comma-separated
values. The actual topic names are suffixed with the notification priority;
for example, with the default settings, notifications are published to the
notifications.info topic.
control_exchange
This is an option defined in oslo.messaging. It is the default exchange
under which topics are scoped. May be overridden by an exchange name
specified in the transport_url
option. It is a string value.
Below is a sample neutron.conf
configuration file:
notification_driver = messagingv2
default_notification_level = INFO
host = myhost.com
default_publisher_id = $host
notification_topics = notifications
control_exchange = openstack
Networking uses the Identity service as the default authentication
service. When the Identity service is enabled, users who submit requests
to the Networking service must provide an authentication token in the
X-Auth-Token
request header. Users obtain this token by
authenticating with the Identity service endpoint. For more information
about authentication with the Identity service, see OpenStack Identity
service API v2.0
Reference.
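For instance, a minimal sketch of an authenticated Networking API request, assuming credentials are already loaded in the client environment and a Networking endpoint at http://controller:9696 (the host name is a placeholder):
$ TOKEN=$(openstack token issue -f value -c id)
$ curl -s -H "X-Auth-Token: $TOKEN" http://controller:9696/v2.0/networks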
When the Identity service is enabled, it is not mandatory to specify the
project ID for resources in create requests because the project ID is
derived from the authentication token.
The default authorization settings only allow administrative users to create resources on behalf of a different project. Networking uses information received from Identity to authorize user requests. Networking handles two kinds of authorization policies:
Operation-based policies specify access criteria for specific operations, possibly with fine-grained control over specific attributes.
Resource-based policies specify whether access to a specific resource is granted or not according to the permissions configured for the resource (currently available only for the network resource). The actual authorization policies enforced in Networking might vary from deployment to deployment.
The policy engine reads entries from the policy.json
file. The
actual location of this file might vary from distribution to
distribution. Entries can be updated while the system is running, and no
service restart is required. Every time the policy file is updated, the
policies are automatically reloaded. Currently the only way of updating
such policies is to edit the policy file. In this section, the terms
policy and rule refer to objects that are specified in the same way
in the policy file. There are no syntax differences between a rule and a
policy. A policy is something that is matched directly by the
Networking policy engine. A rule is an element in a policy, which is
evaluated. For instance in "create_subnet":
"rule:admin_or_network_owner"
, create_subnet is a
policy, and admin_or_network_owner is a rule.
Policies are triggered by the Networking policy engine whenever one of
them matches a Networking API operation or a specific attribute being
used in a given operation. For instance the create_subnet
policy is
triggered every time a POST /v2.0/subnets
request is sent to the
Networking server; on the other hand create_network:shared
is
triggered every time the shared attribute is explicitly specified (and
set to a value different from its default) in a POST /v2.0/networks
request. It is also worth mentioning that policies can also be related
to specific API extensions; for instance
extension:provider_network:set
is triggered if the attributes
defined by the Provider Network extensions are specified in an API
request.
An authorization policy can be composed of one or more rules. If more than one rule is specified, the policy evaluation succeeds if any of the rules evaluates successfully; if an API operation matches multiple policies, then all the policies must evaluate successfully. Also, authorization rules are recursive: once a rule is matched, it can be resolved into other rules until a terminal rule is reached.
The Networking policy engine currently defines the following kinds of terminal rules:
Role-based rules evaluate successfully if the user who submits
the request has the specified role. For instance "role:admin"
is
successful if the user who submits the request is an administrator.
Field-based rules evaluate successfully if a field of the
resource specified in the current request matches a specific value.
For instance "field:networks:shared=True"
is successful if the
shared
attribute of the network
resource is set to true.
Generic rules compare an attribute in the resource with an
attribute extracted from the user's security credentials and
evaluate successfully if the comparison is successful. For instance
"tenant_id:%(tenant_id)s"
is successful if the project identifier
in the resource is equal to the project identifier of the user
submitting the request.
This extract is from the default policy.json
file:
{
    "admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
    "admin_or_network_owner": [["role:admin"], ["tenant_id:%(network_tenant_id)s"]],
    "admin_only": [["role:admin"]],
    "regular_user": [],
    "shared": [["field:networks:shared=True"]],
    "default": [["rule:admin_or_owner"]],
    "create_subnet": [["rule:admin_or_network_owner"]],
    "get_subnet": [["rule:admin_or_owner"], ["rule:shared"]],
    "update_subnet": [["rule:admin_or_network_owner"]],
    "delete_subnet": [["rule:admin_or_network_owner"]],
    "create_network": [],
    "get_network": [["rule:admin_or_owner"], ["rule:shared"]],
    "create_network:shared": [["rule:admin_only"]],
    "update_network": [["rule:admin_or_owner"]],
    "delete_network": [["rule:admin_or_owner"]],
    "create_port": [],
    "create_port:mac_address": [["rule:admin_or_network_owner"]],
    "create_port:fixed_ips": [["rule:admin_or_network_owner"]],
    "get_port": [["rule:admin_or_owner"]],
    "update_port": [["rule:admin_or_owner"]],
    "delete_port": [["rule:admin_or_owner"]]
}
A few entries deserve particular attention:
The admin_or_owner rule evaluates successfully if the current user is an administrator or the owner of the resource specified in the request (that is, the project identifiers match).
The default policy is always evaluated if an API operation does not match any of the other policies in policy.json.
The get_network policy evaluates successfully if either admin_or_owner or shared evaluates successfully.
The create_network:shared policy restricts the ability to manipulate the shared attribute for a network to administrators only.
The create_port:mac_address policy restricts the ability to manipulate the mac_address attribute for a port to administrators and the owner of the network where the port is attached.
In some cases, some operations are restricted to administrators only. This example shows how to modify a policy file so that projects can define networks and see their resources, while only administrative users can perform all other operations:
{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],
"admin_only": [["role:admin"]], "regular_user": [],
"default": [["rule:admin_only"]],
"create_subnet": [["rule:admin_only"]],
"get_subnet": [["rule:admin_or_owner"]],
"update_subnet": [["rule:admin_only"]],
"delete_subnet": [["rule:admin_only"]],
"create_network": [],
"get_network": [["rule:admin_or_owner"]],
"create_network:shared": [["rule:admin_only"]],
"update_network": [["rule:admin_or_owner"]],
"delete_network": [["rule:admin_or_owner"]],
"create_port": [["rule:admin_only"]],
"get_port": [["rule:admin_or_owner"]],
"update_port": [["rule:admin_only"]],
"delete_port": [["rule:admin_only"]]
}