The Crowbar network barclamp provides two functions for the system. The
first is a common role to instantiate network interfaces on the
Crowbar managed systems. The second is address pool management.
While the addresses can be managed with the YaST Crowbar module,
complex network setups require manually editing the network barclamp
template file /etc/crowbar/network.json. This section
explains the file in detail. Settings in this file are applied to all
nodes in SUSE OpenStack Cloud.
After you have completed the SUSE OpenStack Cloud Crowbar installation, you can no longer change the network setup. Doing so would require completely setting up the Administration Server again.
The only exception to this rule is the interface map. This section can be changed at a later stage as well. See Section D.3, “Interface Map” for details.
network.json
The network.json file is located in /etc/crowbar/. To edit it, open it
in an editor of your choice. The template has the following general
structure:
{ "attributes" : { "network" : { "mode" : "VALUE", "start_up_delay" : VALUE, "teaming" : { "mode": VALUE },1 "enable_tx_offloading" : VALUE, "enable_rx_offloading" : VALUE, "interface_map"2 : [ ... ], "conduit_map"3 : [ ... ], "networks"4 : { ... }, } } }
General attributes. Refer to Section D.2, “Global Attributes” for details. | |
Interface map section. Defines the order in which the physical network interfaces are to be used. Refer to Section D.3, “Interface Map” for details. | |
Network conduit section defining the network modes and the network interface usage. Refer to Section D.4, “Network Conduits” for details. | |
Network definition section. Refer to Section D.5, “Network Definitions” for details. |
The order in which the entries in the network.json
file appear may differ from the one listed above. Use your editor's
search function to find certain entries.
The most important options to define in the global attributes section are the default values for the network and bonding modes. The following global attributes exist:
{ "attributes" : { "network" : { "mode" : "single",1 "start_up_delay" : 30,2 "teaming" : { "mode": 5 },3 "enable_tx_offloading" : true, 4 "enable_rx_offloading" : true, 4 "interface_map" : [ ... ], "conduit_map" : [ ... ], "networks" : { ... }, } } }
Network mode. Defines the configuration name (or name space) to be used from the conduit_map (see Section D.4, “Network Conduits”). This allows to define multiple configurations (single, dual, and team are preconfigured) and switch them by changing this parameter. | |
Time (in seconds) the Chef-client waits for the network interfaces to become online before running into a time-out. | |
Default bonding mode. For a list of available modes, see Section 7.3, “. ” | |
Turn on/off TX and RX checksum offloading. If set to
Checksum offloading is set to ![]() Important: Change of the Default Value
Starting with SUSE OpenStack Cloud the default value for TX and RX checksum
offloading changed from
To check, which defaults a network driver uses, run
Note that if the output shows a value marked as
|
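To see which offload settings a driver reports, and which of them it pins as unchangeable, the output of ethtool -k can be inspected. The following is a minimal sketch that parses a captured sample of such output (the sample text here is made up for illustration, not taken from a real NIC):

```shell
# Report checksum-offload settings the driver marks as [fixed]
# (these cannot be changed via enable_tx_offloading/enable_rx_offloading).
# Parsing a captured sample keeps the sketch runnable without root or NICs.
sample='rx-checksumming: on
tx-checksumming: on [fixed]
scatter-gather: on'

fixed=$(printf '%s\n' "$sample" | grep 'checksumming.*\[fixed\]')
printf '%s\n' "$fixed"
```

On a real node you would pipe `ethtool -k eth0` (or the device in question) into the same grep instead of the sample variable.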
By default, physical network interfaces are used in the order they appear
under /sys/class/net/. If you want to apply a
different order, you need to create an interface map where you can
specify a custom order of the bus IDs. Interface maps are created for
specific hardware configurations and are applied to all machines matching
this configuration.
{ "attributes" : { "network" : { "mode" : "single", "start_up_delay" : 30, "teaming" : { "mode": 5 }, "enable_tx_offloading" : true , "enable_rx_offloading" : true , "interface_map" : [ { "pattern" : "PowerEdge R610"1, "serial_number" : "0x02159F8E"2, "bus_order" : [3 "0000:00/0000:00:01", "0000:00/0000:00:03" ] } ... ], "conduit_map" : [ ... ], "networks" : { ... }, } } }
Hardware specific identifier. This identifier can be obtained by
running the command | |
Additional hardware specific identifier. This identifier can be used in
case two machines have the same value for | |
Bus IDs of the interfaces. The order in which they are listed here
defines the order in which Chef addresses the interfaces. The IDs
can be obtained by listing the contents of
|
The physical interface used to boot the node via PXE must always be listed first.
In contrast to all other sections in network.json, you
can change interface maps after having completed the SUSE OpenStack Cloud Crowbar installation. However,
nodes that are already deployed and affected by these changes need to be
deployed again. Therefore it is not recommended to make changes to the
interface map that affect active nodes.
If you change the interface mappings after having completed the SUSE OpenStack Cloud Crowbar installation,
you must not make your changes by editing network.json. Instead, use the
Crowbar Web interface and open › › › .
Activate your changes by clicking .
Get the machine identifier by running the following command on the machine to which the map should be applied:
~ # dmidecode -s system-product-name
AS 2003R
The resulting string needs to be entered on the pattern line of the map.
It is interpreted as a Ruby regular expression (see
http://www.ruby-doc.org/core-2.0/Regexp.html for
a reference). Unless the pattern starts with ^ and
ends with $, a substring match is performed against
the name returned by the above command.
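The difference between substring and anchored matching can be illustrated with grep -E (an approximation; network.json patterns are Ruby regular expressions, but the anchoring semantics shown here are the same). The product name used below is hypothetical:

```shell
# A pattern without ^...$ matches any substring of the product name;
# an anchored pattern must match the whole string.
name='PowerEdge R610 II'   # hypothetical dmidecode output
m1=$(printf '%s' "$name" | grep -qE 'PowerEdge R610' && echo yes || echo no)
m2=$(printf '%s' "$name" | grep -qE '^PowerEdge R610$' && echo yes || echo no)
printf 'substring=%s anchored=%s\n' "$m1" "$m2"
```

Here the unanchored pattern "PowerEdge R610" matches, while the anchored "^PowerEdge R610$" does not, because the full name carries an extra suffix.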
List the interface devices in /sys/class/net to
get the current order and the bus ID of each interface:
~ # ls -lgG /sys/class/net/ | grep eth
lrwxrwxrwx 1 0 Jun 19 08:43 eth0 -> ../../devices/pci0000:00/0000:00:1c.0/0000:09:00.0/net/eth0
lrwxrwxrwx 1 0 Jun 19 08:43 eth1 -> ../../devices/pci0000:00/0000:00:1c.0/0000:09:00.1/net/eth1
lrwxrwxrwx 1 0 Jun 19 08:43 eth2 -> ../../devices/pci0000:00/0000:00:1c.0/0000:09:00.2/net/eth2
lrwxrwxrwx 1 0 Jun 19 08:43 eth3 -> ../../devices/pci0000:00/0000:00:1c.0/0000:09:00.3/net/eth3
The bus ID is included in the path of the link target—it is the string
between ../../devices/pci and /net/ethX. In the example above, the bus
ID of eth0 is 0000:00/0000:00:1c.0/0000:09:00.0.
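The bus ID can be cut out of the link target with sed, for example. This is a sketch operating on one of the link targets shown above:

```shell
# Strip everything up to "devices/pci" and the trailing "/net/<if>" part
# to obtain the bus ID in the form used by the "bus_order" list.
link='../../devices/pci0000:00/0000:00:1c.0/0000:09:00.0/net/eth0'
bus_id=$(printf '%s\n' "$link" | sed -e 's|^.*devices/pci||' -e 's|/net/.*$||')
printf '%s\n' "$bus_id"
```

On a live system the same expression can be applied to the output of `ls -lgG /sys/class/net/` for each interface.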
Create an interface map with the bus IDs listed in the order in which
the interfaces should be used. Keep in mind that the interface from which
the node is booted using PXE must be listed first. In the following
example the default interface order has been changed to
eth0, eth2, eth1, and eth3.
{ "attributes" : { "network" : { "mode" : "single", "start_up_delay" : 30, "teaming" : { "mode": 5 }, "enable_tx_offloading" : true, "enable_rx_offloading" : true, "interface_map" : [ { "pattern" : "AS 2003R", "bus_order" : [ "0000:00/0000:00:1c.0/0000:09:00.0", "0000:00/0000:00:1c.0/0000:09:00.2", "0000:00/0000:00:1c.0/0000:09:00.1", "0000:00/0000:00:1c.0/0000:09:00.3" ] } ... ], "conduit_map" : [ ... ], "networks" : { ... }, } } }
Network conduits define mappings for logical interfaces—one or more physical interfaces bonded together. Each conduit is identified by a unique pattern, also called the “Network Mode” in this document. Several network modes are already pre-defined. The most important ones are:
single: Only use the first interface for all networks. VLANs will be added on top of this single interface.
dual: Use the first interface as the admin interface and the second one for all other networks. VLANs will be added on top of the second interface.
team: Bond the first two or more interfaces. VLANs will be added on top of the bond.
See Section 2.1.2, “Network Modes” for detailed
descriptions. Apart from these modes a fallback mode
".*/.*/.*"
is also pre-defined—it is applied
in case no other mode matches the one specified in the global attributes
section. These modes can be adjusted according to your needs. It is also
possible to define a custom mode.
The mode name that is specified with mode
in the
global attributes section is deployed on all nodes in SUSE OpenStack Cloud. It is
not possible to use a different mode for a certain node. However, you can
define “sub” modes with the same name that only match the
following machines:
Machines with a certain number of physical network interfaces.
Machines with certain roles (all Compute Nodes for example).
{ "attributes" : { "network" : { "mode" : "single", "start_up_delay" : 30, "teaming" : { "mode": 5 }, "enable_tx_offloading" : true, "enable_rx_offloading" : true, "interface_map" : [ ... ], "conduit_map" : [ { "pattern" : "team/.*/.*"1, "conduit_list" : { "intf2"2 : { "if_list" : ["1g1","1g2"]3, "team_mode" : 54 }, "intf1" : { "if_list" : ["1g1","1g2"], "team_mode" : 5 }, "intf0" : { "if_list" : ["1g1","1g2"], "team_mode" : 5 } } }, ... ], "networks" : { ... }, } } }
This line contains the pattern definition for a mode. The value for pattern must have the following form: MODE_NAME/NUMBER_OF_NICS/NODE_ROLE It is interpreted as a Ruby regular expression (see http://www.ruby-doc.org/core-2.0/Regexp.html for a reference).
| ||||||||||||||
The logical network interface definition. Each conduit list must
contain at least one such definition. This line defines the name of the
logical interface. This identifier must be unique and will also be
referenced in the network definition section. It is recommended to
stick with the pre-defined naming scheme: | ||||||||||||||
This line maps one or more physical interfaces to the logical interface. Each entry represents a physical interface. If more than one entry exists, the interfaces are bonded—either with the mode defined in the attribute of this conduit section. Or, if that is not present, by the globally defined attribute. The physical interfaces definition needs to fit the following pattern: [Quantifier][Speed][Order]
Valid examples are
| ||||||||||||||
The bonding mode to be used for this logical interface. Overwrites the default set in the global attributes section for this interface. See https://www.kernel.org/doc/Documentation/networking/bonding.txt for a list of available modes. Specifying this option is optional—if not specified here, the global setting applies. |
The following example defines a network mode named
my_mode
for nodes with 6, 3 and an arbitrary number
of network interfaces. Since the first mode that matches is applied, it
is important that the specific modes (for 6 and 3 NICs) are listed
before the general one:
{ "attributes" : { "network" : { "mode" : "single", "start_up_delay" : 30, "teaming" : { "mode": 5 }, "enable_tx_offloading" : true, "enable_rx_offloading" : true, "interface_map" : [ ... ], "conduit_map" : [ { "pattern" : "my_mode/6/.*", "conduit_list" : { ... } }, { "pattern" : "my_mode/3/.*", "conduit_list" : { ... } }, { "pattern" : "my_mode/.*/.*", "conduit_list" : { ... } }, ... ], "networks" : { ... }, } } }
The following example defines network modes for Compute Nodes with
four physical interfaces, the Administration Server (role
crowbar
), the Control Node, and a general mode
applying to all other nodes.
{ "attributes" : { "network" : { "mode" : "single", "start_up_delay" : 30, "teaming" : { "mode": 5 }, "enable_tx_offloading" : true, "enable_rx_offloading" : true, "interface_map" : [ ... ], "conduit_map" : [ { "pattern" : "my_mode/4/nova-compute", "conduit_list" : { ... } }, { "pattern" : "my_mode/.*/crowbar", "conduit_list" : { ... } }, { "pattern" : "my_mode/.*/nova-controller", "conduit_list" : { ... } }, { "pattern" : "my_mode/.*/.*", "conduit_list" : { ... } }, ... ], "networks" : { ... }, } } }
The following values for node_role can be used:

ceilometer-polling
ceilometer-server
ceph-calamari
ceph-mon
ceph-osd
ceph-radosgw
cinder-controller
cinder-volume
crowbar
database-server
glance-server
heat-server
horizon-server
keystone-server
manila-server
manila-share
neutron-network
neutron-server
nova-controller
nova-compute-*
rabbitmq-server
trove-server
swift-dispersion
swift-proxy
swift-ring-compute
swift-storage
The role crowbar refers to the Administration Server.
Apart from the roles listed under Example D.3, “Network Modes for Certain Roles”,
each node in SUSE OpenStack Cloud has a unique role, which lets you create modes
matching exactly one node. The role is named after the scheme
crowbar-dFULLY_QUALIFIED_HOSTNAME. The fully qualified hostname in turn
is composed of the following: the MAC address of the
network interface used to boot the node via PXE, and the domain name
configured on the Administration Server. Colons and periods are replaced with
underscores. An example role name would be:
crowbar-d1a-12-05-1e-35-49_my_cloud.
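The composition of such a role name can be sketched as follows. The MAC address and domain are hypothetical; note that, following the example role name above, the colons of the MAC address appear as dashes while the periods of the domain become underscores:

```shell
# Reconstruct the per-node role name from the PXE boot MAC address and
# the admin domain (both made up for illustration). The result matches
# the example role name quoted in the text.
mac='1a:12:05:1e:35:49'
domain='my.cloud'
role="crowbar-d$(printf '%s' "$mac" | tr ':' '-')_$(printf '%s' "$domain" | tr '.' '_')"
printf '%s\n' "$role"
```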
Network mode definitions for certain machines must be listed first in the conduit map. This prevents other, more general rules that would also match from being applied.
{ "attributes" : { "network" : { "mode" : "single", "start_up_delay" : 30, "teaming" : { "mode": 5 }, "enable_tx_offloading" : true, "enable_rx_offloading" : true, "interface_map" : [ ... ], "conduit_map" : [ { "pattern" : "my_mode/.*/crowbar-d1a-12-05-1e-35-49_my_cloud", "conduit_list" : { ... } }, ... ], "networks" : { ... }, } } }
The network definitions contain the IP address assignments, the bridge and VLAN setup, and settings for the router preference. Each network is also assigned to a logical interface defined in the network conduit section. In the following, the network definition is explained using the admin network definition as an example:
{ "attributes" : { "network" : { "mode" : "single", "start_up_delay" : 30, "teaming" : { "mode": 5 }, "enable_tx_offloading" : true, "enable_rx_offloading" : true, "interface_map" : [ ... ], "conduit_map" : [ ... ], "networks" : { "admin" : { "conduit" : "intf0"1, "add_bridge" : false2, "use_vlan" : false3, "vlan" : 1004, "router_pref" : 105, "subnet" : "192.168.124.0"6, "netmask" : "255.255.255.0", "router" : "192.168.124.1", "broadcast" : "192.168.124.255", "ranges" : { "admin" : { "start" : "192.168.124.10", "end" : "192.168.124.11" }, "switch" : { "start" : "192.168.124.241", "end" : "192.168.124.250" }, "dhcp" : { "start" : "192.168.124.21", "end" : "192.168.124.80" }, "host" : { "start" : "192.168.124.81", "end" : "192.168.124.160" } } }, "nova_floating": { "add_ovs_bridge": false7, "bridge_name": "br-public"8, .... } ... }, } } }
Logical interface assignment. The interface must be defined in the network conduit section and must be part of the active network mode. | |
Bridge setup. Do not touch. Should be | |
Create a VLAN for this network. Changing this setting is not recommended. | |
ID of the VLAN. Change this to the VLAN ID you intend to use for the
specific network if required. This setting can also be changed using
the YaST Crowbar interface. The VLAN ID for the
| |
Router preference, used to set the default route. On nodes hosting
multiple networks the router with the lowest
| |
Network address assignments. These values can also be changed by using the YaST Crowbar interface. | |
Openvswitch virtual switch setup. This attribute is maintained by Crowbar on a per-node level and should not be changed manually. | |
Name of the openvswitch virtual switch. This attribute is maintained by Crowbar on a per-node level and should not be changed manually. |
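When adjusting the address data, all ranges must stay inside the declared subnet. A sanity check for a single address can be sketched with shell arithmetic, using values from the admin network example above:

```shell
# Check that an address from the "host" range lies inside the subnet
# declared for the admin network (192.168.124.0 / 255.255.255.0).
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

subnet=$(ip_to_int 192.168.124.0)
mask=$(ip_to_int 255.255.255.0)
addr=$(ip_to_int 192.168.124.81)

if [ $(( addr & mask )) -eq "$subnet" ]; then
  result=in-subnet
else
  result=outside-subnet
fi
printf '%s\n' "$result"
```

The same check applied to each range start and end address quickly catches typos before the configuration is deployed to all nodes.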
As of SUSE OpenStack Cloud 7, using a VLAN for the admin network is only supported on a native/untagged VLAN. If you need VLAN support for the admin network, it must be handled at switch level.
When deploying Compute Nodes with Microsoft Hyper-V or Windows Server, you must not use openvswitch with gre. Instead, use openvswitch with VLAN (recommended) or linuxbridge as a plugin for Neutron.
When changing the network configuration with YaST or by editing
/etc/crowbar/network.json, you can define VLAN
settings for each network. For the networks nova-fixed
and nova-floating, however, special rules apply:
nova-fixed: The VLAN setting will be ignored. However, VLANs will automatically be used if deploying Neutron with VLAN support (using the plugins linuxbridge, openvswitch plus VLAN, or cisco plus VLAN). In this case, you need to specify a correct VLAN ID for this network.
nova-floating: When using a VLAN for
nova-floating (which is the default), the use_vlan and vlan settings for
nova-floating and public need to be
the same. When not using a VLAN for nova-floating, it
needs to use a different physical network interface than the
nova_fixed network.