Applies to SUSE OpenStack Cloud 7

10 Deploying the OpenStack Services

After the nodes are installed and configured, you can start deploying the OpenStack components to finalize the installation. The components need to be deployed in a given order, because they depend on one another. The Pacemaker component for an HA setup is the only exception to this rule—it can be set up at any time. However, when deploying SUSE OpenStack Cloud from scratch, it is recommended to deploy the Pacemaker proposal(s) first. Deployment for all components is done from the Crowbar Web interface through recipes, so-called barclamps.

The components controlling the cloud (including storage management and control components) need to be installed on the Control Node(s) (refer to Section 1.2, “The Control Node(s)” for more information). However, you must not use your Control Node(s) as a compute node or as a storage host for Swift or Ceph. The following components must not be installed on the Control Node(s): swift-storage, all Ceph components, and nova-compute-*. They need to be installed on dedicated nodes.

When deploying an HA setup, the controller nodes are replaced by one or more controller clusters consisting of at least two nodes (three are recommended). Setting up three separate clusters—for data, services, and networking—is recommended. See Section 2.6, “High Availability” for more information on requirements and recommendations for an HA setup.

The OpenStack components need to be deployed in the following order. For general instructions on how to edit and deploy barclamps, refer to Section 8.3, “Deploying Barclamp Proposals”. Deploying Pacemaker (only needed for an HA setup), Swift, and Ceph is optional; all other components must be deployed.

10.1 Deploying Pacemaker (Optional, HA Setup Only)

To make the SUSE OpenStack Cloud controller functions and the Compute Nodes highly available, set up one or more clusters by deploying Pacemaker (see Section 2.6, “High Availability” for details). Since it is possible (and recommended) to deploy more than one cluster, a separate proposal needs to be created for each cluster.

Deploying Pacemaker is optional. In case you do not want to deploy it, skip this section and start the node deployment by deploying the database as described in Section 10.2, “Deploying the Database”.

Note
Note: Number of Cluster Nodes

To set up a cluster, at least two nodes are required. If setting up a cluster for storage with replicated storage via DRBD (for example for a cluster for the database and RabbitMQ), exactly two nodes are required. For all other setups an odd number of nodes with a minimum of three nodes is strongly recommended. See Section 2.6.5, “Cluster Requirements and Recommendations” for more information.

To create a proposal, go to Barclamps › OpenStack and click Edit for the Pacemaker barclamp. A form opens where you can enter a name and a description for the proposal. Click Create to open the configuration screen for the proposal.

Create Pacemaker Proposal
Important
Important: Proposal Name

The name you enter for the proposal will be used to generate host names for the virtual IPs of HAProxy. By default, the names follow this scheme:

cluster-PROPOSAL_NAME.FQDN (for the internal name)
public.cluster-PROPOSAL_NAME.FQDN (for the public name)

For example, when PROPOSAL_NAME is set to data, this results in the following names:

cluster-data.example.com
public.cluster-data.example.com

For requirements regarding SSL encryption and certificates, see Section 2.3, “SSL Encryption”.

The following options are configurable in the Pacemaker configuration screen:

Transport for Communication

Choose the technology used for cluster communication: either Multicast (UDP) (sending a message to multiple destinations) or Unicast (UDPU) (sending a message to a single destination). By default, Unicast is used.

Policy when cluster does not have quorum

Whenever communication fails between one or more nodes and the rest of the cluster, a cluster partition occurs. The nodes of a cluster are split into partitions but are still active. They can only communicate with nodes in the same partition and are unaware of the separated nodes. The cluster partition that has the majority of nodes is defined to have quorum.

This configuration option defines what to do with the cluster partition(s) that do not have the quorum. See http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_basics_global.html, section Option no-quorum-policy for details.

The recommended setting is to choose Stop. However, Ignore is enforced for two-node clusters to ensure that the remaining node continues to operate normally in case the other node fails. For clusters using shared resources, Freeze may be chosen to ensure that these resources continue to be available.
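The barclamp writes the selected policy into the Pacemaker configuration. If you later need to inspect or change it manually, a minimal sketch using the crm shell on a cluster node could look like this (the policy value is an example):

crm configure show cib-bootstrap-options
crm configure property no-quorum-policy=stop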

STONITH: Configuration mode for STONITH

Misbehaving nodes in a cluster are shut down to prevent them from causing trouble. This mechanism is called STONITH (Shoot The Other Node In The Head). STONITH can be configured in a variety of ways; refer to http://www.suse.com/documentation/sle-ha-12/book_sleha/data/cha_ha_fencing.html for details. The following configuration options exist:

Configured manually

STONITH will not be configured when deploying the barclamp. It needs to be configured manually as described in http://www.suse.com/documentation/sle-ha-12/book_sleha/data/cha_ha_fencing.html. For experts only.

Configured with IPMI data from the IPMI barclamp

Using this option automatically sets up STONITH with data received from the IPMI barclamp. This requires IPMI to be configured for all cluster nodes, which should already be the case by default when deploying SUSE OpenStack Cloud. To check or change the IPMI deployment, go to Barclamps › Crowbar › IPMI › Edit. Also make sure the Enable BMC option is set to true in this barclamp.

Important
Important: STONITH Devices Must Support IPMI

To configure STONITH with the IPMI data, all STONITH devices must support IPMI. Problems with this setup may occur with IPMI implementations that are not strictly standards compliant. In this case it is recommended to set up STONITH with STONITH block devices (SBD).

Configured with STONITH Block Devices (SBD)

This option requires you to manually set up shared storage and a watchdog on the cluster nodes before applying the proposal. To do so, proceed as follows:

  1. Prepare the shared storage. The path to the shared storage device must be persistent and consistent across all nodes in the cluster. The SBD device must not use host-based RAID, cLVM2, nor reside on a DRBD* instance.

  2. Install the package sbd on all cluster nodes.

  3. Initialize the SBD device by running the following command. Make sure to replace /dev/SBD with the path to the shared storage device.

    sbd -d /dev/SBD create

    Refer to http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_storage_protect_fencing.html#pro_ha_storage_protect_sbd_create for details.
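    To verify the initialization, a quick check with the sbd command (replace /dev/SBD accordingly; this is a sketch, not part of the official procedure) might look like this:

    sbd -d /dev/SBD dump      # show the SBD header and timeout settings
    sbd -d /dev/SBD list      # list the node slots and any pending messages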

In Kernel module for watchdog, specify the respective kernel module to be used. Find the most commonly used watchdog drivers in the following table:

Hardware                              Driver
HP                                    hpwdt
Dell, Fujitsu, Lenovo (Intel TCO)     iTCO_wdt
VM on z/VM on IBM mainframe           vmwatchdog
Xen VM (DomU)                         xen_wdt
Generic                               softdog

If your hardware is not listed above, either ask your hardware vendor for the right name or check the following directory for a list of choices: /lib/modules/KERNEL_VERSION/kernel/drivers/watchdog.

Alternatively, list the drivers that have been installed with your kernel version:

root # rpm -ql kernel-VERSION | grep watchdog

If the nodes need different watchdog modules, leave the text box empty.
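To check which watchdog module, if any, is already loaded on a node, a quick check might look like this (shown as a sketch; softdog is the generic software watchdog):

root # lsmod | grep -E 'wdt|dog'
root # modprobe softdog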

After the shared storage has been set up, specify the path using the by-id notation (/dev/disk/by-id/DEVICE). It is possible to specify multiple paths as a comma-separated list.

Deploying the barclamp will automatically complete the SBD setup on the cluster nodes by starting the SBD daemon and configuring the fencing resource.

Configured with one shared resource for the whole cluster

All nodes will use the identical configuration. Specify the Fencing Agent to use and enter Parameters for the agent.

To get a list of STONITH devices which are supported by the High Availability Extension, run the following command on an already installed cluster node: stonith -L. The list of parameters depends on the respective agent. To view a list of parameters, use the following command:

stonith -t agent -n
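For example, to list the parameters of the external/ipmi agent (assuming this agent is installed on the node):

stonith -t external/ipmi -n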
Configured with one resource per node

All nodes in the cluster use the same Fencing Agent, but can be configured with different parameters. This setup is, for example, required when nodes are in different chassis and therefore need different ILO parameters.

To get a list of STONITH devices which are supported by the High Availability Extension, run the following command on an already installed cluster node: stonith -L. The list of parameters depends on the respective agent. To view a list of parameters, use the following command:

stonith -t agent -n
Configured for nodes running in libvirt

Use this setting for completely virtualized test installations. This option is not supported.

STONITH: Do not start corosync on boot after fencing

With STONITH, Pacemaker clusters with two nodes may sometimes hit an issue known as STONITH deathmatch, where each node kills the other one, resulting in both nodes rebooting all the time. A similar issue in Pacemaker clusters is the fencing loop, where a reboot caused by STONITH is not enough to fix a node, so it gets fenced again and again.

This setting can be used to limit these issues. When set to true, a node that has not been properly shut down or rebooted will not start the services for Pacemaker on boot. Instead, the node will wait for action from the SUSE OpenStack Cloud operator. When set to false, the services for Pacemaker will always be started on boot. The Automatic value is used to have the most appropriate value automatically picked: it will be true for two-node clusters (to avoid STONITH deathmatches), and false otherwise.

When a node boots but does not start corosync because of this setting, the node's status in the Node Dashboard is set to "Problem" (red dot). To make this node usable again, see Section J.2, “Re-adding the Node to the Cluster”.

Mail Notifications: Enable Mail Notifications

Get notified of cluster node failures via e-mail. If set to true, you need to specify which SMTP Server to use, a prefix for the mails' subject and sender and recipient addresses. Note that the SMTP server must be accessible by the cluster nodes.

DRBD: Prepare Cluster for DRBD

Set up DRBD for replicated storage on the cluster. This option requires a two-node cluster with a spare hard disk for each node. The disks should have a minimum size of 100 GB. Using DRBD is recommended for making the database and RabbitMQ highly available. For other clusters, set this option to False.

HAProxy: Public name for public virtual IP

The public name is the host name that will be used instead of the generated public name (see Important: Proposal Name) for the public virtual IP of HAProxy (for example, when registering public endpoints). Any name specified here needs to be resolvable by a name server placed outside of the SUSE OpenStack Cloud network.

The Pacemaker Barclamp
Figure 10.1: The Pacemaker Barclamp

The Pacemaker component consists of the following roles. Deploying the hawk-server role is optional:

pacemaker-cluster-member

Deploy this role on all nodes that should become members of the cluster.

hawk-server

Deploying this role is optional. If deployed, sets up the Hawk Web interface which lets you monitor the status of the cluster. The Web interface can be accessed via https://IP-ADDRESS:7630. Note that the GUI on SUSE OpenStack Cloud can only be used to monitor the cluster status and not to change its configuration.

hawk-server should be deployed on at least one cluster node; it is recommended to deploy it on all cluster nodes.

pacemaker-remote

Deploy this role on all nodes that should become members of the Compute Nodes cluster. They will run as Pacemaker remote nodes that are controlled by the cluster, but do not affect quorum. Instead of the complete cluster stack, only the pacemaker-remote component will be installed on these nodes.

The Pacemaker Barclamp: Node Deployment Example
Figure 10.2: The Pacemaker Barclamp: Node Deployment Example

After a cluster has been successfully deployed, it is listed under Available Clusters in the Deployment section and can be used for role deployment like a regular node.

Warning
Warning: Deploying Roles on Single Cluster Nodes

When using clusters, roles from other barclamps must never be deployed to single nodes that are already part of a cluster. The only exceptions to this rule are the following roles:

  • cinder-volume

  • swift-proxy + swift-dispersion

  • swift-ring-compute

  • swift-storage

Important
Important: Service Management on the Cluster

After a role has been deployed on a cluster, its services are managed by the HA software. You must never manually start or stop an HA-managed service (or configure it to start on boot). Services may only be started or stopped by using the cluster management tools Hawk or the crm shell. See http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_basics_resources.html for more information.

Note
Note: Testing the Cluster Setup

To check whether all cluster resources are running, either use the Hawk Web interface or run the command crm_mon -1r. If some resources are not running, clean up the respective resource with crm resource cleanup RESOURCE, so it gets respawned.

Also make sure that STONITH correctly works before continuing with the SUSE OpenStack Cloud setup. This is especially important when having chosen a STONITH configuration requiring manual setup. To test if STONITH works, log in to a node on the cluster and run the following command:

pkill -9 corosync

In case STONITH is correctly configured, the node will reboot.

Before testing on a production cluster, plan a maintenance window in case issues should arise.

10.2 Deploying the Database

The very first service that needs to be deployed is the Database. The Database component uses PostgreSQL and is used by all other components. It must be installed on a Control Node. The Database can be made highly available by deploying it on a cluster.

The only attribute you may change is the maximum number of database connections (Global Connection Limit). The default value should usually work—only change it for large deployments if the log files show database connection failures.

The Database Barclamp
Figure 10.3: The Database Barclamp

10.2.1 HA Setup for the Database

To make the database highly available, deploy it on a cluster instead of a single Control Node. This also requires shared storage for the cluster that hosts the database data. To achieve this, either set up a cluster with DRBD support (see Section 10.1, “Deploying Pacemaker (Optional, HA Setup Only)”) or use traditional shared storage like an NFS share. It is recommended to use a dedicated cluster to deploy the database together with RabbitMQ, since both components require shared storage.

Deploying the database on a cluster makes an additional High Availability section available in the Attributes section of the proposal. Configure the Storage Mode in this section. There are two options:

DRBD

This option requires a two-node cluster that has been set up with DRBD. Also specify the Size to Allocate for DRBD Device (in Gigabytes). The suggested value of 50 GB should be sufficient.

Shared Storage

Use a shared block device or an NFS mount for shared storage. As with the mount command, you need to specify three attributes: the Name of Block Device or NFS Mount Specification, the Filesystem Type, and the Mount Options. Refer to man 8 mount for details on file system types and mount options.

Important
Important: NFS Export Options for Shared Storage

To use an NFS share as shared storage for a cluster, export it on the NFS server with the following options:

rw,async,insecure,no_subtree_check,no_root_squash

In case mounting the NFS share on the cluster nodes fails, change the export options and re-apply the proposal. However, before doing so, you need to clean up the respective resources on the cluster nodes as described in http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_crm.html#sec_ha_manual_config_cleanup.
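For example, a matching entry in /etc/exports on the NFS server could look like the following (the export path and network address are placeholders; adjust them to your setup):

/exports/cloud/db   192.168.124.0/24(rw,async,insecure,no_subtree_check,no_root_squash)

After editing the file, re-export the file systems by running exportfs -r on the NFS server.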

Important
Important: Ownership of a Shared NFS Directory

The shared NFS directory that is used for the PostgreSQL database needs to be owned by the same user ID and group ID as the postgres user on the HA database cluster.

To get the IDs, log in to one of the HA database cluster machines and issue the following commands:

id -u postgres
getent group postgres | cut -d: -f3

The first command returns the numeric user ID, the second one the numeric group ID. Now log in to the NFS server and change the ownership of the shared NFS directory, for example:

chown UID.GID /exports/cloud/db

Replace UID and GID by the respective numeric values retrieved above.

Warning
Warning: Re-Deploying SUSE OpenStack Cloud with Shared Storage

When re-deploying SUSE OpenStack Cloud and reusing shared storage that hosts database files from a previous installation, the installation may fail because the old database is used. Always delete the old database from the shared storage that is to be reused before re-deploying SUSE OpenStack Cloud.

10.3 Deploying RabbitMQ

The RabbitMQ messaging system enables services to communicate with other nodes via the Advanced Message Queuing Protocol (AMQP). Deploying it is mandatory. RabbitMQ needs to be installed on a Control Node. RabbitMQ can be made highly available by deploying it on a cluster. It is recommended not to change the default values of the proposal's attributes.

Virtual Host

Name of the default virtual host to be created and used by the RabbitMQ server (default_vhost configuration option in rabbitmq.config).

Port

Port the RabbitMQ server listens on (tcp_listeners configuration option in rabbitmq.config).

User

RabbitMQ default user (default_user configuration option in rabbitmq.config).

The RabbitMQ Barclamp
Figure 10.4: The RabbitMQ Barclamp

10.3.1 HA Setup for RabbitMQ

To make RabbitMQ highly available, deploy it on a cluster instead of a single Control Node. This also requires shared storage for the cluster that hosts the RabbitMQ data. To achieve this, either set up a cluster with DRBD support (see Section 10.1, “Deploying Pacemaker (Optional, HA Setup Only)”) or use traditional shared storage like an NFS share. It is recommended to use a dedicated cluster to deploy RabbitMQ together with the database, since both components require shared storage.

Deploying RabbitMQ on a cluster makes an additional High Availability section available in the Attributes section of the proposal. Configure the Storage Mode in this section. There are two options:

DRBD

This option requires a two-node cluster that has been set up with DRBD. Also specify the Size to Allocate for DRBD Device (in Gigabytes). The suggested value of 50 GB should be sufficient.

Shared Storage

Use a shared block device or an NFS mount for shared storage. As with the mount command, you need to specify three attributes: the Name of Block Device or NFS Mount Specification, the Filesystem Type, and the Mount Options.

Important
Important: NFS Export Options for Shared Storage

An NFS share used as shared storage for a cluster needs to be exported on the NFS server with the following options:

rw,async,insecure,no_subtree_check,no_root_squash

In case mounting the NFS share on the cluster nodes fails, change the export options and re-apply the proposal. Before doing so, however, you need to clean up the respective resources on the cluster nodes as described in http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_crm.html#sec_ha_manual_config_cleanup.

10.4 Deploying Keystone

Keystone is another core component that is used by all other OpenStack components. It provides authentication and authorization services. Keystone needs to be installed on a Control Node. Keystone can be made highly available by deploying it on a cluster. You can configure the following parameters of this barclamp:

Algorithm for Token Generation

Set the algorithm used by Keystone to generate the tokens. You can choose between Fernet (the default) and UUID. Note that for performance and security reasons it is strongly recommended to use Fernet.

Region Name

Allows you to customize the region name that Crowbar is going to manage.

Default Credentials: Default Tenant

Tenant for the users. Do not change the default value of openstack.

Default Credentials: Administrator User Name/Password

User name and password for the administrator.

Default Credentials: Create Regular User

Specify whether a regular user should be created automatically. Not recommended in most scenarios, especially in an LDAP environment.

Default Credentials: Regular User Username/Password

User name and password for the regular user. Both the regular user and the administrator accounts can be used to log in to the SUSE OpenStack Cloud Dashboard. However, only the administrator can manage Keystone users and access.

The Keystone Barclamp
Figure 10.5: The Keystone Barclamp
SSL Support: Protocol

When sticking with the default value HTTP, public communication will not be encrypted. Choose HTTPS to use SSL for encryption. See Section 2.3, “SSL Encryption” for background information and Section 9.4.6, “Enabling SSL” for installation instructions. The following additional configuration options will become available when choosing HTTPS:

Generate (self-signed) certificates

When set to true, self-signed certificates are automatically generated and copied to the correct locations. This setting is for testing purposes only and should never be used in production environments!

SSL Certificate File / SSL (Private) Key File

Location of the certificate key pair files.

SSL Certificate is insecure

Set this option to true when using self-signed certificates to disable certificate checks. This setting is for testing purposes only and should never be used in production environments!

SSL CA Certificates File

Specify the absolute path to the CA certificate here. This option can only be changed if Require Client Certificate was set to true.

The SSL Dialog
Figure 10.6: The SSL Dialog

10.4.1 LDAP Authentication with Keystone

By default Keystone uses an SQL database back-end store for authentication. LDAP can be used in addition to the default or as an alternative. Using LDAP requires the Control Node on which Keystone is installed to be able to contact the LDAP server. See Appendix D, The Network Barclamp Template File for instructions on how to adjust the network setup.

10.4.1.1 Using LDAP for Authentication

To configure LDAP as an alternative to the SQL database back-end store, you need to open the Keystone barclamp Attribute configuration in Raw mode. Search for the ldap section.

The Keystone Barclamp: Raw Mode
Figure 10.7: The Keystone Barclamp: Raw Mode

Adjust the settings according to your LDAP setup. The default configuration does not include all attributes that can be set. A complete list of options is available in the file /opt/dell/chef/data_bags/crowbar/bc-template-keystone.schema on the Administration Server (search for ldap). There are three types of attribute values: strings (for example, the value for url: "ldap://localhost"), booleans (for example, the value for use_dumb_member: false), and integers (for example, the value for page_size: 0). Attribute names and string values always need to be quoted with double quotes; boolean and integer values must not be quoted.
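For example, a minimal excerpt of the ldap section that follows these quoting rules (the server URL is a placeholder) could look like this:

      "ldap": {
        "url": "ldap://ldap.example.com",
        "use_dumb_member": false,
        "page_size": 0
      }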

Important
Important: Using LDAP over SSL (ldaps) Is Recommended

In a production environment, it is recommended to use LDAP over SSL (ldaps), otherwise passwords will be transferred as plain text.

10.4.1.2 Using Hybrid Authentication

The hybrid LDAP back-end allows you to create a mixed LDAP/SQL setup. This is especially useful when an existing LDAP server should be used to authenticate cloud users. The system and service users (administrators and operators) needed to set up and manage SUSE OpenStack Cloud will be managed in the local SQL database. Assignments of users to projects and roles will also be stored in the local database.

In this scenario the LDAP server can be read-only for the SUSE OpenStack Cloud installation and no schema modifications are required. Therefore, managing LDAP users from within SUSE OpenStack Cloud is not possible and needs to be done using your established tools for LDAP user management. All users that are created with the Keystone command line client or the Horizon Web UI will be stored in the local SQL database.

To configure hybrid authentication, proceed as follows:

  1. Open the Keystone barclamp Attribute configuration in Raw mode (see Figure 10.7, “The Keystone Barclamp: Raw Mode”).

  2. Set the identity and assignment drivers to the hybrid back-end:

     "identity": {
        "driver": "hybrid"
      },
      "assignment": {
        "driver": "hybrid"
      }
  3. Adjust the settings according to your LDAP setup in the ldap section. Since the LDAP back-end is only used to acquire information on users (but not on projects and roles), only the user-related settings matter here. See the following example of settings that may need to be adjusted:

      "ldap": {
        "url": "ldap://localhost",
        "user": "",
        "password": "",
        "suffix": "cn=example,cn=com",
        "user_tree_dn": "cn=example,cn=com",
        "query_scope": "one",
        "user_id_attribute": "cn",
        "user_enabled_emulation_dn": "",
        "tls_req_cert": "demand",
        "user_attribute_ignore": "tenant_id,tenants",
        "user_objectclass": "inetOrgPerson",
        "user_mail_attribute": "mail",
        "user_filter": "",
        "use_tls": false,
        "user_allow_create": false,
        "user_pass_attribute": "userPassword",
        "user_enabled_attribute": "enabled",
        "user_enabled_default": "True",
        "page_size": 0,
        "tls_cacertdir": "",
        "tls_cacertfile": "",
        "user_enabled_mask": 0,
        "user_allow_update": true,
        "group_allow_update": true,
        "user_enabled_emulation": false,
        "user_name_attribute": "cn"
        "group_ad_nesting": false,
        "use_pool": true,
        "pool_size": 10,
        "pool_retry_max": 3
      }

    To access the LDAP server anonymously, leave the values for user and password empty.

10.4.2 HA Setup for Keystone

Making Keystone highly available requires no special configuration—it is sufficient to deploy it on a cluster.

10.5 Deploying Ceph (optional)

Ceph adds a redundant block storage service to SUSE OpenStack Cloud. It lets you provide persistent storage devices that can be mounted from instances. It offers high data security by storing the data redundantly on a pool of Storage Nodes. Therefore Ceph needs to be installed on at least three dedicated nodes. All Ceph nodes need to run SLES 12. For detailed information on how to provide the required repositories, refer to Section 5.2, “Update and Pool Repositories”. If deploying the optional Calamari server for Ceph management and monitoring, an additional node is required.

For more information on the Ceph project, visit http://ceph.com/.

Tip
Tip: SUSE Enterprise Storage

SUSE Enterprise Storage is a robust cluster solution based on Ceph. Refer to https://www.suse.com/documentation/ses-4/ for more information.

The Ceph barclamp has the following configuration options:

Disk Selection Method

Choose whether to only use the first available disk or all available disks. Available disks are all disks currently not used by the system. Note that one disk (usually /dev/sda) of every block storage node is already used for the operating system and is not available for Ceph.

Number of Replicas of an Object

For data security, stored objects are not only stored once, but redundantly. Specify the number of copies that should be stored for each object with this setting. The number includes the object itself. If, for example, you want the object plus two copies, specify 3.

SSL Support for RadosGW

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, you need to specify the locations for the certificate key pair files. Note that both trusted and self-signed certificates are accepted.

Calamari Credentials

Calamari is a Web front-end for managing and analyzing the Ceph cluster. Provide administrator credentials (user name, password, e-mail address) in this section. When Ceph has been deployed, you can log in to Calamari with these credentials. Deploying Calamari is optional—leave these text boxes empty when not deploying Calamari.

The Ceph Barclamp
Figure 10.8: The Ceph Barclamp

The Ceph component consists of the following different roles:

Important
Important: Dedicated Nodes

We do not recommend sharing one node among several Ceph components at the same time. For example, running a ceph-mon service on the same node as ceph-osd degrades the performance of all services hosted on the shared node. This also applies to other services, such as Calamari or the RADOS Gateway.

ceph-osd

The virtual block storage service. Install this role on all dedicated Ceph Storage Nodes (at least three).

ceph-mon

Cluster monitor daemon for managing the storage map of the Ceph cluster. ceph-mon needs to be installed on at least three nodes.

ceph-calamari

Sets up the Calamari Web interface which lets you manage the Ceph cluster. Deploying it is optional. The Web interface can be accessed via http://IP-ADDRESS/ (where IP-ADDRESS is the address of the machine where ceph-calamari is deployed on).

ceph-radosgw

The HTTP REST gateway for Ceph. Visit https://www.suse.com/documentation/ses-4/book_storage_admin/data/cha_ceph_gw.html for more detailed information.

Tip
Tip: RADOS Gateway HA Setup

If you need to set up multiple RADOS Gateways (and thus create a backup instance in case one RADOS Gateway node fails), set up RADOS Gateway on multiple nodes and put an HTTP load balancer in front of them. You can choose your preferred balancing solution, or use the SUSE Linux Enterprise High Availability Extension (refer to https://www.suse.com/documentation/sle-ha-12/).

ceph-mds

The metadata server for the CephFS distributed file system. Install this role on one to three nodes to enable CephFS. A file system named cephfs will automatically be created, along with cephfs_metadata and cephfs_data pools. Refer to https://www.suse.com/documentation/ses-3/book_storage_admin/data/cha_ceph_cephfs.html for more details.

Important
Important: Use Dedicated Nodes

Never deploy Ceph roles on a node that runs non-Ceph OpenStack components. The only services that may be deployed together on a Ceph node are ceph-osd, ceph-mon and ceph-radosgw. However, we recommend running each Ceph service on a dedicated host for performance reasons. All Ceph nodes need to run SLES 12.

The Ceph Barclamp: Node Deployment Example
Figure 10.9: The Ceph Barclamp: Node Deployment Example

10.5.1 HA Setup for Ceph

Ceph is HA-enabled by design, so there is no need for a special HA setup.

10.6 Deploying Swift (optional)

Swift adds an object storage service to SUSE OpenStack Cloud that lets you store single files such as images or snapshots. It offers high data security by storing the data redundantly on a pool of Storage Nodes—therefore Swift needs to be installed on at least two dedicated nodes.

To be able to properly configure Swift, it is important to understand how it places the data. Data is always stored redundantly within the hierarchy. The Swift hierarchy in SUSE OpenStack Cloud is formed out of zones, nodes, hard disks, and logical partitions. Zones are physically separated clusters, for example different server rooms, each with its own power supply and network segment. A failure of one zone must not affect another zone. The next level in the hierarchy is the individual Swift storage nodes (on which swift-storage has been deployed), followed by the hard disks. Logical partitions come last.

Swift automatically places three copies of each object on the highest hierarchy level possible. If three zones are available, each copy of the object will be placed in a different zone. In a one-zone setup with more than two nodes, the object copies will each be stored on a different node. In a one-zone setup with two nodes, the copies will be distributed on different hard disks. If no other hierarchy element fits, logical partitions are used.

The following attributes can be set to configure Swift:

Allow Public Containers

Enables public access to containers if set to true.

Enable Object Versioning

If set to true, a copy of the current version is archived each time an object is updated.

Zones

Number of zones (see above). If you do not have different independent installations of storage nodes, set the number of zones to 1.

Create 2^X Logical Partitions

Partition power. The number entered here is used to compute the number of logical partitions to be created in the cluster. The number you enter is used as a power of 2 (2^X).

It is recommended to use a minimum of 100 partitions per disk. To determine the partition power for your setup, do the following: multiply the number of disks from all Swift nodes by 100, then round up to the nearest power of two. Keep in mind that the first disk of each node is not used by Swift, but rather for the operating system.

Example: 10 Swift nodes with 5 hard disks each. Four hard disks on each node are used for Swift, so there is a total of 40 disks. Multiplied by 100, this gives 4000. The nearest power of two, 4096, equals 2^12. So the partition power that needs to be entered is 12.
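The same calculation can be scripted. The following is a minimal sketch using Bash arithmetic (the disk count is an example value):

DISKS=40                        # disks used by Swift across all nodes
PARTS=$((DISKS * 100))          # recommended minimum number of partitions
POWER=1
while [ $((2 ** POWER)) -lt $PARTS ]; do POWER=$((POWER + 1)); done
echo "Partition power: $POWER"  # prints 12 (2^12 = 4096)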

Important
Important: Value Cannot be Changed After the Proposal Has Been Deployed

Changing the number of logical partitions after Swift has been deployed is not supported. Therefore the value for the partition power should be calculated from the maximum number of partitions this cloud installation is likely to need at any point in time.

Minimum Hours before Partition is reassigned

This option sets the number of hours before a logical partition is considered for relocation. 24 is the recommended value.

Replicas

The number of copies generated for each object. Set this value to 3, the tested and recommended value.

Replication interval (in seconds)

Time (in seconds) after which to start a new replication process.

Debug

Shows debugging output in the log files when set to true.

SSL Support: Protocol

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, you have two choices. You can either Generate (self-signed) certificates or provide the locations for the certificate key pair files. Using self-signed certificates is for testing purposes only and should never be used in production environments!

The Swift Barclamp
Figure 10.10: The Swift Barclamp

Apart from the general configuration described above, the Swift barclamp also lets you activate and configure Additional Middlewares. The features these middlewares provide can be used via the Swift command line client only. The Ratelimit and S3 middleware provide the most generally useful features; it is recommended to enable the other middleware only for specific use cases.

S3 Middleware

Provides an S3 compatible API on top of Swift.

StaticWeb

Enables serving container data as a static Web site with an index file and optional file listings. See http://docs.openstack.org/developer/swift/middleware.html#staticweb for details.

This middleware requires Allow Public Containers to be set to true.

TempURL

Enables the creation of URLs that provide time-limited access to objects. See http://docs.openstack.org/developer/swift/middleware.html#tempurl for details.

FormPOST

Enables uploading files to a container via a Web form. See http://docs.openstack.org/developer/swift/middleware.html#formpost for details.

Bulk

Enables extracting tar files into a Swift account and deleting multiple objects or containers with a single request. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.bulk for details.

Cross-domain

Allows interaction with the Swift API via Flash, Java and Silverlight from an external network. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.crossdomain for details.

Domain Remap

Translates container and account parts of a domain to path parameters that the Swift proxy server understands. Can be used to create short URLs that are easy to remember, for example by rewriting home.tux.example.com/$ROOT/tux/home/myfile to home.tux.example.com/myfile. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.domain_remap for details.

Ratelimit

Ratelimit enables you to throttle resources such as requests per minute to provide denial of service protection. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.ratelimit for details.

The Swift component consists of four different roles. Deploying swift-dispersion is optional:

swift-storage

The virtual object storage service. Install this role on all dedicated Swift Storage Nodes (at least two), but not on any other node.

Warning
Warning: swift-storage Needs Dedicated Machines

Never install the swift-storage service on a node that runs other OpenStack components.

swift-ring-compute

The ring maintains the information about the location of objects, replicas, and devices. It can be compared to an index that is used by various OpenStack components to look up the physical location of objects. swift-ring-compute must only be installed on a single node; it is recommended to use a Control Node.

swift-proxy

The Swift proxy server takes care of routing requests to Swift. Installing a single instance of swift-proxy on a Control Node is recommended. The swift-proxy role can be made highly available by deploying it on a cluster.

swift-dispersion

Deploying swift-dispersion is optional. The Swift dispersion tools can be used to test the health of the cluster. They create a set of dummy objects (using 1% of the total space available). The state of these objects can be queried using the swift-dispersion-report tool. swift-dispersion needs to be installed on a Control Node.
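A minimal sketch of using the dispersion tools on the node where swift-dispersion is deployed (the tools read /etc/swift/dispersion.conf by default):

swift-dispersion-populate     # create the dummy objects used for health checks
swift-dispersion-report       # report how many of the expected copies are reachable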

The Swift Barclamp: Node Deployment Example
Figure 10.11: The Swift Barclamp: Node Deployment Example

10.6.1 HA Setup for Swift

Swift replicates by design, so there is no need for a special HA setup. Make sure to fulfill the requirements listed in Section 2.6.4.1, “Swift—Avoiding Points of Failure”.

10.7 Deploying Glance

Glance provides discovery, registration, and delivery services for virtual disk images. An image is needed to start an instance—it is its pre-installed root partition. All images you want to use in your cloud to boot instances from are provided by Glance. Glance must be deployed on a Control Node. Glance can be made highly available by deploying it on a cluster.

There are a lot of options to configure Glance. The most important ones are explained below—for a complete reference refer to http://github.com/crowbar/crowbar/wiki/Glance--barclamp.

Important
Important: Glance API Versions

As of SUSE OpenStack Cloud 7, the Glance API v1 is no longer enabled by default. Instead, Glance API v2 is used by default.

If you need to re-enable API v1 for compatibility reasons:

  1. Switch to the Raw view of the Glance barclamp.

  2. Search for the enable_v1 entry and set it to true:

    "enable_v1": true

    In new installations, this entry is set to false by default. When upgrading from an older version of SUSE OpenStack Cloud, it is set to true by default.

  3. Apply your changes.

Image Storage: Default Storage Store

Choose whether to use Swift or Ceph (Rados) to store the images. If you have deployed neither of these services, the images can alternatively be stored in an image file on the Control Node (File). If you have deployed Swift or Ceph, it is recommended to use it for Glance as well.

If using VMware as a hypervisor, it is recommended to use it for storing images, too (VMWare). This will make starting VMware instances much faster.

Depending on the storage back-end, there are additional configuration options available:

File Store Parameters

Image Store Directory

Specify the directory to host the image file. The directory specified here can also be an NFS share. See Section 9.4.3, “Mounting NFS Shares on a Node” for more information.

Swift Store Parameters

Swift Container

Set the name of the container to use for the images in Swift.

RADOS Store Parameters

RADOS User for CephX Authentication

If using a SUSE OpenStack Cloud internal Ceph setup, the user you specify here is created in case it does not exist. If using an external Ceph cluster, specify the user you have set up for Glance (see Section 9.4.4, “Using an Externally Managed Ceph Cluster” for more information).

RADOS Pool for Glance images

If using a SUSE OpenStack Cloud internal Ceph setup, the pool you specify here is created in case it does not exist. If using an external Ceph cluster, specify the pool you have set up for Glance (see Section 9.4.4, “Using an Externally Managed Ceph Cluster” for more information).

VMWare Store Parameters

vCenter Host/IP Address

Name or IP address of the vCenter server.

vCenter Username / vCenter Password

vCenter login credentials.

Datastores for Storing Images

A comma-separated list of datastores specified in the format: DATACENTER_NAME:DATASTORE_NAME

Path on the datastore, where the glance images will be stored

Specify an absolute path here.

SSL Support: Protocol

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, refer to SSL Support: Protocol for configuration details.

Caching

Enable and configure image caching in this section. By default, image caching is disabled. Learn more about Glance's caching feature at http://docs.openstack.org/developer/glance/cache.html.

Logging: Verbose Logging

Shows debugging output in the log files when set to true.

The Glance Barclamp
Figure 10.12: The Glance Barclamp

10.7.1 HA Setup for Glance

Glance can be made highly available by deploying it on a cluster. It is strongly recommended to make the image data highly available, too. The recommended way to achieve this is to use Swift or an external Ceph cluster for the image repository. If using a directory on the node instead (file storage back-end), set up shared storage on the cluster for it.

10.8 Deploying Cinder

Cinder, the successor of Nova Volume, provides volume block storage. It adds persistent storage to an instance that will persist until deleted (unlike ephemeral volumes, which only persist while the instance is running).

Cinder can provide volume storage by using different back-ends such as local file, one or more local disks, Ceph (RADOS), VMware or network storage solutions from EMC, EqualLogic, Fujitsu or NetApp. Since SUSE OpenStack Cloud 5, Cinder supports using several back-ends simultaneously. It is also possible to deploy the same network storage back-end multiple times and therefore use different installations at the same time.

The attributes that can be set to configure Cinder depend on the back-end. The only general option is SSL Support: Protocol (see SSL Support: Protocol for configuration details).

Tip
Tip: Adding or Changing a Back-End

When first opening the Cinder barclamp, the default proposal—Raw Devices—is already available for configuration. To optionally add another back-end, go to the section Add New Cinder Back-End and choose a Type Of Volume from the drop-down box. Optionally, specify the Name for the Backend. This is recommended when deploying the same volume type more than once. Existing back-end configurations (including the default one) can be deleted by clicking the trashcan icon if no longer needed. Note that at least one back-end must be configured.

Raw devices (local disks)

Disk Selection Method

Choose whether to only use the First Available disk or All Available disks. Available disks are all disks currently not used by the system. Note that one disk (usually /dev/sda) of every block storage node is already used for the operating system and is not available for Cinder.

Name of Volume

Specify a name for the Cinder volume.

EMC (EMC² Storage)

IP address of the ECOM server / Port of the ECOM server

IP address and Port of the ECOM server.

User Name / Password for accessing the ECOM server

Login credentials for the ECOM server.

VMAX port groups to expose volumes managed by this backend

VMAX port groups that expose volumes managed by this back-end.

Serial number of the VMAX Array

Unique VMAX array serial number.

Pool name within a given array

Unique pool name within a given array.

FAST Policy name to be used

Name of the FAST Policy to be used. When specified, volumes managed by this back-end are managed under FAST control.

For more information on the EMC driver refer to the OpenStack documentation at http://docs.openstack.org/liberty/config-reference/content/emc-vmax-driver.html.

EqualLogic

EqualLogic drivers are included as a technology preview and are not supported.

Fujitsu ETERNUS DX

Connection Protocol

Select the protocol used to connect, either FibreChannel or iSCSI.

IP / Port for SMI-S

IP address and port of the ETERNUS SMI-S Server.

Username / Password for SMI-S

Login credentials for the ETERNUS SMI-S Server.

Snapshot (Thick/RAID Group) Pool Name

Storage pool (RAID group) in which the volumes are created. Make sure to have created that RAID group on the server in advance. If a RAID group that does not exist is specified, the RAID group is created by using unused disk drives. The RAID level is automatically determined by the ETERNUS DX Disk storage system.

Hitachi HUSVM

For information on configuring the Hitachi HUSVM back-end, refer to http://docs.openstack.org/newton/config-reference/block-storage/drivers/hitachi-storage-volume-driver.html.

NetApp

Storage Family Type/Storage Protocol

SUSE OpenStack Cloud can use Data ONTAP either in 7-Mode or in Clustered Mode. In 7-Mode, vFiler will be configured; in Clustered Mode, vServer will be configured. The Storage Protocol can either be set to iSCSI or NFS. Choose the driver and the protocol your NetApp is licensed for.

Server host name

The management IP address for the 7-Mode storage controller or the cluster management IP address for the clustered Data ONTAP.

Transport Type

Transport protocol for communicating with the storage controller or clustered Data ONTAP. Supported protocols are HTTP and HTTPS. Choose the protocol your NetApp is licensed for.

Server port

The port to use for communication. Port 80 is usually used for HTTP, 443 for HTTPS.

User Name/Password for Accessing NetApp

Login credentials.

The vFiler Unit Name for provisioning OpenStack volumes (netapp_vfiler)

The vFiler unit to be used for provisioning of OpenStack volumes. This setting is only available in 7-Mode.

Restrict provisioning on iSCSI to these volumes (netapp_volume_list)

Provide a comma-separated list of volume names to be used for provisioning. This setting is only available when using iSCSI as the storage protocol.

NFS

List of NFS Exports

A list of accessible physical file systems on an NFS server.

Mount Options

Additional options for mounting NFS exports.

RADOS (Ceph)

Use Ceph Deployed by Crowbar

Select true if you have deployed Ceph with SUSE OpenStack Cloud. In case you are using an external Ceph cluster (see Section 9.4.4, “Using an Externally Managed Ceph Cluster” for setup instructions), select false.

RADOS pool for Cinder volumes

Name of the pool used to store the Cinder volumes.

RADOS user (Set Only if Using CephX authentication)

Ceph user name.

VMware Parameters

vCenter Host/IP Address

Host name or IP address of the vCenter server.

vCenter Username / vCenter Password

vCenter login credentials.

vCenter Cluster Names for Volumes

Provide a comma-separated list of cluster names.

Folder for Volumes

Path to the directory used to store the Cinder volumes.

CA file for verifying the vCenter certificate

Absolute path to the vCenter CA certificate.

vCenter SSL Certificate is insecure (for instance, self-signed)

Default value: false (the CA truststore is used for verification). Set this option to true when using self-signed certificates to disable certificate checks. This setting is for testing purposes only and must not be used in production environments!

Local file

Volume File Name

Absolute path to the file to be used for block storage.

Maximum File Size (GB)

Maximum size of the volume file. Make sure not to overcommit the size, since it will result in data loss.

Name of Volume

Specify a name for the Cinder volume.

Note
Note: Using Local File for Block Storage

Using a file for block storage is not recommended for production systems for performance and data security reasons.

Other driver

Lets you manually pick and configure a driver. Only use this option for testing purposes; it is not supported.

The Cinder Barclamp
Figure 10.13: The Cinder Barclamp

The Cinder component consists of two different roles:

cinder-controller

The Cinder controller provides the scheduler and the API. Installing cinder-controller on a Control Node is recommended.

cinder-volume

The virtual block storage service. It can be installed on a Control Node. However, it is recommended to deploy it on one or more dedicated nodes supplied with sufficient networking capacity to handle the increase in network traffic.

The Cinder Barclamp: Node Deployment Example
Figure 10.14: The Cinder Barclamp: Node Deployment Example

10.8.1 HA Setup for Cinder

While the cinder-controller role can be deployed on a cluster, deploying cinder-volume on a cluster is not supported. Therefore it is generally recommended to deploy cinder-volume on several nodes—this ensures the service continues to be available even when a node fails. Combined with Ceph or a network storage solution, such a setup minimizes the potential downtime.

If using Ceph or network storage is not an option, you need to set up a shared storage directory (for example, with NFS), mount it on all cinder-volume nodes, and use the Local File back-end with this shared directory. Using Raw Devices is not an option, since local disks cannot be shared.
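A minimal sketch of such a setup (the NFS server, export path, and mount point are placeholders; the mount must exist on every node running cinder-volume):

root # mount -t nfs nfs.example.com:/exports/cloud/cinder /var/lib/cinder/shared

The Volume File Name of the Local File back-end would then point to a file below this shared mount point.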

10.9 Deploying Neutron

Neutron provides network connectivity between interface devices managed by other OpenStack components (most likely Nova). The service works by enabling users to create their own networks and then attach interfaces to them.

Neutron must be deployed on a Control Node. You first need to choose a core plug-in—ml2 or vmware. Depending on your choice, more configuration options will become available.

The vmware option lets you use an existing VMWare NSX installation. Using this plug-in is not a prerequisite for the VMWare vSphere hypervisor support. However, it is needed when you want security groups to be supported on VMWare Compute Nodes. For all other scenarios, choose ml2.

The only global option that can be configured is SSL Support. Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, refer to SSL Support: Protocol for configuration details.

ml2 (Modular Layer 2)

Modular Layer 2 Mechanism Drivers

Select which mechanism driver(s) shall be enabled for the ml2 plugin. It is possible to select more than one driver by holding the Ctrl key while clicking. Choices are:

openvswitch Supports GRE, VLAN and VXLAN networks (to be configured via the Modular Layer 2 type drivers setting).

linuxbridge Supports VLANs only. Requires specifying the Maximum Number of VLANs.

cisco_nexus Enables Neutron to dynamically adjust the VLAN settings of the ports of an existing Cisco Nexus switch when instances are launched. It also requires openvswitch, which will automatically be selected. With Modular Layer 2 type drivers, vlan must be added. This option also requires you to specify the Cisco Switch Credentials. See Appendix H, Using Cisco Nexus Switches with Neutron for details.

Use Distributed Virtual Router Setup

With the default setup, all intra-Compute Node traffic flows through the network Control Node. The same is true for all traffic from floating IPs. In large deployments the network Control Node can therefore quickly become a bottleneck. When this option is set to true, network agents will be installed on all compute nodes. This will de-centralize the network traffic, since Compute Nodes will be able to directly talk to each other. Distributed Virtual Routers (DVR) require the openvswitch driver and will not work with the linuxbridge driver. HyperV Compute Nodes will not be supported—network traffic for these nodes will be routed via the Control Node on which neutron-network is deployed. For details on DVR refer to https://wiki.openstack.org/wiki/Neutron/DVR.

Modular Layer 2 Type Drivers

This option is only available when having chosen the openvswitch or the cisco_nexus mechanism drivers. Options are vlan, gre and vxlan. It is possible to select more than one driver by holding the Ctrl key while clicking.

When multiple type drivers are enabled, you need to select the Default Type Driver for Provider Network that will be used for newly created provider networks. This also includes the nova_fixed network, which is created when applying the Neutron proposal. When manually creating provider networks with the neutron command, the default can be overwritten with the --provider:network_type type switch. You also need to set a Default Type Driver for Tenant Network. It is not possible to change this default when manually creating tenant networks with the neutron command. The non-default type driver will only be used as a fallback.
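For example, a provider network that overrides the default type driver could be created as follows (network name, physical network, and VLAN ID are example values):

neutron net-create my-provider-net --provider:network_type vlan --provider:physical_network physnet1 --provider:segmentation_id 500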

Depending on your choice of the type driver, more configuration options become available.

gre Having chosen gre, you also need to specify the start and end of the tunnel ID range.

vlan The option vlan requires you to specify the Maximum number of VLANs.

vxlan Having chosen vxlan, you also need to specify the start and end of the VNI range.

Important
Important: Drivers for HyperV Compute Nodes

HyperV Compute Nodes do not support gre and vxlan. If your environment includes a heterogeneous mix of Compute Nodes including HyperV nodes, make sure to select vlan. This can be done in addition to the other drivers.

Important
Important: Drivers for the VMware Compute Node

Neutron must not be deployed with the openvswitch with gre plug-in. See Appendix G, VMware vSphere Installation Instructions for details.

z/VM Configuration

xCAT Host/IP Address

Host name or IP address of the xCAT Management Node.

xCAT Username/Password

xCAT login credentials.

rdev list for physnet1 vswitch uplink (if available)

List of rdev addresses that should be connected to this vswitch.

xCAT IP Address on Management Network

IP address of the xCAT management interface.

Net Mask of Management Network

Net mask of the xCAT management interface.

vmware

This plug-in requires configuring access to the VMWare NSX service.

VMWare NSX User Name/Password

Login credentials for the VMWare NSX server. The user needs to have administrator permissions on the NSX server.

VMWare NSX Controllers

Enter the IP address and the port number (IP-ADDRESS:PORT) of the controller API endpoint. If the port number is omitted, port 443 will be used. You may also enter multiple API endpoints (comma-separated), provided they all belong to the same controller cluster. When multiple API endpoints are specified, the plugin will load balance requests on the various API endpoints.

UUID of the NSX Transport Zone/Gateway Service

The UUIDs for the transport zone and the gateway service can be obtained from the NSX server. They will be used when networks are created.

The Neutron Barclamp
Figure 10.15: The Neutron Barclamp

The Neutron component consists of two different roles:

neutron-server

neutron-server provides the scheduler and the API. It needs to be installed on a Control Node.

neutron-network

This service runs the various agents that manage the network traffic of all the cloud instances. It acts as the DHCP and DNS server and as a gateway for all cloud instances. It is recommended to deploy this role on a dedicated node supplied with sufficient network capacity.

Figure 10.16: The Neutron Barclamp

10.9.1 Using Infoblox IPAM Plug-in

In the Neutron barclamp, you can enable support for the infoblox IPAM plug-in and configure it. For configuration, the infoblox section contains the subsections grids and grid_defaults.

grids

This subsection must contain at least one entry. For each entry, the following parameters are required:

  • admin_user_name

  • admin_password

  • grid_master_host

  • grid_master_name

  • data_center_name

You can also add multiple entries to the grids section. However, the upstream infoblox agent currently supports only a single grid.

grid_defaults

This subsection contains the default settings that are used for each grid (unless you have configured specific settings within the grids section).

For detailed information on all infoblox-related configuration settings, see https://github.com/openstack/networking-infoblox/blob/master/doc/source/installation.rst.

Currently, all configuration options for infoblox are only available in the raw mode of the Neutron barclamp. To enable support for the infoblox IPAM plug-in and configure it, proceed as follows:

  1. Edit the Neutron barclamp proposal or create a new one.

  2. Click Raw and search for the following section:

    "use_infoblox": false,
  3. To enable support for the infoblox IPAM plug-in, change this entry to:

    "use_infoblox": true,
  4. In the grids section, configure at least one grid by replacing the example values for each parameter with real values (see the sketch after this procedure).

  5. If you need specific settings for a grid, add some of the parameters from the grid_defaults section to the respective grid entry and adjust their values.

    Otherwise Crowbar applies the default setting to each grid when you save the barclamp proposal.

  6. Save your changes and apply them.
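
As an illustration only, a configured grids entry in the raw view might look similar to the following sketch. All values shown here are placeholders; use the credentials and names of your own infoblox grid:

    "grids": [
      {
        "admin_user_name": "admin",
        "admin_password": "PASSWORD",
        "grid_master_host": "192.168.124.20",
        "grid_master_name": "grid-master.example.com",
        "data_center_name": "dc1"
      }
    ],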

10.9.2 HA Setup for Neutron

Neutron can be made highly available by deploying neutron-server and neutron-network on a cluster. While neutron-server may be deployed on a cluster shared with other services, it is strongly recommended to use a dedicated cluster solely for the neutron-network role.

10.10 Deploying Nova

Nova provides key services for managing SUSE OpenStack Cloud and sets up the Compute Nodes. SUSE OpenStack Cloud currently supports KVM, Xen, Microsoft Hyper-V, and VMware vSphere. The unsupported QEMU option is included to enable test setups with virtualized nodes. The following attributes can be configured for Nova:

Scheduler Options: Virtual RAM to Physical RAM allocation ratio

Set the overcommit ratio for RAM for instances on the Compute Nodes. A ratio of 1.0 means no overcommitment. Changing this value is not recommended.

Scheduler Options: Virtual CPU to Physical CPU allocation ratio

Set the overcommit ratio for CPUs for instances on the Compute Nodes. A ratio of 1.0 means no overcommitment.

Scheduler Options: Virtual Disk to Physical Disk allocation ratio

Set the overcommit ratio for virtual disks for instances on the Compute Nodes. A ratio of 1.0 means no overcommitment.

Scheduler Options: Reserved Memory for Nova Compute hosts (MB)

Amount of reserved host memory that is not used for allocating VMs by Nova Compute.

Live Migration Support: Enable Libvirt Migration

Allows moving KVM and Xen instances to a different Compute Node running the same hypervisor (cross-hypervisor migrations are not supported). This is useful when a Compute Node needs to be shut down or rebooted for maintenance, or when its load is very high. Instances can be moved while running (live migration).

Warning
Warning: Libvirt Migration and Security

Enabling the libvirt migration option will open a TCP port on the Compute Nodes that allows access to all instances from all machines in the admin network. Ensure that only authorized machines have access to the admin network when enabling this option.
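
With libvirt migration enabled, an administrator can, for example, trigger a live migration with the nova client. The instance and host names below are placeholders:

nova live-migration INSTANCE_NAME TARGET_COMPUTE_NODE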

Live Migration Support: Setup Shared Storage

Sets up a directory /var/lib/nova/instances on the Control Node on which nova-controller is running. This directory is exported via NFS to all compute nodes and will host a copy of the root disk of all Xen instances. This setup is required for live migration of Xen instances (but not for KVM) and is used to provide central handling of instance data. Enabling this option is only recommended if Xen live migration is required—otherwise it should be disabled.

Warning
Warning: Do Not Set Up Shared Storage When Instances Are Running

Setting up shared storage in a SUSE OpenStack Cloud where instances are running will result in connection losses to all running instances. It is strongly recommended to set up shared storage when deploying SUSE OpenStack Cloud. If it needs to be done at a later stage, make sure to shut down all instances prior to the change.
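
After shared storage has been set up, a quick way to verify it is, for example, to check the NFS export and the resulting mount from a Compute Node. The host name below is a placeholder:

showmount -e control-node.example.com
mount | grep /var/lib/nova/instances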

KVM Options: Enable Kernel Samepage Merging

Kernel SamePage Merging (KSM) is a Linux Kernel feature which merges identical memory pages from multiple running processes into one memory region. Enabling it optimizes memory usage on the Compute Nodes when using the KVM hypervisor at the cost of slightly increasing CPU usage.

z/VM Configuration: xCAT Host/IP Address

IP address of the xCAT management interface.

z/VM Configuration: xCAT Username/Password

xCAT login credentials.

z/VM Configuration: z/VM disk pool for ephemeral disks

Name of the disk pool for ephemeral disks.

z/VM Configuration: z/VM disk pool type for ephemeral disks (ECKD or FBA)

Choose disk pool type for ephemeral disks.

z/VM Configuration: z/VM Host Managed By xCAT MN

z/VM host managed by xCAT Management Node.

z/VM Configuration: User profile for creating a z/VM userid

User profile to be used for creating a z/VM userid.

z/VM Configuration: Default zFCP SCSI Disk Pool

Default zFCP SCSI disk pool.

z/VM Configuration: The xCAT MN node name

Name of the xCAT Management Node.

z/VM Configuration: The xCAT MN node public SSH key

Public SSH key of the xCAT Management Node.

VMware vCenter Settings

Setting up VMware support is described in a separate section. See Appendix G, VMware vSphere Installation Instructions.

SSL Support: Protocol

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, refer to SSL Support: Protocol for configuration details.

VNC Settings: Keymap

Change the default VNC keymap for instances. By default, en-us is used. Enter the value in lowercase, either as a two-character code (such as de or jp) or, if applicable, as a five-character code (such as de-ch or en-uk).

VNC Settings: NoVNC Protocol

After having started an instance, you can display its VNC console in the OpenStack Dashboard (Horizon) in the browser using the noVNC implementation. By default this connection is not encrypted and can potentially be eavesdropped on.

Enable encrypted communication for noVNC by choosing HTTPS and providing the locations for the certificate key pair files.

Logging: Verbose Logging

Shows debugging output in the log files when set to true.

Note
Note: Custom Vendor Data for Instances

You can pass custom vendor data to all VMs via Nova's metadata server. For example, information about a custom SMT server can be used by the SUSE guest images to automatically configure the repositories for the guest.

  1. To pass custom vendor data, switch to the Raw view of the Nova barclamp.

  2. Search for the following section:

    "metadata": {
      "vendordata": {
        "json": "{}"
      }
    }
  3. As value of the json entry, enter valid JSON data. For example:

    "metadata": {
      "vendordata": {
        "json": "{\"CUSTOM_KEY\": \"CUSTOM_VALUE\"}"
      }
    }

    The string needs to be escaped because the barclamp file is in JSON format, too.

Use the following command to access the custom vendor data from inside a VM:

curl -s http://METADATA_SERVER/openstack/latest/vendor_data.json

The IP address of the metadata server is always the same from within a VM. For more details, see https://www.suse.com/communities/blog/vms-get-access-metadata-neutron/.
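
From inside an instance, the call typically uses the fixed metadata address and returns the JSON configured above, for example:

curl -s http://169.254.169.254/openstack/latest/vendor_data.json
{"CUSTOM_KEY": "CUSTOM_VALUE"}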

Figure 10.17: The Nova Barclamp

The Nova component consists of the following roles:

nova-controller

Distributing and scheduling the instances is managed by the nova-controller. It also provides networking and messaging services. nova-controller needs to be installed on a Control Node.

nova-compute-kvm / nova-compute-qemu / nova-compute-vmware / nova-compute-xen / nova-compute-zvm

Provides the hypervisors (KVM, QEMU, VMware vSphere, Xen, and z/VM) and tools needed to manage the instances. Only one hypervisor can be deployed on a single compute node. To use different hypervisors in your cloud, deploy different hypervisors to different Compute Nodes. A nova-compute-* role needs to be installed on every Compute Node. However, not all hypervisors need to be deployed.

Each image that will be made available in SUSE OpenStack Cloud to start an instance is bound to a hypervisor. Each hypervisor can be deployed on multiple Compute Nodes (except for the VMware vSphere role, see below). In a multi-hypervisor deployment, make sure to deploy the nova-compute-* roles in a way that enough compute power is available for each hypervisor.

Note
Note: Re-assigning Hypervisors

Existing nova-compute-* nodes can be changed in a production SUSE OpenStack Cloud without service interruption. You need to evacuate the node, re-assign a new nova-compute role via the Nova barclamp and Apply the change. nova-compute-vmware can only be deployed on a single node.

Important
Important: Deploying VMware vSphere (vmware)

VMware vSphere is not supported natively by SUSE OpenStack Cloud—it rather delegates requests to an existing vCenter. It requires preparations at the vCenter and post-install adjustments of the Compute Node. See Appendix G, VMware vSphere Installation Instructions for instructions. nova-compute-vmware can only be deployed on a single Compute Node.

Figure 10.18: The Nova Barclamp: Node Deployment Example with Two KVM Nodes

10.10.1 HA Setup for Nova

Making nova-controller highly available requires no special configuration—it is sufficient to deploy it on a cluster.

To enable High Availability for Compute Nodes, deploy the following roles to one or more clusters with remote nodes:

  • nova-compute-kvm

  • nova-compute-qemu

  • nova-compute-xen

The cluster to which you deploy the roles above can be completely independent of the one to which the role nova-controller is deployed.

Tip
Tip: Shared Storage

It is recommended to use shared storage for the /var/lib/nova/instances directory. If an external NFS server is used, enable the following option in the Nova barclamp proposal: Shared Storage for Nova instances has been manually configured.

10.11 Deploying Horizon (OpenStack Dashboard)

The last component that needs to be deployed is Horizon, the OpenStack Dashboard. It provides a Web interface for users to start and stop instances and for administrators to manage users, groups, roles, etc. Horizon should be installed on a Control Node. To make Horizon highly available, deploy it on a cluster.

The following attributes can be configured:

Session Timeout

Timeout (in minutes) after which a user is logged out automatically. The default value is set to four hours (240 minutes).

Note
Note: Timeouts Larger than Four Hours

Every Horizon session requires a valid Keystone token. These tokens also have a lifetime of four hours (14400 seconds). Setting the Horizon session timeout to a value larger than 240 will therefore have no effect, and you will receive a warning when applying the barclamp.

To successfully apply a timeout larger than four hours, you first need to adjust the Keystone token expiration accordingly. To do so, open the Keystone barclamp in Raw mode and adjust the value of the key token_expiration. Note that the value has to be provided in seconds. When the change is successfully applied, you can adjust the Horizon session timeout (in minutes). Note that extending the Keystone token expiration may cause scalability issues in large and very busy SUSE OpenStack Cloud installations.
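
For example, to allow an eight-hour Horizon session, the Keystone token lifetime could be raised as sketched below (the value is in seconds; the key's exact position in the raw attributes may vary), after which the Horizon Session Timeout can be set to 480 minutes:

    "token_expiration": 28800,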

User Password Validation: Regular expression used for password validation

Specify a regular expression with which to check the password. The default expression (.{8,}) tests for a minimum length of 8 characters. The string you enter is interpreted as a Python regular expression (see http://docs.python.org/2.7/library/re.html#module-re for a reference).
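
For example, a stricter expression than the default could additionally require at least one letter and one digit while keeping a minimum length of ten characters. This is only a sketch; adjust it to your own password policy and remember to also adjust the validation error message accordingly:

^(?=.*[A-Za-z])(?=.*[0-9]).{10,}$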

User Password Validation: Text to display if the password does not pass validation

Error message that will be displayed in case the password validation fails.

SSL Support: Protocol

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, you have two choices: you can either Generate (self-signed) certificates or provide the locations of the certificate key pair files and, optionally, the certificate chain file. Using self-signed certificates is for testing purposes only and should never be done in production environments!

Figure 10.19: The Horizon Barclamp

10.11.1 HA Setup for Horizon

Making Horizon highly available requires no special configuration—it is sufficient to deploy it on a cluster.

10.12 Deploying Heat (Optional)

Heat is a template-based orchestration engine that enables you to, for example, start workloads requiring multiple servers or to automatically restart instances if needed. It also brings auto-scaling to SUSE OpenStack Cloud by automatically starting additional instances if certain criteria are met. For more information about Heat refer to the OpenStack documentation at http://docs.openstack.org/developer/heat/.

Heat should be deployed on a Control Node. To make Heat highly available, deploy it on a cluster.

The following attributes can be configured for Heat:

Verbose Logging

Shows debugging output in the log files when set to true.

SSL Support: Protocol

Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, refer to SSL Support: Protocol for configuration details.

Figure 10.20: The Heat Barclamp

10.12.1 Enabling Identity Trusts Authorization (Optional)

Heat uses Keystone Trusts to delegate a subset of user roles to the Heat engine for deferred operations (see Steve Hardy's blog for details). It can either delegate all user roles or only those specified in the trusts_delegated_roles setting. Consequently, all roles listed in trusts_delegated_roles need to be assigned to a user, otherwise the user will not be able to use Heat.

The recommended setting for trusts_delegated_roles is Member, since this is the default role most users are likely to have. This is also the default setting when installing SUSE OpenStack Cloud from scratch.

On installations where this setting is introduced through an upgrade, trusts_delegated_roles will be set to heat_stack_owner. This is a conservative choice to prevent breakage in situations where unprivileged users may already have been assigned the heat_stack_owner role to enable them to use Heat but lack the Member role. As long as you can ensure that all users who have the heat_stack_owner role also have the Member role, it is both safe and recommended to change trusts_delegated_roles to Member, since the latter is the default role assigned by our hybrid LDAP back-end among others.

To view or change the trusts_delegated_role setting you need to open the Heat barclamp and click Raw in the Attributes section. Search for the trusts_delegated_roles setting and modify the list of roles as desired.
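
In the raw view, the setting looks similar to the following sketch, shown here with the recommended Member role:

    "trusts_delegated_roles": [
      "Member"
    ],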

Figure 10.21: The Heat Barclamp: Raw Mode
Warning
Warning: Empty Value

An empty value for trusts_delegated_roles will delegate all of a user's roles to Heat. This may create a security risk for users who are assigned privileged roles, such as admin, because these privileged roles will also be delegated to the Heat engine when these users create Heat stacks.

10.12.2 HA Setup for Heat

Making Heat highly available requires no special configuration—it is sufficient to deploy it on a cluster.

10.13 Deploying Ceilometer (Optional)

Ceilometer collects CPU and networking data from SUSE OpenStack Cloud. This data can be used by a billing system to enable customer billing. Deploying Ceilometer is optional.

For more information about Ceilometer refer to the OpenStack documentation at http://docs.openstack.org/developer/ceilometer/.

Important
Important: Ceilometer Restrictions

As of SUSE OpenStack Cloud 7 data measuring is only supported for KVM, Xen and Windows instances. Other hypervisors and SUSE OpenStack Cloud features such as object or block storage will not be measured.

The following attributes can be configured for Ceilometer:

Interval used for CPU/disk/network/other meter updates (in seconds)

Specify an interval in seconds after which Ceilometer performs an update of the specified meter.

Evaluation interval for threshold alarms (in seconds)

Set the interval after which to check whether to raise an alarm because a threshold has been exceeded. For performance reasons, do not set a value lower than the default (60s).

Use MongoDB instead of standard database

Ceilometer collects a large amount of data, which is written to a database. In a production system it is recommended to use a separate database for Ceilometer rather than the standard database that is also used by the other SUSE OpenStack Cloud components. MongoDB is optimized for writing large amounts of data. As of SUSE OpenStack Cloud 7, MongoDB is only included as a technology preview and is not supported.

How long are metering/event samples kept in the database (in days)

Specify how long to keep the data. -1 means that samples are kept in the database forever.

Verbose Logging

Shows debugging output in the log files when set to true.

Figure 10.22: The Ceilometer Barclamp

The Ceilometer component consists of the following roles:

ceilometer-server

The Ceilometer API server role. This role needs to be deployed on a Control Node. Ceilometer collects approximately 200 bytes of data per hour and instance. Unless you have a very large number of instances, there is no need to install it on a dedicated node.

ceilometer-polling

The polling agent listens to the message bus to collect data. It needs to be deployed on a Control Node. It can be deployed on the same node as ceilometer-server.

ceilometer-agent

The compute agents collect data from the compute nodes. They need to be deployed on all KVM and Xen compute nodes in your cloud (other hypervisors are currently not supported).

ceilometer-swift-proxy-middleware

An agent collecting data from the Swift nodes. This role needs to be deployed on the same node as swift-proxy.

Figure 10.23: The Ceilometer Barclamp: Node Deployment

10.13.1 HA Setup for Ceilometer

Making Ceilometer highly available requires no special configuration—it is sufficient to deploy the roles ceilometer-server and ceilometer-polling on a cluster. The cluster needs to consist of an odd number of nodes, otherwise the Ceilometer deployment will fail.

10.14 Deploying Manila

Manila provides coordinated access to shared or distributed file systems, similar to what Cinder does for block storage. These file systems can be shared between instances in SUSE OpenStack Cloud.

Manila uses different back-ends. As of SUSE OpenStack Cloud 7, supported back-ends include Hitachi HNAS, NetApp Driver, and CephFS. Two more back-end options, Generic Driver and Other Driver, are available for testing purposes and are not supported.

Note
Note: Limitations for CephFS Backend

Manila uses some CephFS features that are currently not supported by the SUSE Linux Enterprise 12 SP2 CephFS kernel client:

  • RADOS namespaces

  • MDS path restrictions

  • Quotas

As a result, to access CephFS shares provisioned by Manila, you must use ceph-fuse. For details, see http://docs.openstack.org/developer/manila/devref/cephfs_native_driver.html.
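
As a rough sketch, mounting such a share from a client instance with ceph-fuse could look similar to the following. The monitor address, client ID, keyring file, and share path are placeholders that depend on the share's export location and the Ceph credentials available in the instance:

ceph-fuse /mnt/share -m 192.168.124.30:6789 --id manila-client \
  -k ./manila-client.keyring -r /volumes/_nogroup/SHARE_UUID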

When first opening the Manila barclamp, the default proposal Generic Driver is already available for configuration. To replace it, first delete it by clicking the trashcan icon and then choose a different back-end in the section Add new Manila Backend. Select a Type of Share and—optionally—provide a Name for Backend. Activate the back-end with Add Backend. Note that at least one back-end must be configured.

The attributes that can be set to configure Manila depend on the back-end:

Back-end: Generic

The generic driver is included as a technology preview and is not supported.

Hitachi HNAS

Specify which EVS this backend is assigned to

Provide the name of the Enterprise Virtual Server that the selected back-end is assigned to.

Specify IP for mounting shares

IP address for mounting shares.

Specify file-system name for creating shares

Provide a file-system name for creating shares.

HNAS management interface IP

IP address of the HNAS management interface for communication between Manila controller and HNAS.

HNAS username Base64 String

HNAS user name as a Base64-encoded string, required to perform tasks such as creating file-systems and network interfaces.

HNAS user password

HNAS user password. Required only if private key is not provided.

RSA/DSA private key

RSA/DSA private key necessary for connecting to HNAS. Required only if password is not provided.

The time to wait for stalled HNAS jobs before aborting

Time in seconds to wait before aborting stalled HNAS jobs.

Back-end: Netapp

Name of the Virtual Storage Server (vserver)

Host name of the Virtual Storage Server.

Server Host Name

The name or IP address for the storage controller or the cluster.

Server Port

The port to use for communication. Port 80 is usually used for HTTP, 443 for HTTPS.

User name/Password for Accessing NetApp

Login credentials.

Transport Type

Transport protocol for communicating with the storage controller or cluster. Supported protocols are HTTP and HTTPS. Choose the protocol your NetApp is licensed for.

Back-end: CephFS

Use Ceph deployed by Crowbar

Set to true to use Ceph deployed with Crowbar.

Back-end: Manual

Lets you manually pick and configure a driver. Only use this option for testing purposes, it is not supported.

Figure 10.24: The Manila Barclamp

The Manila component consists of two different roles:

manila-server

The Manila server provides the scheduler and the API. Installing it on a Control Node is recommended.

manila-share

The shared storage service. It can be installed on a Control Node, but it is recommended to deploy it on one or more dedicated nodes supplied with sufficient disk space and networking capacity, since it will generate a lot of network traffic.

Figure 10.25: The Manila Barclamp: Node Deployment Example

10.14.1 HA Setup for Manila

While the manila-server role can be deployed on a cluster, deploying manila-share on a cluster is not supported. Therefore it is generally recommended to deploy manila-share on several nodes—this ensures the service continues to be available even when a node fails.

10.15 Deploying Trove (Optional)

Trove is a Database-as-a-Service solution for SUSE OpenStack Cloud. It provides database instances that can be used by cloud instances. With Trove deployed, SUSE OpenStack Cloud users no longer need to deploy and maintain their own database applications. For more information about Trove, refer to the OpenStack documentation at http://docs.openstack.org/developer/trove/.

Important
Important: Technology Preview

Trove is only included as a technology preview and not supported.

Trove should be deployed on a dedicated Control Node.

The following attributes can be configured for Trove:

Enable Trove Volume Support

When enabled, Trove will use a Cinder volume to store the data.

Logging: Verbose

Increases the amount of information that is written to the log files when set to true.

Logging: Debug

Shows debugging output in the log files when set to true.

Figure 10.26: The Trove Barclamp

10.15.1 HA Setup for Trove

An HA Setup for Trove is currently not supported.

10.16 Deploying Tempest (Optional)

Tempest is an integration test suite for SUSE OpenStack Cloud written in Python. It contains multiple integration tests for validating your SUSE OpenStack Cloud deployment. For more information about Tempest refer to the OpenStack documentation at http://docs.openstack.org/developer/tempest/.

Important
Important: Technology Preview

Tempest is only included as a technology preview and not supported.

Tempest may be used for testing whether the intended setup will run without problems. It should not be used in a production environment.

Tempest should be deployed on a Control Node.

The following attributes can be configured for Tempest:

Choose User name / Password

Credentials for a regular user. If the user does not exist, it will be created.

Choose Tenant

Tenant to be used by Tempest. If it does not exist, it will be created. It is safe to stick with the default value.

Choose Tempest Admin User name/Password

Credentials for an admin user. If the user does not exist, it will be created.

Figure 10.27: The Tempest Barclamp
Tip
Tip: Running Tests

To run tests with Tempest, log in to the Control Node on which Tempest was deployed. Change into the directory /var/lib/openstack-tempest-test. To get an overview of available commands, run:

./run_tempest.sh --help

To serially invoke a subset of all tests (the gating smoketests) to help validate the working functionality of your local cloud instance, run the following command. It will save the output to a log file tempest_CURRENT_DATE.log.

./run_tempest.sh --no-virtual-env --serial --smoke 2>&1 \
| tee "tempest_$(date +%Y-%m-%d_%H%M%S).log"

10.16.1 HA Setup for Tempest

Tempest cannot be made highly available.

10.17 Deploying Magnum (Optional)

Magnum is an OpenStack project that offers container orchestration engines for deploying and managing containers as first-class resources in OpenStack.

For more information about Magnum, see the OpenStack documentation at http://docs.openstack.org/developer/magnum/.

For information on how to deploy a Kubernetes cluster (either from command line or from the Horizon Dashboard), see the Supplement to Administrator Guide and End User Guide. It is available from https://www.suse.com/documentation/cloud.

The following attributes can be configured for Magnum:

Logging: Verbose

Increases the amount of information that is written to the log files when set to true.

Logging: Debug

Shows debugging output in the log files when set to true.

Trustee Domain: Domain Name

Domain name to use for creating trustee for bays.

Certificate Manager: Plugin

To store certificates, either use the Barbican OpenStack service, a local directory (Local), or the Magnum database (x509keypair).

Figure 10.28: The Magnum Barclamp

The Magnum barclamp consists of a single role: magnum-server. It can be deployed either on a Control Node or on a cluster—see Section 10.17.1, “HA Setup for Magnum”. When deploying the role onto a Control Node, additional RAM is required for the Magnum server. It is recommended to only deploy the role to a Control Node that has 16 GB RAM.

10.17.1 HA Setup for Magnum

Making Magnum highly available requires no special configuration. It is sufficient to deploy it on a cluster.

10.18 Deploying Barbican (Optional)

Barbican is a component designed for storing secrets in a secure and standardized manner protected by Keystone authentication. Secrets include SSL certificates and passwords used by various OpenStack components.

Barbican settings can be configured in Raw mode only. To do this, open the Barbican barclamp Attribute configuration in Raw mode.

Figure 10.29: The Barbican Barclamp: Raw Mode

When configuring Barbican, pay particular attention to the following settings:

  • bind_host Bind host for the Barbican API service

  • bind_port Bind port for the Barbican API service

  • processes Number of API processes to run in Apache

  • ssl Enable or disable SSL

  • threads Number of API worker threads

  • debug Enable or disable debug logging

  • enable_keystone_listener Enable or disable the Keystone listener services

  • kek An encryption key (fixed-length 32-byte Base64-encoded value) for Barbican's simple_crypto plugin. If left unspecified, the key will be generated automatically. See the sketch after this list for one way to create such a key manually.

    Note
    Note: Existing Encryption Key

    If you plan to restore and use the existing Barbican database after a full reinstall (including a complete wipe of the Crowbar node), make sure to save the specified encryption key beforehand. You will need to provide it after the full reinstall in order to access the data in the restored Barbican database.
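
If you prefer to provide your own key instead of relying on automatic generation, a suitable 32-byte Base64-encoded value can, for example, be created with OpenSSL and pasted into the kek field:

openssl rand -base64 32

Store the generated key safely, as described in the note above.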

10.18.1 HA Setup for Barbican

To make Barbican highly available, assign the barbican-controller role to the Controller Cluster.

10.19 Deploying Sahara

Sahara provides users with simple means to provision data processing frameworks (such as Hadoop, Spark, and Storm) on OpenStack. This is accomplished by specifying configuration parameters such as the framework version, cluster topology, node hardware details, etc.

Logging: Verbose

Set to true to increase the amount of information written to the log files.

Figure 10.30: The Sahara Barclamp

10.19.1 HA Setup for Sahara

Making Sahara highly available requires no special configuration. It is sufficient to deploy it on a cluster.

10.20 How to Proceed

With a successful deployment of the OpenStack Dashboard, the SUSE OpenStack Cloud installation is finished. To be able to test your setup by starting an instance, one last step remains to be done—uploading an image to the Glance component. Refer to the Supplement to Administrator Guide and End User Guide, chapter Manage images, for instructions. Images for SUSE OpenStack Cloud can be built in SUSE Studio. Refer to the Supplement to Administrator Guide and End User Guide, section Building Images with SUSE Studio.

Now you can hand over to the cloud administrator to set up users, roles, flavors, etc.—refer to the Administrator Guide for details. The default credentials for the OpenStack Dashboard are user name admin and password crowbar.
