After the nodes are installed and configured, you can start deploying the OpenStack components to finalize the installation. The components need to be deployed in a given order because they depend on one another. The component for an HA setup is the only exception to this rule—it can be set up at any time. However, when deploying SUSE OpenStack Cloud from scratch, it is recommended to deploy the Pacemaker proposal(s) first. Deployment of all components is done from the Crowbar Web interface through recipes, so-called “barclamps”.
The components controlling the cloud (including storage management and control components) need to be installed on the Control Node(s) (refer to Section 1.2, “The Control Node(s)” for more information). However, you must not use your Control Node(s) as a compute node or as a storage host for Swift or Ceph. The compute services, the Swift storage role, and all Ceph components must not be installed on the Control Node(s); these components need to be installed on dedicated nodes.
When deploying an HA setup, the controller nodes are replaced by one or more controller clusters consisting of at least two nodes (three are recommended). Setting up three separate clusters—for data, services, and networking—is recommended. See Section 2.6, “High Availability” for more information on requirements and recommendations for an HA setup.
The OpenStack components need to be deployed in the following order. For general instructions on how to edit and deploy barclamp proposals, refer to Section 8.3, “Deploying Barclamp Proposals”. Deploying Pacemaker (only needed for an HA setup), Swift, and Ceph is optional; all other components must be deployed.
To make the SUSE OpenStack Cloud controller functions and the Compute Nodes highly available, set up one or more clusters by deploying Pacemaker (see Section 2.6, “High Availability” for details). Since it is possible (and recommended) to deploy more than one cluster, a separate proposal needs to be created for each cluster.
Deploying Pacemaker is optional. In case you do not want to deploy it, skip this section and start the node deployment by deploying the database as described in Section 10.2, “Deploying the Database”.
To set up a cluster, at least two nodes are required. If setting up a cluster for storage with replicated storage via DRBD (for example for a cluster for the database and RabbitMQ), exactly two nodes are required. For all other setups an odd number of nodes with a minimum of three nodes is strongly recommended. See Section 2.6.5, “Cluster Requirements and Recommendations” for more information.
To create a proposal, go to › and click for the Pacemaker barclamp. A drop-down box opens where you can enter a name and a description for the proposal. Click to open the configuration screen for the proposal.
The name you enter for the proposal will be used to generate host names for the virtual IPs of HAProxy. By default, the names follow this scheme:
cluster-PROPOSAL_NAME.FQDN (for the internal name)
public.cluster-PROPOSAL_NAME.FQDN (for the public name)
For example, when PROPOSAL_NAME is set to data, this results in the following names:
cluster-data.example.com
public.cluster-data.example.com
For requirements regarding SSL encryption and certificates, see Section 2.3, “SSL Encryption”.
The following options are configurable in the Pacemaker configuration screen:
Choose a technology used for cluster communication. You can choose between multicast (sending a message to multiple destinations) and unicast (sending a message to a single destination). By default, unicast is used.
Whenever communication fails between one or more nodes and the rest of the cluster, a “cluster partition” occurs. The nodes of a cluster are split into partitions but are still active. They can only communicate with nodes in the same partition and are unaware of the separated nodes. The cluster partition that has the majority of nodes is defined to have “quorum”.
This configuration option defines what to do with the cluster partition(s) that do not have the quorum. See http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_basics_global.html, section Option no-quorum-policy for details.
The recommended setting is to choose . However, is enforced for two-node clusters to ensure that the remaining node continues to operate normally in case the other node fails. For clusters using shared resources, choosing may be used to ensure that these resources continue to be available.
“Misbehaving” nodes in a cluster are shut down to prevent them from causing trouble. This mechanism is called STONITH (“Shoot the other node in the head”). STONITH can be configured in a variety of ways; refer to http://www.suse.com/documentation/sle-ha-12/book_sleha/data/cha_ha_fencing.html for details. The following configuration options exist:
STONITH will not be configured when deploying the barclamp. It needs to be configured manually as described in http://www.suse.com/documentation/sle-ha-12/book_sleha/data/cha_ha_fencing.html. For experts only.
Using this option automatically sets up STONITH with data received from the IPMI barclamp. This requires IPMI to be configured for all cluster nodes, which is done by default when deploying SUSE OpenStack Cloud. To check or change the IPMI deployment, go to
› › › . Also make sure the option is set to on this barclamp.
To configure STONITH with the IPMI data, all STONITH devices must support IPMI. Problems with this setup may occur with IPMI implementations that are not strictly standards compliant. In this case it is recommended to set up STONITH with STONITH block devices (SBD).
This option requires you to manually set up shared storage and a watchdog on the cluster nodes before applying the proposal. To do so, proceed as follows:
Prepare the shared storage. The path to the shared storage device must be persistent and consistent across all nodes in the cluster. The SBD device must not use host-based RAID or cLVM2, and must not reside on a DRBD instance.
Install the package sbd
on all cluster nodes.
Initialize the SBD device by running the following command. Make sure to replace /dev/SBD with the path to the shared storage device.
sbd -d /dev/SBD create
Refer to http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_storage_protect_fencing.html#pro_ha_storage_protect_sbd_create for details.
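To verify that the device has been initialized correctly, you can dump its metadata and list the allocated slots. This is only a quick sanity check; /dev/SBD again stands for the path to your shared storage device:
sbd -d /dev/SBD dump
sbd -d /dev/SBD list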
In , specify the respective kernel module to be used. Find the most commonly used watchdog drivers in the following table:
Hardware | Driver |
---|---|
HP | hpwdt |
Dell, Fujitsu, Lenovo (Intel TCO) | iTCO_wdt |
VM on z/VM on IBM mainframe | vmwatchdog |
Xen VM (DomU) | xen_wdt |
Generic | softdog |
If your hardware is not listed above, either ask your hardware vendor
for the right name or check the following directory for a list of choices:
/lib/modules/KERNEL_VERSION/kernel/drivers/watchdog
.
Alternatively, list the drivers that have been installed with your kernel version:
root # rpm -ql kernel-VERSION | grep watchdog
If the nodes need different watchdog modules, leave the text box empty.
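To check on a cluster node whether a watchdog driver is loaded and which device node it provides, a quick test may look as follows (softdog serves only as an example here; use the driver that matches your hardware):
modprobe softdog                 # load the generic software watchdog (example only)
lsmod | grep -e wdt -e softdog   # list loaded watchdog modules
ls -l /dev/watchdog              # device node provided by the watchdog driver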
After the shared storage has been set up, specify the path using the “by-id” notation (/dev/disk/by-id/DEVICE). It is possible to specify multiple paths as a comma-separated list.
Deploying the barclamp will automatically complete the SBD setup on the cluster nodes by starting the SBD daemon and configuring the fencing resource.
All nodes will use the identical configuration. Specify the
to use and enter for the agent.
To get a list of STONITH devices which are supported by the High Availability Extension, run the following command on an already installed cluster node:
stonith -L
The list of parameters depends on the respective agent. To view a list of parameters, use the following command:
stonith -t agent -n
All nodes in the cluster use the same
, but can be configured with different parameters. This setup is, for example, required when nodes are in different chassis and therefore need different ILO parameters.
To get a list of STONITH devices which are supported by the High Availability Extension, run the following command on an already installed cluster node:
stonith -L
The list of parameters depends on the respective agent. To view a list of parameters, use the following command:
stonith -t agent -n
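For example, assuming the external/ipmi agent is the one you plan to use, its parameters can be listed with:
stonith -t external/ipmi -n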
Use this setting for completely virtualized test installations. This option is not supported.
With STONITH, Pacemaker clusters with two nodes may sometimes hit an issue known as STONITH deathmatch where each node kills the other one, resulting in both nodes rebooting all the time. Another similar issue in Pacemaker clusters is the fencing loop, where a reboot caused by STONITH will not be enough to fix a node and it will be fenced again and again.
This setting can be used to limit these issues. When set to , a node that has not been properly shut down or rebooted will not start the services for Pacemaker on boot. Instead, the node will wait for action from the SUSE OpenStack Cloud operator. When set to , the services for Pacemaker will always be started on boot. The value is used to have the most appropriate value automatically picked: it will be for two-node clusters (to avoid STONITH deathmatches), and otherwise.
When a node boots but does not start corosync because of this setting, the node's status is shown as “Problem” (red dot). To make this node usable again, see Section J.2, “Re-adding the Node to the Cluster”.
Get notified of cluster node failures via e-mail. If set to , you need to specify which to use, a prefix for the mails' subject, and sender and recipient addresses. Note that the SMTP server must be accessible by the cluster nodes.
Set up DRBD for replicated storage on the cluster. This option requires a two-node cluster with a spare hard disk for each node. The disks should have a minimum size of 100 GB. Using DRBD is recommended for making the database and RabbitMQ highly available. For other clusters, set this option to .
The public name is the host name that will be used instead of the generated public name (see Important: Proposal Name) for the public virtual IP of HAProxy (this is the case when registering public endpoints, for example). Any name specified here needs to be resolved by a name server placed outside of the SUSE OpenStack Cloud network.
The Pacemaker component consists of the following roles. Deploying the role is optional:
Deploy this role on all nodes that should become members of the cluster.
Deploying this role is optional. If deployed, sets up the Hawk
Web interface which lets you monitor the status of the cluster. The
Web interface can be accessed via
https://IP-ADDRESS:7630
.
Note that the GUI on SUSE OpenStack Cloud can only be used to monitor the
cluster status and not to change its configuration.
may be deployed on at least one cluster node. It is recommended to deploy it on all cluster nodes.
Deploy this role on all nodes that should become members of the
Compute Nodes cluster. They will run as Pacemaker remote nodes that are
controlled by the cluster, but do not affect quorum. Instead of the
complete cluster stack, only the pacemaker-remote
component will be installed on these nodes.
After a cluster has been successfully deployed, it is listed under in the section and can be used for role deployment like a regular node.
When using clusters, roles from other barclamps must never be deployed to single nodes that are already part of a cluster. The only exceptions to this rule are the following roles:
cinder-volume
swift-proxy + swift-dispersion
swift-ring-compute
swift-storage
After a role has been deployed on a cluster, its services are managed by the HA software. You must never manually start or stop an HA-managed service (or configure it to start on boot). Services may only be started or stopped by using the cluster management tools Hawk or the crm shell. See http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_basics_resources.html for more information.
To check whether all cluster resources are running, either use the Hawk Web interface or run the command crm_mon -1r. If this is not the case, clean up the respective resource with crm resource cleanup RESOURCE, so it gets respawned.
Also make sure that STONITH correctly works before continuing with the SUSE OpenStack Cloud setup. This is especially important when having chosen a STONITH configuration requiring manual setup. To test if STONITH works, log in to a node on the cluster and run the following command:
pkill -9 corosync
In case STONITH is correctly configured, the node will reboot.
Before testing on a production cluster, plan a maintenance window in case issues should arise.
The very first service that needs to be deployed is the database. The database component uses PostgreSQL and is used by all other components. It must be installed on a Control Node. The database can be made highly available by deploying it on a cluster.
The only attribute you may change is the maximum number of database connections ( ). The default value should usually work; only change it for large deployments in case the log files show database connection failures.
To make the database highly available, deploy it on a cluster instead of a single Control Node. This also requires shared storage for the cluster that hosts the database data. To achieve this, either set up a cluster with DRBD support (see Section 10.1, “Deploying Pacemaker (Optional, HA Setup Only)”) or use “traditional” shared storage like an NFS share. It is recommended to use a dedicated cluster to deploy the database together with RabbitMQ, since both components require shared storage.
Deploying the database on a cluster makes an additional section available in the section of the proposal. Configure the in this section. There are two options:
This option requires a two-node cluster that has been set up with DRBD. Also specify the . The suggested value of 50 GB should be sufficient.
Use a shared block device or an NFS mount for shared storage. As with the mount command, you need to specify three attributes: the mount point, the file system type, and the mount options. See man 8 mount for details on file system types and mount options.
To use an NFS share as shared storage for a cluster, export it on the NFS server with the following options:
rw,async,insecure,no_subtree_check,no_root_squash
In case mounting the NFS share on the cluster nodes fails, change the export options and re-apply the proposal. However, before doing so, you need to clean up the respective resources on the cluster nodes as described in http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_crm.html#sec_ha_manual_config_cleanup.
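A minimal /etc/exports entry on the NFS server could look like the following sketch; the export path and the network address are placeholders that need to be adapted to your environment:
# /etc/exports on the NFS server (example path and network)
/exports/cloud/db   192.168.124.0/24(rw,async,insecure,no_subtree_check,no_root_squash)
After editing the file, re-export the directories with exportfs -r.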
The shared NFS directory that is used for the PostgreSQL database needs
to be owned by the same user ID and group ID as the
postgres
user on the HA
database cluster.
To get the IDs, log in to one of the HA database cluster machines and issue the following commands:
id -u postgres
getent group postgres | cut -d: -f3
The first command returns the numeric user ID, the second one the numeric group ID. Now log in to the NFS server and change the ownership of the shared NFS directory, for example:
chown UID.GID /exports/cloud/db
Replace UID and GID by the respective numeric values retrieved above.
When re-deploying SUSE OpenStack Cloud and reusing a shared storage hosting database files from a previous installation, the installation may fail because the old database is being used. Always delete the old database from the shared storage before re-deploying SUSE OpenStack Cloud.
The RabbitMQ messaging system enables services to communicate with the other nodes via Advanced Message Queue Protocol (AMQP). Deploying it is mandatory. RabbitMQ needs to be installed on a Control Node. RabbitMQ can be made highly available by deploying it on a cluster. It is recommended not to change the default values of the proposal's attributes.
Name of the default virtual host to be created and used by the
RabbitMQ server (default_vhost
configuration option
in rabbitmq.config
).
Port the RabbitMQ server listens on (tcp_listeners
configuration option in rabbitmq.config
).
RabbitMQ default user (default_user
configuration
option in rabbitmq.config
).
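After the proposal has been applied, these values can be checked on the Control Node (or on the active cluster node) with the rabbitmqctl tool, for example:
rabbitmqctl status        # node status, including the configured listener port
rabbitmqctl list_vhosts   # should include the configured virtual host
rabbitmqctl list_users    # should include the configured default user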
To make RabbitMQ highly available, deploy it on a cluster instead of a single Control Node. This also requires shared storage for the cluster that hosts the RabbitMQ data. To achieve this, either set up a cluster with DRBD support (see Section 10.1, “Deploying Pacemaker (Optional, HA Setup Only)”) or use “traditional” shared storage like an NFS share. It is recommended to use a dedicated cluster to deploy RabbitMQ together with the database, since both components require shared storage.
Deploying RabbitMQ on a cluster makes an additional section available in the section of the proposal. Configure the in this section. There are two options:
This option requires a two-node cluster that has been set up with DRBD. Also specify the . The suggested value of 50 GB should be sufficient.
Use a shared block device or an NFS mount for shared storage. As with the mount command, you need to specify three attributes: (the mount point), the file system type, and the mount options.
An NFS share for use as shared storage for a cluster needs to be exported on the NFS server with the following options:
rw,async,insecure,no_subtree_check,no_root_squash
In case mounting the NFS share on the cluster nodes fails, change the export options and re-apply the proposal. Before doing so, however, you need to clean up the respective resources on the cluster nodes as described in http://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_config_crm.html#sec_ha_manual_config_cleanup.
Keystone is another core component that is used by all other OpenStack components. It provides authentication and authorization services. Keystone needs to be installed on a Control Node. Keystone can be made highly available by deploying it on a cluster. You can configure the following parameters of this barclamp:
Set the algorithm used by Keystone to generate the tokens. You can
choose between Fernet
(the default) or
UUID
. Note that for performance and security reasons
it is strongly recommended to use Fernet
.
Allows you to customize the region name that Crowbar is going to manage.
Tenant for the users. Do not change the default value of
openstack
.
User name and password for the administrator.
Specify whether a regular user should be created automatically. Not recommended in most scenarios, especially in an LDAP environment.
User name and password for the regular user. Both the regular user and the administrator accounts can be used to log in to the SUSE OpenStack Cloud Dashboard. However, only the administrator can manage Keystone users and access.
When sticking with the default value, public communication will not be encrypted. Choose to use SSL for encryption. See Section 2.3, “SSL Encryption” for background information and Section 9.4.6, “Enabling SSL” for installation instructions. The following additional configuration options will become available when choosing :
When set to true
, self-signed certificates are
automatically generated and copied to the correct locations. This
setting is for testing purposes only and should never be used in
production environments!
Location of the certificate key pair files.
Set this option to true
when using self-signed
certificates to disable certificate checks. This setting is for
testing purposes only and should never be used in production
environments!
Specify the absolute path to the CA certificate here. This option can
only be changed if true
.
By default Keystone uses an SQL database back-end store for authentication. LDAP can be used in addition to the default or as an alternative. Using LDAP requires the Control Node on which Keystone is installed to be able to contact the LDAP server. See Appendix D, The Network Barclamp Template File for instructions on how to adjust the network setup.
To configure LDAP as an alternative to the SQL database back-end store, you need to open the Keystone barclamp
configuration in mode. Search for the section.
Adjust the settings according to your LDAP setup. The default configuration does not include all attributes that can be set. A complete list of options is available in the file /opt/dell/chef/data_bags/crowbar/bc-template-keystone.schema on the Administration Server (search for ldap). There are three types of attribute values: strings (for example, the value for url: "ldap://localhost"), bools (for example, the value for use_dumb_member: false), and integers (for example, the value for page_size: 0). Attribute names and string values always need to be quoted with double quotes; bool and integer values must not be quoted.
In a production environment, it is recommended to use LDAP over SSL (ldaps), otherwise passwords will be transferred as plain text.
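Before applying the proposal it is useful to confirm that the Control Node can actually reach the LDAP server. A simple test with ldapsearch might look like this (server URI, bind DN, and base DN are placeholders for your environment):
ldapsearch -x -H ldaps://ldap.example.com \
  -D "cn=lookup,cn=example,cn=com" -W \
  -b "cn=example,cn=com" "(objectClass=inetOrgPerson)" cn mail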
The Hybrid LDAP back-end allows you to create a mixed LDAP/SQL setup. This is especially useful when an existing LDAP server should be used to authenticate cloud users. The system and service users (administrators and operators) needed to set up and manage SUSE OpenStack Cloud will be managed in the local SQL database. Assignments of users to projects and roles will also be stored in the local database.
In this scenario the LDAP server can be read-only for the SUSE OpenStack Cloud installation and no schema modifications are required. Therefore, managing LDAP users from within SUSE OpenStack Cloud is not possible; this needs to be done using your established tools for LDAP user management. All users that are created with the Keystone command line client or the Horizon Web UI will be stored in the local SQL database.
To configure hybrid authentication, proceed as follows:
Open the Keystone barclamp configuration in mode (see Figure 10.7, “The Keystone Barclamp: Raw Mode”).
Set the identity and assignment drivers to the hybrid back-end:
"identity": { "driver": "hybrid" }, "assignment": { "driver": "hybrid" }
Adjust the settings according to your LDAP setup in the section. Since the LDAP back-end is only used to acquire information on users (but not on projects and roles), only the user-related settings matter here. See the following example of settings that may need to be adjusted:
"ldap": {
  "url": "ldap://localhost",
  "user": "",
  "password": "",
  "suffix": "cn=example,cn=com",
  "user_tree_dn": "cn=example,cn=com",
  "query_scope": "one",
  "user_id_attribute": "cn",
  "user_enabled_emulation_dn": "",
  "tls_req_cert": "demand",
  "user_attribute_ignore": "tenant_id,tenants",
  "user_objectclass": "inetOrgPerson",
  "user_mail_attribute": "mail",
  "user_filter": "",
  "use_tls": false,
  "user_allow_create": false,
  "user_pass_attribute": "userPassword",
  "user_enabled_attribute": "enabled",
  "user_enabled_default": "True",
  "page_size": 0,
  "tls_cacertdir": "",
  "tls_cacertfile": "",
  "user_enabled_mask": 0,
  "user_allow_update": true,
  "group_allow_update": true,
  "user_enabled_emulation": false,
  "user_name_attribute": "cn",
  "group_ad_nesting": false,
  "use_pool": true,
  "pool_size": 10,
  "pool_retry_max": 3
}
To access the LDAP server anonymously, leave the values for and empty.
Making Keystone highly available requires no special configuration—it is sufficient to deploy it on a cluster.
Ceph adds a redundant block storage service to SUSE OpenStack Cloud. It lets you store persistent devices that can be mounted from instances. It offers high data security by storing the data redundantly on a pool of Storage Nodes. Therefore Ceph needs to be installed on at least three dedicated nodes. All Ceph nodes need to run SLES 12. For detailed information on how to provide the required repositories, refer to Section 5.2, “Update and Pool Repositories”. If deploying the optional Calamari server for Ceph management and monitoring, an additional node is required.
For more information on the Ceph project, visit http://ceph.com/.
SUSE Enterprise Storage is a robust cluster solution based on Ceph. Refer to https://www.suse.com/documentation/ses-4/ for more information.
The Ceph barclamp has the following configuration options:
Choose whether to only use the first available disk or all available
disks. “Available disks” are all disks currently not used
by the system. Note that one disk (usually
/dev/sda
) of every block storage node is already
used for the operating system and is not available for Ceph.
For data security, stored objects are not only stored once, but redundantly. Specify the number of copies that should be stored for each object with this setting. The number includes the object itself. If, for example, you want the object plus two copies, specify 3.
Choose whether to encrypt public communication ( ) or not ( ). If choosing , you need to specify the locations for the certificate key pair files. Note that both trusted and self-signed certificates are accepted.
Calamari is a Web front-end for managing and analyzing the Ceph cluster. Provide administrator credentials (user name, password, e-mail address) in this section. When Ceph has been deployed, you can log in to Calamari with these credentials. Deploying Calamari is optional—leave these text boxes empty when not deploying Calamari.
The Ceph component consists of the following different roles:
We do not recommend sharing one node between multiple Ceph components at the same
time. For example, running a ceph-mon
service on the same
node as ceph-osd
degrades the performance of all services
hosted on the shared node. This also applies to other services, such as
Calamari or RADOS Gateway.
The virtual block storage service. Install this role on all dedicated Ceph Storage Nodes (at least three).
Cluster monitor daemon for managing the storage map of the Ceph cluster. needs to be installed on at least three nodes.
Sets up the Calamari Web interface which lets you manage the Ceph cluster. Deploying it is optional. The Web interface can be accessed via http://IP-ADDRESS/ (where IP-ADDRESS is the address of the machine where is deployed).
The HTTP REST gateway for Ceph. Visit https://www.suse.com/documentation/ses-4/book_storage_admin/data/cha_ceph_gw.html for more detailed information.
If you need to set up more RADOS Gateways (and thus create a backup instance in case one RADOS Gateway node fails), set up RADOS Gateway on multiple nodes and put an HTTP load balancer in front of them. You can choose your preferred balancing solution, or use SUSE Linux Enterprise HA extension (refer to https://www.suse.com/documentation/sle-ha-12/).
The metadata server for the CephFS distributed file system. Install this
role on one to three nodes to enable CephFS. A file system named
cephfs
will automatically be created, along with
cephfs_metadata
and cephfs_data
pools.
Refer to https://www.suse.com/documentation/ses-3/book_storage_admin/data/cha_ceph_cephfs.html
for more details.
Never deploy on a node that runs non-Ceph OpenStack components. The only services that may be deployed together on a Ceph node are , and . However, we recommend running each Ceph service on a dedicated host for performance reasons. All Ceph nodes need to run SLES 12.
Ceph is HA-enabled by design, so there is no need for a special HA setup.
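After the barclamp has been applied, the overall cluster state, the replica count of the pools, and (if deployed) the CephFS file system can be checked from any Ceph node, for example (pool names depend on your setup):
ceph -s                  # overall cluster health and monitor/OSD status
ceph osd pool ls detail  # lists pools including their replica size
ceph fs ls               # shows the cephfs file system if ceph-mds was deployed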
Swift adds an object storage service to SUSE OpenStack Cloud that lets you store single files such as images or snapshots. It offers high data security by storing the data redundantly on a pool of Storage Nodes—therefore Swift needs to be installed on at least two dedicated nodes.
To be able to properly configure Swift it is important to understand how it places the data. Data is always stored redundantly within the hierarchy. The Swift hierarchy in SUSE OpenStack Cloud is formed out of zones, nodes, hard disks, and logical partitions. Zones are physically separated clusters, for example different server rooms each with its own power supply and network segment. A failure of one zone must not affect another zone. The next level in the hierarchy are the individual Swift storage nodes (on which has been deployed) followed by the hard disks. Logical partitions come last.
Swift automatically places three copies of each object on the highest hierarchy level possible. If three zones are available, each copy of the object will be placed in a different zone. In a one-zone setup with more than two nodes, the object copies will each be stored on a different node. In a one-zone setup with two nodes, the copies will be distributed on different hard disks. If no other hierarchy element fits, logical partitions are used.
The following attributes can be set to configure Swift:
Allows enabling public access to containers if set to
true
.
If set to true, a copy of the current version is archived each time an object is updated.
Number of zones (see above). If you do not have different independent
installations of storage nodes, set the number of zones to
1
.
Partition power. The number entered here is used to compute the number of logical partitions to be created in the cluster. The number you enter is used as a power of 2 (2^X).
It is recommended to use a minimum of 100 partitions per disk. To determine the partition power for your setup, do the following: multiply the number of disks from all Swift nodes by 100, then round up to the nearest power of two. Keep in mind that the first disk of each node is not used by Swift, but rather for the operating system.
Example: 10 Swift nodes with 5 hard disks each.
Four hard disks on each node are used for Swift, so there
is a total of forty disks. Multiplied by 100 gives 4000. The
nearest power of two, 4096, equals 2^12. So the partition power that
needs to be entered is 12
.
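The calculation from the example can also be scripted. The sketch below assumes the total Swift disk count is known and rounds up to the next power of two:
DISKS=40                   # 10 nodes x 4 Swift disks each
PARTS=$(( DISKS * 100 ))   # target of 100 partitions per disk -> 4000
POWER=0; N=1
while [ $N -lt $PARTS ]; do POWER=$(( POWER + 1 )); N=$(( N * 2 )); done
echo $POWER                # 12 (2^12 = 4096)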
Changing the number of logical partitions after Swift has been deployed is not supported. Therefore the value for the partition power should be calculated from the maximum number of partitions this cloud installation is likely going to need at any point in time.
This option sets the number of hours before a logical partition is
considered for relocation. 24
is the recommended
value.
The number of copies generated for each object. Set this value to
3
, the tested and recommended value.
Time (in seconds) after which to start a new replication process.
Shows debugging output in the log files when set to
true
.
Choose whether to encrypt public communication ( ) or not ( ). If choosing , you have two choices. You can either or provide the locations for the certificate key pair files. Using self-signed certificates is for testing purposes only and should never be used in production environments!
Apart from the general configuration described above, the Swift barclamp also lets you activate and configure . The features these middlewares provide can be used via the Swift command line client only. The Ratelimit and S3 middlewares provide the most interesting features; it is recommended to only enable further middleware for specific use cases.
Provides an S3 compatible API on top of Swift.
Enables serving container data as a static Web site with an index file and optional file listings. See http://docs.openstack.org/developer/swift/middleware.html#staticweb for details.
This middleware requires to be set to true.
Enables creating URLs to provide time-limited access to objects. See http://docs.openstack.org/developer/swift/middleware.html#tempurl for details.
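As a usage sketch (account, container, and object names are placeholders): first store a secret key in the account, then generate a signed URL that is valid for one hour with the swift client:
swift post -m "Temp-URL-Key:MYSECRET"
swift tempurl GET 3600 /v1/AUTH_account/container/object MYSECRET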
Enables uploading files to a container via a Web form. See http://docs.openstack.org/developer/swift/middleware.html#formpost for details.
Enables extracting tar files into a Swift account and deleting multiple objects or containers with a single request. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.bulk for details.
Allows interaction with the Swift API via Flash, Java, and Silverlight from an external network. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.crossdomain for details.
Translates container and account parts of a domain to path parameters
that the Swift proxy server understands. Can be used to
create short URLs that are easy to remember, for example by rewriting
home.tux.example.com/$ROOT/tux/home/myfile
to home.tux.example.com/myfile
.
See
http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.domain_remap
for details.
Ratelimit enables you to throttle resources such as requests per minute to provide denial of service protection. See http://docs.openstack.org/developer/swift/middleware.html#module-swift.common.middleware.ratelimit for details.
The Swift component consists of four different roles. Deploying is optional:
The virtual object storage service. Install this role on all dedicated Swift Storage Nodes (at least two), but not on any other node.
Never install the swift-storage service on a node that runs other OpenStack components.
The ring maintains the information about the location of objects, replicas, and devices. It can be compared to an index that is used by various OpenStack components to look up the physical location of objects. must only be installed on a single node; it is recommended to use a Control Node.
The Swift proxy server takes care of routing requests to Swift. Installing a single instance of on a Control Node is recommended. The role can be made highly available by deploying it on a cluster.
Deploying is optional. The Swift dispersion tools can be used to test the health of the cluster. They create a set of dummy objects (using 1% of the total space available). The state of these objects can be queried using the swift-dispersion-report query. needs to be installed on a Control Node.
Swift replicates by design, so there is no need for a special HA setup. Make sure to fulfill the requirements listed in Section 2.6.4.1, “Swift—Avoiding Points of Failure”.
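As a rough sketch of how the dispersion tools are used once swift-dispersion has been deployed: populate the dummy objects once, then query their state whenever needed:
swift-dispersion-populate   # creates the dummy containers and objects
swift-dispersion-report     # reports how many of their replicas are reachable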
Glance provides discovery, registration, and delivery services for virtual disk images. An image is needed to start an instance—it is its pre-installed root partition. All images you want to use in your cloud to boot instances from are provided by Glance. Glance must be deployed onto a Control Node. Glance can be made highly available by deploying it on a cluster.
There are a lot of options to configure Glance. The most important ones are explained below—for a complete reference refer to http://github.com/crowbar/crowbar/wiki/Glance--barclamp.
As of SUSE OpenStack Cloud 7, the Glance API v1 is no longer enabled by default. Instead, Glance API v2 is used by default.
If you need to re-enable API v1 for compatibility reasons:
Switch to the
view of the Glance barclamp.
Search for the enable_v1
entry and set it to
true
:
"enable_v1": true
In new installations, this entry is set to false
by default.
When upgrading from an older version of SUSE OpenStack Cloud it is set to
true
by default.
Apply your changes.
Choose whether to use Swift or Ceph ( ) to store the images. If you have deployed neither of these services, the images can alternatively be stored in an image file on the Control Node ( ). If you have deployed Swift or Ceph, it is recommended to use it for Glance as well.
If using VMware as a hypervisor, it is recommended to use it for storing images, too ( ). This will make starting VMware instances much faster.
Depending on the storage back-end, there are additional configuration options available:
Specify the directory to host the image file. The directory specified here can also be an NFS share. See Section 9.4.3, “Mounting NFS Shares on a Node” for more information.
Set the name of the container to use for the images in Swift.
If using a SUSE OpenStack Cloud internal Ceph setup, the user you specify here is created in case it does not exist. If using an external Ceph cluster, specify the user you have set up for Glance (see Section 9.4.4, “Using an Externally Managed Ceph Cluster” for more information).
If using a SUSE OpenStack Cloud internal Ceph setup, the pool you specify here is created in case it does not exist. If using an external Ceph cluster, specify the pool you have set up for Glance (see Section 9.4.4, “Using an Externally Managed Ceph Cluster” for more information).
Name or IP address of the vCenter server.
vCenter login credentials.
A comma-separated list of datastores specified in the format: DATACENTER_NAME:DATASTORE_NAME
Specify an absolute path here.
Choose whether to encrypt public communication ( ) or not ( ). If choosing , refer to SSL Support: Protocol for configuration details.
Enable and configure image caching in this section. By default, image caching is disabled. Learn more about Glance's caching feature at http://docs.openstack.org/developer/glance/cache.html.
Shows debugging output in the log files when set to .
Glance can be made highly available by deploying it on a cluster. It is strongly recommended to do the same for the image data. The recommended way to achieve this is to use Swift or an external Ceph cluster for the image repository. If using a directory on the node instead (file storage back-end), you should set up shared storage on the cluster for it.
Cinder, the successor of Nova Volume, provides volume block storage. It adds persistent storage to an instance that will persist until deleted (contrary to ephemeral volumes that will only persist while the instance is running).
Cinder can provide volume storage by using different back-ends such as local file, one or more local disks, Ceph (RADOS), VMware or network storage solutions from EMC, EqualLogic, Fujitsu or NetApp. Since SUSE OpenStack Cloud 5, Cinder supports using several back-ends simultaneously. It is also possible to deploy the same network storage back-end multiple times and therefore use different installations at the same time.
The attributes that can be set to configure Cinder depend on the back-end. The only general option is (see SSL Support: Protocol for configuration details).
When first opening the Cinder barclamp, the default proposal— —is already available for configuration. To optionally add a back-end, go to the section and choose a from the drop-down box. Optionally, specify the . This is recommended when deploying the same volume type more than once. Existing back-end configurations (including the default one) can be deleted by clicking the trashcan icon if no longer needed. Note that at least one back-end must be configured.
Choose whether to only use the first available disk or all available disks. “Available disks” are all disks currently not used by the system. Note that one disk (usually /dev/sda) of every block storage node is already used for the operating system and is not available for Cinder.
Specify a name for the Cinder volume.
IP address and Port of the ECOM server.
Login credentials for the ECOM server.
VMAX port groups that expose volumes managed by this back-end.
Unique VMAX array serial number.
Unique pool name within a given array.
Name of the FAST Policy to be used. When specified, volumes managed by this back-end are managed as under FAST control.
For more information on the EMC driver refer to the OpenStack documentation at http://docs.openstack.org/liberty/config-reference/content/emc-vmax-driver.html.
EqualLogic drivers are included as a technology preview and are not supported.
Select the protocol used to connect, either
or .IP address and port of the ETERNUS SMI-S Server.
Login credentials for the ETERNUS SMI-S Server.
Storage pool (RAID group) in which the volumes are created. Make sure to have created that RAID group on the server in advance. If a RAID group that does not exist is specified, the RAID group is created by using unused disk drives. The RAID level is automatically determined by the ETERNUS DX Disk storage system.
For information on configuring the Hitachi HUSVM back-end, refer to http://docs.openstack.org/newton/config-reference/block-storage/drivers/hitachi-storage-volume-driver.html.
SUSE OpenStack Cloud can either use “Data ONTAP” in or in . In vFiler will be configured, in vServer will be configured. The can either be set to or . Choose the driver and the protocol your NetApp is licensed for.
The management IP address for the 7-Mode storage controller or the cluster management IP address for the clustered Data ONTAP.
Transport protocol for communicating with the storage controller or clustered Data ONTAP. Supported protocols are HTTP and HTTPS. Choose the protocol your NetApp is licensed for.
The port to use for communication. Port 80 is usually used for HTTP, 443 for HTTPS.
Login credentials.
The vFiler unit to be used for provisioning of OpenStack volumes. This setting is only available in .
Provide a list of comma-separated volumes names to be used for provisioning. This setting is only available when using iSCSI as storage protocol.
A list of accessible physical file systems on an NFS server.
Additional options for mounting NFS exports.
Select if you have deployed Ceph with SUSE OpenStack Cloud. In case you are using an external Ceph cluster (see Section 9.4.4, “Using an Externally Managed Ceph Cluster” for setup instructions), select .
Name of the pool used to store the Cinder volumes.
Ceph user name.
Host name or IP address of the vCenter server.
vCenter login credentials.
Provide a comma-separated list of cluster names.
Path to the directory used to store the Cinder volumes.
Absolute path to the vCenter CA certificate.
Default value: false
(the CA truststore is used for verification).
Set this option to true
when using self-signed certificates to disable
certificate checks. This setting is for testing purposes only and must not be used in
production environments!
Absolute path to the file to be used for block storage.
Maximum size of the volume file. Make sure not to overcommit the size, since it will result in data loss.
Specify a name for the Cinder volume.
Using a file for block storage is not recommended for production systems, for performance and data security reasons.
Lets you manually pick and configure a driver. Only use this option for testing purposes; it is not supported.
The Cinder component consists of two different roles:
The Cinder controller provides the scheduler and the API. Installing on a Control Node is recommended.
The virtual block storage service. It can be installed on a Control Node. However, it is recommended to deploy it on one or more dedicated nodes supplied with sufficient networking capacity to handle the increase in network traffic.
While the role can be deployed on a cluster, deploying on a cluster is not supported. Therefore it is generally recommended to deploy on several nodes—this ensures the service continues to be available even when a node fails. In combination with Ceph or a network storage solution, such a setup minimizes the potential downtime.
If using Ceph or a network storage is not an option, you need to set up a shared storage directory (for example, with NFS), mount it on all cinder-volume nodes, and use the
back-end with this shared directory. Using is not an option, since local disks cannot be shared.Neutron provides network connectivity between interface devices managed by other OpenStack components (most likely Nova). The service works by enabling users to create their own networks and then attach interfaces to them.
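A minimal sketch of such a shared directory setup, assuming an NFS server nfs.example.com exporting /exports/cloud/cinder and an arbitrarily chosen local mount point, could look like this on every node with the cinder-volume role:
mkdir -p /var/lib/cinder/shared
echo "nfs.example.com:/exports/cloud/cinder /var/lib/cinder/shared nfs defaults 0 0" >> /etc/fstab
mount /var/lib/cinder/shared
The shared mount point can then be referenced in the respective back-end configuration of the Cinder barclamp.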
Neutron must be deployed on a Control Node. You first need to choose a core plug-in—
or . Depending on your choice, more configuration options will become available.The
option lets you use an existing VMWare NSX installation. Using this plugin is not a prerequisite for the VMWare vSphere hypervisor support. However, it is needed when wanting to have security groups supported on VMWare compute nodes. For all other scenarios, choose .The only global option that can be configured is SSL Support: Protocol for configuration details.
. Choose whether to encrypt public communication ( ) or not ( ). If choosing , refer toSelect which mechanism driver(s) shall be enabled for the ml2 plugin. It is possible to select more than one driver by holding the Ctrl key while clicking. Choices are:
Supports GRE, VLAN, and VXLAN networks (to be configured via the setting).
Supports VLANs only. Requires to specify the . .
Enables Neutron to dynamically adjust the VLAN settings of the ports of an existing Cisco Nexus switch when instances are launched. It also requires . which will automatically be selected. With , must be added. This option also requires to specify the . See Appendix H, Using Cisco Nexus Switches with Neutron for details.
With the default setup, all intra-Compute Node traffic flows through the network Control Node. The same is true for all traffic from floating IPs. In large deployments the network Control Node can therefore quickly become a bottleneck. When this option is set to , network agents will be installed on all Compute Nodes. This will de-centralize the network traffic, since Compute Nodes will be able to directly “talk” to each other. Distributed Virtual Routers (DVR) require the driver and will not work with the driver. HyperV Compute Nodes will not be supported—network traffic for these nodes will be routed via the Control Node on which is deployed. For details on DVR refer to https://wiki.openstack.org/wiki/Neutron/DVR.
This option is only available when having chosen the or the mechanism drivers. Options are , and . It is possible to select more than one driver by holding the Ctrl key while clicking.
When multiple type drivers are enabled, you need to select the default type driver for the nova_fixed network, which will be created when applying the Neutron proposal. When manually creating provider networks with the neutron command, the default can be overwritten with the --provider:network_type TYPE switch. You will also need to set a . It is not possible to change this default when manually creating tenant networks with the neutron command. The non-default type driver will only be used as a fallback.
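For illustration, manually creating a provider network with an explicit type using the neutron client could look like the following sketch (network name, physical network, and segmentation ID are placeholders):
neutron net-create my-provider-net --provider:network_type vlan \
  --provider:physical_network physnet1 --provider:segmentation_id 500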
Depending on your choice of the type driver, more configuration options become available.
Having chosen , you also need to specify the start and end of the tunnel ID range.
The option requires you to specify the .
Having chosen , you also need to specify the start and end of the VNI range.
HyperV Compute Nodes do not support
and . If your environment includes a heterogeneous mix of Compute Nodes including HyperV nodes, make sure to select . This can be done in addition to the other drivers.
Neutron must not be deployed with the openvswitch with
gre
plug-in. See Appendix G, VMware vSphere Installation Instructions
for details.
Host name or IP address of the xCAT Management Node.
xCAT login credentials.
List of rdev addresses that should be connected to this vswitch.
IP address of the xCAT management interface.
Net mask of the xCAT management interface.
This plug-in requires configuring access to the VMWare NSX service.
Login credentials for the VMWare NSX server. The user needs to have administrator permissions on the NSX server.
Enter the IP address and the port number (IP-ADDRESS:PORT) of the controller API endpoint. If the port number is omitted, port 443 will be used. You may also enter multiple API endpoints (comma-separated), provided they all belong to the same controller cluster. When multiple API endpoints are specified, the plugin will load balance requests on the various API endpoints.
The UUIDs for the transport zone and the gateway service can be obtained from the NSX server. They will be used when networks are created.
The Neutron component consists of two different roles:
provides the scheduler and the API. It needs to be installed on a Control Node.
This service runs the various agents that manage the network traffic of all the cloud instances. It acts as the DHCP and DNS server and as a gateway for all cloud instances. It is recommended to deploy this role on a dedicated node supplied with sufficient network capacity.
In the Neutron barclamp, you can enable support for the
infoblox IPAM plug-in and configure it. For configuration, the
infoblox
section contains the subsections
grids
and grid_defaults
.
This subsection must contain at least one entry. For each entry, the following parameters are required:
admin_user_name
admin_password
grid_master_host
grid_master_name
data_center_name
You can also add multiple entries to the grids
section. However, the upstream infoblox agent only supports a single grid
currently.
This subsection contains the default settings that are used for each
grid (unless you have configured specific settings within the
grids
section).
For detailed information on all infoblox-related configuration settings, see https://github.com/openstack/networking-infoblox/blob/master/doc/source/installation.rst.
Currently, all configuration options for infoblox are only available in
the raw
mode of the Neutron barclamp. To enable support
for the infoblox IPAM plug-in and configure it, proceed as follows:
the Neutron barclamp proposal or create a new one.
Click and search for the following section:
"use_infoblox": false,
To enable support for the infoblox IPAM plug-in, change this entry to:
"use_infoblox": true,
In the grids
section, configure at least one grid
by replacing the example values for each parameter with real values.
If you need specific settings for a grid, add some of the parameters
from the grid_defaults
section to the respective grid entry
and adjust their values.
Otherwise Crowbar applies the default setting to each grid when you save the barclamp proposal.
Save your changes and apply them.
Neutron can be made highly available by deploying and on a cluster. While may be deployed on a cluster shared with other services, it is strongly recommended to use a dedicated cluster solely for the role.
Nova provides key services for managing SUSE OpenStack Cloud and sets up the Compute Nodes. SUSE OpenStack Cloud currently supports KVM, Xen, Microsoft Hyper-V, and VMware vSphere. The unsupported QEMU option is included to enable test setups with virtualized nodes. The following attributes can be configured for Nova:
Set the “overcommit ratio” for RAM for instances on
the Compute Nodes. A ratio of 1.0
means no
overcommitment. Changing this value is not recommended.
Set the “overcommit ratio” for CPUs for instances on
the Compute Nodes. A ratio of 1.0
means no
overcommitment.
Set the “overcommit ratio” for virtual disks for instances on
the Compute Nodes. A ratio of 1.0
means no
overcommitment.
Amount of reserved host memory that is not used for allocating VMs by Nova Compute.
Allows moving KVM and Xen instances to a different Compute Node running the same hypervisor (cross-hypervisor migrations are not supported). This is useful when a Compute Node needs to be shut down or rebooted for maintenance or when the load of the Compute Node is very high. Instances can be moved while running (live migration).
Enabling the libvirt migration option will open a TCP port on the Compute Nodes that allows access to all instances from all machines in the admin network. Ensure that only authorized machines have access to the admin network when enabling this option.
Sets up a directory /var/lib/nova/instances
on the
Control Node on which is
running. This directory is exported via NFS to all compute nodes and
will host a copy of the root disk of all Xen
instances. This setup is required for live migration of Xen
instances (but not for KVM) and is used to provide central
handling of instance data. Enabling this option is only recommended if
Xen live migration is required—otherwise it should be disabled.
Setting up shared storage in a SUSE OpenStack Cloud where instances are running will result in connection losses to all running instances. It is strongly recommended to set up shared storage when deploying SUSE OpenStack Cloud. If it needs to be done at a later stage, make sure to shut down all instances prior to the change.
Kernel SamePage Merging (KSM) is a Linux Kernel feature which merges identical memory pages from multiple running processes into one memory region. Enabling it optimizes memory usage on the Compute Nodes when using the KVM hypervisor at the cost of slightly increasing CPU usage.
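Whether KSM is active on a KVM Compute Node and how much memory it currently merges can be checked via sysfs, for example:
cat /sys/kernel/mm/ksm/run            # 1 means KSM is running
cat /sys/kernel/mm/ksm/pages_sharing  # number of currently shared (merged) pages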
IP address of the xCAT management interface.
xCAT login credentials.
Name of the disk pool for ephemeral disks.
Choose disk pool type for ephemeral disks.
z/VM host managed by xCAT Management Node.
User profile to be used for creating a z/VM userid.
Default zFCP SCSI disk pool.
Name of the xCAT Management Node.
Public SSH key of the xCAT Management Node.
Setting up VMware support is described in a separate section. See Appendix G, VMware vSphere Installation Instructions.
Choose whether to encrypt public communication ( ) or not ( ). If choosing , refer to SSL Support: Protocol for configuration details.
Change the default VNC keymap for instances. By default,
en-us
is used. Enter the value in lowercase,
either as a two character code (such as de
or
jp
) or, as a five character code such
as de-ch
or en-uk
, if applicable.
After having started an instance you can display its VNC console in the OpenStack Dashboard (Horizon) via the browser using the noVNC implementation. By default this connection is not encrypted and can potentially be eavesdropped.
Enable encrypted communication for noVNC by choosing and providing the locations for the certificate key pair files.
Shows debugging output in the log files when set to .
You can pass custom vendor data to all VMs via Nova's metadata server. For example, information about a custom SMT server can be used by the SUSE guest images to automatically configure the repositories for the guest.
To pass custom vendor data, switch to the view of the Nova barclamp.
Search for the following section:
"metadata": { "vendordata": { "json": "{}" } }
As value of the json
entry, enter valid JSON data. For
example:
"metadata": { "vendordata": { "json": "{\"CUSTOM_KEY\": \"CUSTOM_VALUE\"}" } }
The string needs to be escaped because the barclamp file is in JSON format, too.
Use the following command to access the custom vendor data from inside a VM:
curl -s http://METADATA_SERVER/openstack/latest/vendor_data.json
The IP address of the metadata server is always the same from within a VM. For more details, see https://www.suse.com/communities/blog/vms-get-access-metadata-neutron/.
The Nova component consists of eight different roles:
Distributing and scheduling the instances is managed by the
. It also provides networking and messaging services. needs to be installed on a Control Node.
Provides the hypervisors (KVM, QEMU, VMware vSphere,
Xen, and z/VM) and tools needed to manage the instances. Only one
hypervisor can be deployed on a single compute node. To use
different hypervisors in your cloud, deploy different hypervisors
to different Compute Nodes. A nova-compute-*
role needs to be installed on every Compute Node. However, not all
hypervisors need to be deployed.
Each image that will be made available in SUSE OpenStack Cloud to start an
instance is bound to a hypervisor. Each hypervisor can be deployed
on multiple Compute Nodes (except for the VMWare vSphere role, see
below). In a multi-hypervisor deployment you should make sure to
deploy the nova-compute-*
roles in a way, that
enough compute power is available for each hypervisor.
Existing nova-compute-*
nodes can be changed
in a production SUSE OpenStack Cloud without service interruption. You need to
“evacuate”
the node, re-assign a new nova-compute
role
via the Nova barclamp, and apply the change.
can only be deployed on a single node.
VMware vSphere is not supported “natively” by SUSE OpenStack Cloud—it rather delegates requests to an existing vCenter. It requires preparations at the vCenter and post install adjustments of the Compute Node. See Appendix G, VMware vSphere Installation Instructions for instructions. can only be deployed on a single Compute Node.
Making highly available requires no special configuration—it is sufficient to deploy it on a cluster.
To enable High Availability for Compute Nodes, deploy the following roles to one or more clusters with remote nodes:
nova-compute-kvm
nova-compute-qemu
nova-compute-xen
The cluster to which you deploy the roles above can be completely
independent of the one to which the role
nova-controller
is deployed.
It is recommended to use shared storage for the
/var/lib/nova/instances
directory. If an external NFS
server is used, enable the following option in the Nova barclamp
proposal: .
The last component that needs to be deployed is Horizon, the OpenStack Dashboard. It provides a Web interface for users to start and stop instances and for administrators to manage users, groups, roles, etc. Horizon should be installed on a Control Node. To make Horizon highly available, deploy it on a cluster.
The following attributes can be configured:
Timeout (in minutes) after which a user is logged out automatically. The default value is set to four hours (240 minutes).
Every Horizon session requires a valid Keystone token. These tokens also have a lifetime of four hours (14400 seconds). Setting the Horizon session timeout to a value larger than 240 will therefore have no effect, and you will receive a warning when applying the barclamp.
To successfully apply a timeout larger than four hours, you first need to adjust the Keystone token expiration accordingly. To do so, open the Keystone barclamp in Raw mode and adjust the token_expiration value. Note that the value has to be provided in seconds. When the change is successfully applied, you can adjust the Horizon session timeout (in minutes). Note that extending the Keystone token expiration may cause scalability issues in large and very busy SUSE OpenStack Cloud installations.
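For example, to allow eight-hour Horizon sessions, first set token_expiration to 28800 (8 hours = 28800 seconds), apply the Keystone barclamp, and only then raise the Horizon session timeout to 480 minutes.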
Specify a regular expression with which to check the password. The
default expression (.{8,}
) tests for a minimum length
of 8 characters. The string you enter is interpreted as a Python regular
expression (see
http://docs.python.org/2.7/library/re.html#module-re
for a reference).
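For example, a stricter (hypothetical) policy that additionally requires at least one digit, one lowercase letter, and one uppercase letter could use the following expression:
^(?=.*\d)(?=.*[a-z])(?=.*[A-Z]).{8,}$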
Error message that will be displayed in case the password validation fails.
Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, you have two choices. You can either generate self-signed certificates or provide the locations for the certificate key pair files and—optionally—the certificate chain file. Using self-signed certificates is for testing purposes only and should never be used in production environments! Making Horizon highly available requires no special configuration—it is sufficient to deploy it on a cluster.
Heat is a template-based orchestration engine that enables you to, for example, start workloads requiring multiple servers or to automatically restart instances if needed. It also brings auto-scaling to SUSE OpenStack Cloud by automatically starting additional instances if certain criteria are met. For more information about Heat refer to the OpenStack documentation at http://docs.openstack.org/developer/heat/.
Heat should be deployed on a Control Node. To make Heat highly available, deploy it on a cluster.
The following attributes can be configured for Heat:
Shows debugging output in the log files when set to true. Choose whether to encrypt public communication (HTTPS) or not (HTTP). If choosing HTTPS, refer to SSL Support: Protocol for configuration details.
Heat uses Keystone Trusts to delegate a subset of user roles to the
Heat engine for deferred operations (see Steve
Hardy's blog for details). It can either delegate all user
roles or only those specified in the
trusts_delegated_roles
setting. Consequently, all roles
listed in trusts_delegated_roles
need to be assigned to
a user, otherwise the user will not be able to use Heat.
The recommended setting for trusts_delegated_roles
is
Member
, since this is the default role most users are
likely to have. This is also the default setting when installing SUSE OpenStack Cloud
from scratch.
On installations where this setting is introduced through an upgrade,
trusts_delegated_roles
will be set to
heat_stack_owner
. This is a conservative choice to
prevent breakage in situations where unprivileged users may already have
been assigned the heat_stack_owner
role to enable them
to use Heat but lack the Member
role. As long as you can
ensure that all users who have the heat_stack_owner
role
also have the Member
role, it is both safe and
recommended to change trusts_delegated_roles to Member
,
since the latter is the default role assigned by our hybrid LDAP back-end
among others.
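For example, the Member role could be assigned with the OpenStack command line client (PROJECT and USER are placeholders):
openstack role add --project PROJECT --user USER Member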
To view or change the setting, open the Heat barclamp, search for trusts_delegated_roles, and modify the list of roles as desired.
An empty value for trusts_delegated_roles will delegate all user roles to Heat. This may create a security risk for users who are assigned privileged roles, such as admin, because these privileged roles will also be delegated to the Heat engine when these users create Heat stacks.
Making Heat highly available requires no special configuration—it is sufficient to deploy it on a cluster.
Ceilometer collects CPU and networking data from SUSE OpenStack Cloud. This data can be used by a billing system to enable customer billing. Deploying Ceilometer is optional.
For more information about Ceilometer refer to the OpenStack documentation at http://docs.openstack.org/developer/ceilometer/.
As of SUSE OpenStack Cloud 7 data measuring is only supported for KVM, Xen and Windows instances. Other hypervisors and SUSE OpenStack Cloud features such as object or block storage will not be measured.
The following attributes can be configured for Ceilometer:
Specify an interval in seconds after which Ceilometer performs an update of the specified meter.
Set the interval after which to check whether to raise an alarm because a threshold has been exceeded. For performance reasons, do not set a value lower than the default (60s).
Ceilometer collects a large amount of data, which is written to a database. In a production system it is recommended to use a separate database for Ceilometer rather than the standard database that is also used by the other SUSE OpenStack Cloud components. MongoDB is optimized for writing large amounts of data. As of SUSE OpenStack Cloud 7, MongoDB is only included as a technology preview and not supported.
Specify how long to keep the data. -1 means that samples are kept in the database forever.
Shows debugging output in the log files when set to true. The Ceilometer component consists of five different roles:
The Ceilometer API server role. This role needs to be deployed on a Control Node. Ceilometer collects approximately 200 bytes of data per instance per hour. Unless you have a very large number of instances, there is no need to install it on a dedicated node.
The polling agent listens to the message bus to collect data. It needs to be deployed on a Control Node and can be deployed on the same node as the Ceilometer API server. The compute agents collect data from the Compute Nodes. They need to be deployed on all KVM and Xen Compute Nodes in your cloud (other hypervisors are currently not supported).
An agent collecting data from the Swift nodes. This role needs to be deployed on the same node as swift-proxy.
Making Ceilometer highly available requires no special configuration—it is sufficient to deploy the respective roles on a cluster. The cluster needs to consist of an odd number of nodes, otherwise the Ceilometer deployment will fail.
Manila provides coordinated access to shared or distributed file systems, similar to what Cinder does for block storage. These file systems can be shared between instances in SUSE OpenStack Cloud.
Manila supports different back-ends. As of SUSE OpenStack Cloud 7, supported back-ends include , , and . Two more back-end options, and , are available for testing purposes only and are not supported.
Manila uses some CephFS features that are currently not supported by the SUSE Linux Enterprise 12 SP2 CephFS kernel client:
RADOS namespaces
MDS path restrictions
Quotas
As a result, to access CephFS shares provisioned by Manila, you must use ceph-fuse. For details, see http://docs.openstack.org/developer/manila/devref/cephfs_native_driver.html.
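As a rough sketch (the monitor address, CephX user, and export path are placeholders, and additional options such as a keyring file may be required in your setup), mounting a share with ceph-fuse could look like this:
ceph-fuse -m MONITOR_HOST:6789 --id=SHARE_USER -r /SHARE_EXPORT_PATH /mnt/share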
When first opening the Manila barclamp, the default proposal is already available for configuration. To replace it, first delete it by clicking the trashcan icon and then choose a different back-end in the section . Select a and—optionally—provide a . Activate the back-end with . Note that at least one back-end must be configured. The attributes that can be set to configure Manila depend on the back-end:
The generic driver is included as a technology preview and is not supported.
Provide the name of the Enterprise Virtual Server that the selected back-end is assigned to.
IP address for mounting shares.
Provide a file-system name for creating shares.
IP address of the HNAS management interface for communication between Manila controller and HNAS.
HNAS user name (Base64 string) required to perform tasks such as creating file systems and network interfaces.
HNAS user password. Required only if private key is not provided.
RSA/DSA private key necessary for connecting to HNAS. Required only if password is not provided.
Time in seconds to wait before aborting stalled HNAS jobs.
Host name of the Virtual Storage Server.
The name or IP address for the storage controller or the cluster.
The port to use for communication. Port 80 is usually used for HTTP, 443 for HTTPS.
Login credentials.
Transport protocol for communicating with the storage controller or cluster. Supported protocols are HTTP and HTTPS. Choose the protocol your NetApp is licensed for.
Set to true
to use Ceph deployed with Crowbar.
Lets you manually pick and configure a driver. Only use this option for testing purposes; it is not supported.
The Manila component consists of two different roles:
The Manila server provides the scheduler and the API. Installing it on a Control Node is recommended.
The shared storage service. It can be installed on a Control Node, but it is recommended to deploy it on one or more dedicated nodes supplied with sufficient disk space and networking capacity, since it will generate a lot of network traffic.
While the Manila server role can be deployed on a cluster, deploying the shared storage service role on a cluster is not supported. It is therefore generally recommended to deploy it on several nodes—this ensures the service continues to be available even when a node fails.
Trove is a Database-as-a-Service for SUSE OpenStack Cloud. It provides database instances that can be used by all instances. With Trove deployed, SUSE OpenStack Cloud users no longer need to deploy and maintain their own database applications. For more information about Trove, refer to the OpenStack documentation at http://docs.openstack.org/developer/trove/.
Trove is only included as a technology preview and not supported.
Trove should be deployed on a dedicated Control Node.
The following attributes can be configured for Trove:
When enabled, Trove will use a Cinder volume to store the data.
Increases the amount of information that is written to the log files when set to true.
Shows debugging output in the log files when set to true.
An HA setup for Trove is currently not supported.
Tempest is an integration test suite for SUSE OpenStack Cloud written in Python. It contains multiple integration tests for validating your SUSE OpenStack Cloud deployment. For more information about Tempest refer to the OpenStack documentation at http://docs.openstack.org/developer/tempest/.
Tempest is only included as a technology preview and not supported.
Tempest may be used for testing whether the intended setup will run without problems. It should not be used in a production environment.
Tempest should be deployed on a Control Node.
The following attributes can be configured for Tempest:
Credentials for a regular user. If the user does not exist, it will be created.
Tenant to be used by Tempest. If it does not exist, it will be created. It is safe to stick with the default value.
Credentials for an admin user. If the user does not exist, it will be created.
To run tests with Tempest, log in to the Control Node on which
Tempest was deployed. Change into the directory
/var/lib/openstack-tempest-test
. To get an overview
of available commands, run:
./run_tempest.sh --help
To serially invoke a subset of all tests (“the gating
smoketests”) to help validate the working functionality of your
local cloud instance, run the following command. It will save the output
to a log file
tempest_CURRENT_DATE.log
.
./run_tempest.sh --no-virtual-env --serial --smoke 2>&1 | tee "tempest_$(date +%Y-%m-%d_%H%M%S).log"
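Afterwards you can, for example, inspect the end of the saved log file for the test summary:
tail -n 20 tempest_*.log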
Tempest cannot be made highly available.
Magnum is an OpenStack project which offers container orchestration engines for deploying and managing containers as first class resources in OpenStack.
For more information about Magnum, see the OpenStack documentation at http://docs.openstack.org/developer/magnum/.
For information on how to deploy a Kubernetes cluster (either from command line or from the Horizon Dashboard), see the Supplement to Administrator Guide and End User Guide. It is available from https://www.suse.com/documentation/cloud.
The following attributes can be configured for Magnum:
Increases the amount of information that is written to the log files when set to true.
Shows debugging output in the log files when set to true.
Domain name to use for creating trustee for bays.
To store certificates, either use the OpenStack service, a local directory ( ), or the .
The Magnum barclamp consists of the following roles: . It can either be deployed on a Control Node or on a cluster—see Section 10.17.1, “HA Setup for Magnum”. When deploying the role onto a Control Node, additional RAM is required for the Magnum server. It is recommended to only deploy the role to a Control Node that has 16 GB RAM.
Making Magnum highly available requires no special configuration. It is sufficient to deploy it on a cluster.
Barbican is a component designed for storing secrets in a secure and standardized manner protected by Keystone authentication. Secrets include SSL certificates and passwords used by various OpenStack components.
Barbican settings can be configured in Raw mode only. To do this, open the Barbican barclamp configuration in Raw mode.
When configuring Barbican, pay particular attention to the following settings:
bind_host
Bind host for the Barbican API service
bind_port
Bind port for the Barbican API service
processes
Number of API processes to run in Apache
ssl
Enable or disable SSL
threads
Number of API worker threads
debug
Enable or disable debug logging
enable_keystone_listener
Enable or disable the Keystone listener services
kek
An encryption key (fixed-length 32-byte Base64-encoded value) for Barbican's simple_crypto
plugin. If left unspecified, the key will be generated automatically.
If you plan to restore and use the existing Barbican database after a full reinstall (including a complete wipe of the Crowbar node), make sure to save the specified encryption key beforehand. You will need to provide it after the full reinstall in order to access the data in the restored Barbican database.
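A suitable value can be generated, for example, with OpenSSL, which outputs 32 random bytes encoded as Base64:
openssl rand -base64 32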
To make Barbican highly available, assign the respective role to the Controller Cluster.
Sahara provides users with simple means to provision data processing frameworks (such as Hadoop, Spark, and Storm) on OpenStack. This is accomplished by specifying configuration parameters such as the framework version, cluster topology, node hardware details, etc.
Set to true
to increase the amount of information written to the log files.
Making Sahara highly available requires no special configuration. It is sufficient to deploy it on a cluster.
With a successful deployment of the OpenStack Dashboard, the SUSE OpenStack Cloud installation is finished. To be able to test your setup by starting an instance, one last step remains to be done—uploading an image to the Glance component. Refer to the Supplement to Administrator Guide and End User Guide, chapter Manage images for instructions. Images for SUSE OpenStack Cloud can be built in SUSE Studio. Refer to the Supplement to Administrator Guide and End User Guide, section Building Images with SUSE Studio.
Now you can hand over to the cloud administrator to set up users, roles,
flavors, etc.—refer to the Administrator Guide for details. The
default credentials for the OpenStack Dashboard are user name
admin
and password crowbar
.