Chapter 4. Installing the OpenStack Nodes

Contents

4.1. Preparations
4.2. Node Installation
4.3. Post-Installation Configuration
4.4. Editing Allocated Nodes

The OpenStack nodes represent the actual cloud infrastructure. Node installation and service deployment are done automatically from the Administration Server. Before deploying the OpenStack services, you need to install SUSE Linux Enterprise Server on every node. To do so, each node needs to be PXE booted using the TFTP server provided by the Administration Server. Afterwards you can allocate the nodes and trigger the operating system installation. There are three different types of nodes:

Controller Node: The central management node interacting with all other nodes.
Compute Nodes: The nodes on which the instances are started.
Storage Nodes: Nodes providing object or block storage.

4.1. Preparations

Meaningful Node names

Make a note of the MAC address and the purpose of each node (for example: controller, Ceph storage, Swift storage, compute). This will make deploying the OpenStack services much easier and less error-prone, since it allows you to assign meaningful names (aliases) to the nodes, which are otherwise listed with their MAC address by default.
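Such a note could, for example, look like the following (the MAC addresses and intended aliases shown here are placeholders only):

d4:be:d9:xx:xx:01   controller        controller1
d4:be:d9:xx:xx:02   compute           compute1
d4:be:d9:xx:xx:03   storage (Swift)   swift1
d4:be:d9:xx:xx:04   storage (Ceph)    ceph1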

BIOS Boot Settings

Make sure PXE booting (booting from the network) is enabled and configured as the primary boot option for each node. The nodes will boot twice from the network during the allocation and installation phase.

Custom Node Configuration

All nodes are installed using AutoYaST with the same configuration located at /opt/dell/chef/cookbooks/provisioner/templates/default/autoyast.xml.erb. If this configuration does not match your needs (for example, if you need special third-party drivers), you need to adjust this file. An AutoYaST manual can be found at http://www.suse.com/documentation/sles11/book_autoyast/data/book_autoyast.html. After having changed the AutoYaST configuration file, you need to re-upload it to Chef using the following command:

knife cookbook upload -o /opt/dell/chef/cookbooks/ provisioner
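If you are unsure whether the upload succeeded, you can list the cookbooks registered with the Chef server from the Administration Server; the provisioner cookbook should appear in the output:

knife cookbook list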

Direct root Login

By default, the root account on the nodes has no password assigned, so a direct root login is not possible. Logging in on the nodes as root is only possible via SSH public keys (for example, from the Administration Server).

If you want to allow direct root login, you can set a password that will be used for the root account on all OpenStack nodes. This must be done before the nodes are deployed; setting a root password at a later stage is not possible.

Setting a root Password for the OpenStack Nodes

  1. Create an MD5-hashed root password, for example by using mkpasswd --method=md5 (mkpasswd is provided by the package whois, which is not installed by default). See the example after this procedure.

  2. Open a browser and point it to the Crowbar Web interface available at port 3000 of the Administration Server, for example http://192.168.124.10:3000/. Log in as user crowbar. The password defaults to crowbar if you have not changed it during the installation.

  3. Open the Barclamp menu by clicking Barclamps+All Barclamps. Click the Provisioner Barclamp entry and Edit the Default proposal.

  4. Click Raw to edit the configuration file.

  5. Add the following line within the Provisioner section of the file:

    "root_password_hash": "HASHED_PASSWORD"

    replacing "HASHED_PASSWORD" with the password you generated in the first step.

4.2. Node Installation

To install a node, you need to PXE boot it first. It will be booted with an image that allows the Administration Server to discover the node and make it available for installation. Once you have allocated the node, it will PXE boot again and the automatic installation will start.

  1. PXE boot all nodes you want to deploy. Although it is possible to allocate the nodes one by one, doing this in bulk mode is recommended because it is much faster. The nodes will boot into the SLEShammer image, which performs the initial hardware discovery.

  2. Open a browser and point it to the Crowbar Web interface available at port 3000 of the Administration Server, for example http://192.168.124.10:3000/. Log in as user crowbar. The password defaults to crowbar if you have not changed it.

    Click Nodes+Dashboard to open the Node Dashboard.

  3. Each node that has successfully booted will be listed as being in state Discovered, indicated by a yellow bullet. The nodes will be listed with their MAC address as a name. Wait until all nodes are listed as being Discovered before proceeding.

  4. Although this step is optional, it is recommended to properly group your nodes at this stage, since it gives you a clear overview of all nodes. Grouping the nodes by role would be one option, for example control, compute, object storage (Swift), and block storage (Ceph).

    1. Enter the name of a new group into the New Group input field and click Add Group.

    2. Drag and drop a node onto the title of the newly created group. Repeat this step for each node you would like to put into the group.

  5. To allocate the nodes click on Nodes+Bulk Edit. If you prefer to allocate the nodes one-by-one, click a node's name followed by a click on Edit instead.

  6. Provide a meaningful Alias and a Description for each node and check the Allocate box. The entries for BIOS and RAID are currently not used.

    [Tip]Alias Names

    Providing an alias name will change the default node names (MAC address) to the name you provided, making it easier to identify the node. This alias will also be used as a DNS CNAME for the node in the admin network. As a result, you will be able to access the node via this alias when, for example, logging in via SSH (see the example at the end of this section).

  7. Once you have filled in the data for all nodes, click Save. The nodes will reboot and commence the AutoYaST-based SUSE Linux Enterprise Server installation via a second PXE boot. Click Nodes+Dashboard to return to the Node Dashboard.

  8. Nodes that are being installed are listed with the status Installing (yellow/green bullet). Once the installation of a node has finished, it is listed as being Ready, indicated by a green bullet. Wait until all nodes are listed as being Ready before proceeding.
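Assuming you have, for example, assigned the alias controller1 to one of the nodes (the alias is a placeholder only), you can now reach that node from the Administration Server via this alias, for example when logging in via SSH:

ssh root@controller1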

4.3. Post-Installation Configuration

The following sections describe optional configuration steps, such as configuring node access and enabling SSL. You can skip these steps entirely or perform them at any later stage.

4.3.1.  Providing a Volume or Separate Partition for the Glance Image Repository

If you plan to host the Glance Image Repository on a separate volume (recommended) or partition, you need to prepare the Controller Node before deploying the Glance service.

Log in to the Controller Node as root via SSH from the Administration Server (see Section 6.1.2, “OpenStack Node Deployment” for detailed instructions). Set up the volume or format the partition and mount it to /var/lib/glance/images (if you do not use YaST for these tasks, you need to create the directory prior to mounting).
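The following is a minimal sketch for the separate partition case, assuming an empty partition /dev/sdb1 on the Controller Node (device name and file system are examples only; adjust them to your setup):

mkfs.ext3 /dev/sdb1                        # create a file system on the empty partition
mkdir -p /var/lib/glance/images            # create the mount point
mount /dev/sdb1 /var/lib/glance/images
echo "/dev/sdb1 /var/lib/glance/images ext3 defaults 0 2" >> /etc/fstab   # mount at boot time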

4.3.2. Accessing the Nodes

By default, the root account on the nodes has no password assigned, so root login is only possible via SSH. The default setup allows you to execute the ssh command as user root from the Administration Server (see How can I log in to a node as root?). To be able to execute the ssh command as a different user, you need to add this user's public SSH keys to root's authorized_keys file on all nodes. Proceed as follows (an example key is shown after the procedure):

Procedure 4.1. Copying SSH Keys to all Nodes

  1. Log in to the Crowbar Web interface available at port 3000 of the Administration Server, for example http://192.168.124.10:3000/ (username and default password: crowbar).

  2. Open the Barclamp menu by clicking Barclamps+All Barclamps. Click the Provisioner Barclamp entry and Edit the Default proposal.

  3. Copy and paste the SSH keys into the Additional SSH Keys input field. Each key needs to be placed on a new line.

  4. Click Apply to deploy the keys and save your changes to the proposal.
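For example, to grant access to the user tux on an external workstation (user and host name are examples only), a key pair could be generated there and its public part pasted into the Additional SSH Keys field:

ssh-keygen -t rsa        # generate a key pair, accept the default location
cat ~/.ssh/id_rsa.pub    # prints a single line starting with "ssh-rsa ..."

The single line printed by the second command is what needs to be pasted into the input field. After the proposal has been applied, tux can log in to the nodes with, for example, ssh root@NODE_ALIAS.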

4.3.3. Enabling SSL

In order to enable SSL to encrypt communication within the cloud (see Section 2.4, “SSL Encryption” for details), the respective certificates need to be available on the nodes.

The certificate file and the key file need to be copied to the Controller Node, into the following locations:

SSL Certificate File

/etc/apache2/ssl.crt/

SSL Key File

/etc/apache2/ssl.key/
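For example, assuming the certificate and key are available on the Administration Server as mycloud.crt and mycloud.key (file names are examples only) and the Controller Node has the alias controller1, they could be copied as follows:

scp mycloud.crt root@controller1:/etc/apache2/ssl.crt/
scp mycloud.key root@controller1:/etc/apache2/ssl.key/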

4.4. Editing Allocated Nodes

All nodes that have been allocated can be decommissioned or re-installed. Click a node's name in the Node Dashboard and then click Edit. The following options are available:

Forget

Deletes a node from the pool. If you want to use this node again, it needs to be reallocated and re-installed from scratch.

Deallocate

Temporarily removes the node from the pool of nodes. Once you reallocate the node, it will take on its former role. Useful for adding machines in times of high load or for decommissioning machines in times of low load.

Reinstall

Triggers a reinstallation. The machine stays allocated.

[Warning]Editing Nodes in a Production System

Deallocating nodes that provide essential services makes the complete cloud unusable. While it is not critical to disable single Storage Nodes (provided you have not disabled redundancy) or single Compute Nodes, disabling the Controller Node will bring down the complete cloud. You should also not disable nodes providing Ceph monitoring services or nodes providing the Swift ring and proxy services.

