The OpenStack nodes represent the actual cloud infrastructure. Node installation and service deployment are done automatically from the Administration Server. Before deploying the OpenStack services, you need to install SUSE Linux Enterprise Server on every node. To do so, each node needs to be PXE booted using the tftp server on the Administration Server. Afterwards you can allocate the nodes and trigger the operating system installation. There are three different types of nodes:
Controller Node: The central management node interacting with all other nodes.
Compute Nodes: The nodes on which the instances are started.
Storage Nodes: Nodes providing object or block storage.
Make a note of the MAC address and the purpose of each node (for example, controller, storage Ceph, storage Swift, compute). This will make deploying the OpenStack services a lot easier and less error-prone, since it allows you to assign meaningful names (aliases) to the nodes, which are otherwise listed with the MAC address by default.
Make sure PXE booting (booting from the network) is enabled and configured as the primary boot option for each node. The nodes will boot twice from the network during the allocation and installation phase.
All nodes are installed using AutoYaST with the same configuration located at /opt/dell/chef/cookbooks/provisioner/templates/default/autoyast.xml.erb. If this configuration does not match your needs (for example, if you need special third-party drivers), you need to adjust this file. An AutoYaST manual is available at http://www.suse.com/documentation/sles11/book_autoyast/data/book_autoyast.html. After having changed the AutoYaST configuration file, you need to re-upload it to Chef using the following command:
knife cookbook upload -o /opt/dell/chef/cookbooks/ provisioner
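To double-check that the updated cookbook has been received by the Chef server, you can list all cookbooks it knows about (the exact output depends on your installation):

knife cookbook list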
root Login
By default, the root account on the nodes has no password assigned, so a direct root login is not possible. Logging in on the nodes as root is only possible via SSH public keys (for example, from the Administration Server).
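For example, to open a root shell on a node from the Administration Server, use ssh with the node's name (the host name below is illustrative; by default, node names are derived from the MAC address):

ssh root@d52-54-00-8e-73-4c.example.com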
If you want to allow a direct root login, you can set a password that will be used for the root account on all OpenStack nodes. This must be done before the nodes are deployed; setting a root password at a later stage is not possible.
Setting a root Password for the OpenStack Nodes
Create an MD5-hashed root password, for example by using mkpasswd --method=md5 (mkpasswd is provided by the package whois, which is not installed by default).
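A minimal example of generating the hash on the Administration Server (you will be prompted for the password; an MD5 crypt hash starts with $1$):

zypper install whois
mkpasswd --method=md5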
Open a browser and point it to the Crowbar Web interface available at port 3000 of the Administration Server, for example http://192.168.124.10:3000/. Log in as user crowbar. The password defaults to crowbar, if you have not changed it during the installation.
Open the Barclamp menu and click the Provisioner Barclamp entry, then open its proposal. Click the edit option to open the configuration file for editing. Add the following line within the attributes section of the file:

"root_password_hash": "HASHED_PASSWORD"

replacing HASHED_PASSWORD with the password hash you generated in the first step.
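The result might look similar to the following sketch (the nesting shown and the hash value are illustrative; the exact structure depends on the Crowbar version):

"provisioner": {
  "root_password_hash": "$1$examples$A3bQ9fVxTn0uHvZ2PqLmE1"
}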
To install a node, you need to PXE boot it first. It will be booted with an image that allows the Administration Server to discover the node and make it available for installation. Once you have allocated the node, it will PXE boot again and the automatic installation will start.
PXE boot all nodes you want to deploy. Although it is possible to allocate nodes one by one, doing this in bulk mode is recommended, because it is much faster. The nodes will boot into the “SLEShammer” image, which performs the initial hardware discovery.
Open a browser and point it to the Crowbar Web interface available at port 3000 of the Administration Server, for example http://192.168.124.10:3000/. Log in as user crowbar. The password defaults to crowbar, if you have not changed it.
Open the Node Dashboard from the Nodes menu.
Each node that has successfully booted will be listed as being in state Discovered, indicated by a yellow bullet. The nodes will be listed with their MAC address as a name. Wait until all nodes are listed as being Discovered before proceeding.
Although this step is optional, it is recommended to group your nodes at this stage, since it gives you a clearer overview of all nodes. Grouping the nodes by role would be one option, for example control, compute, object storage (Swift), and block storage (Ceph).
Enter the name of a new group into the respective input field and click the add button. Drag and drop a node onto the title of the newly created group. Repeat this step for each node you would like to put into the group.
To allocate all nodes, open the bulk edit view from the Nodes menu. If you prefer to allocate the nodes one by one, click a node's name followed by a click on the edit button instead. Provide a meaningful Alias and a Description for each node and check the Allocate box. The remaining entries in this dialog are currently not used.

Note: Alias Names
Providing an alias name will change the default node names (MAC address) to the name you provided, making it easier to identify the node. Furthermore, this alias will also be used as a DNS name for the node.
Once you have filled in the data for all nodes, confirm your changes. The nodes will reboot and commence the AutoYaST-based SUSE Linux Enterprise Server installation via a second PXE boot. Return to the Node Dashboard to monitor the progress.
Nodes that are being installed are listed with the status Installing (yellow/green bullet). Once the installation of a node has finished, it is listed as being Ready, indicated by a green bullet. Wait until all nodes are listed as being Ready before proceeding.
The following lists some optional configuration steps, such as configuring node access and enabling SSL. You may skip these steps entirely, or perform any of them at a later stage.
If you plan to host the Glance Image Repository on a separate volume (recommended) or partition, you need to prepare the Controller Node before deploying the Glance service.
Log in to the Controller Node as root via SSH from the Administration Server (see Section 6.1.2, “OpenStack Node Deployment” for detailed instructions). Set up the volume or format the partition and mount it to /var/lib/glance/images (if you do not use YaST for this task, you need to create the directory prior to mounting).
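A minimal sketch of the manual approach, assuming a dedicated partition /dev/sdb1 (the device name is hypothetical; adjust it to your hardware):

# format the partition (ext3 is the SUSE Linux Enterprise Server 11 default)
mkfs.ext3 /dev/sdb1
# create the mount point and mount the partition
mkdir -p /var/lib/glance/images
mount /dev/sdb1 /var/lib/glance/images
# make the mount persistent across reboots
echo "/dev/sdb1 /var/lib/glance/images ext3 defaults 0 2" >> /etc/fstab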
By default, the root account on the nodes has no password assigned, so root login is only possible via SSH. The default setup allows you to execute the ssh command as user root from the Administration Server (see How can I log in to a node as root?). In order to be able to execute the ssh command as a different user, you need to add this user's public SSH keys to root's authorized_keys file on all nodes. Proceed as follows:
Procedure 4.1. Copying SSH Keys to all Nodes
Log in to the Crowbar Web interface available at port 3000 of the Administration Server, for example http://192.168.124.10:3000/ (username and default password: crowbar).
Open the Barclamp menu and click the Provisioner Barclamp entry, then open its proposal. Copy and paste the SSH keys into the respective input field. Each key needs to be placed on a new line (see the example after this procedure). Apply the proposal to deploy the keys and save your changes.
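If the user in question does not have an SSH key pair yet, one can be created and displayed for pasting with the standard OpenSSH tools (the file names shown are the OpenSSH defaults):

# create a key pair (accept the default location ~/.ssh/id_rsa)
ssh-keygen -t rsa
# print the public key; paste this single line into the input field
cat ~/.ssh/id_rsa.pub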
In order to enable SSL to encrypt communication within the cloud (see Section 2.4, “SSL Encryption” for details), the respective certificates need to be available on the nodes.
The certificate file and the key file need to be copied to the Controller Node, into the following locations:
/etc/apache2/ssl.crt/
/etc/apache2/ssl.key/
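For example, the files can be copied from the machine they were created on using scp (the file names are illustrative, and controller is assumed to be the alias of the Controller Node):

scp cloud.crt root@controller:/etc/apache2/ssl.crt/
scp cloud.key root@controller:/etc/apache2/ssl.key/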
All nodes that have been allocated can be decommissioned or re-installed. Click a node's name in the Node Dashboard and then click the edit button. The following options are available:

Delete: Deletes a node from the pool. If you want to re-use this node again, it needs to be reallocated and re-installed from scratch.

Deallocate: Temporarily removes the node from the pool of nodes. Once you reallocate the node, it will take its former role. Useful for adding additional machines in times of high load, or for decommissioning machines in times of low load.

Reinstall: Triggers a reinstallation. The machine stays allocated.
Warning: Editing Nodes in a Production System
When deallocating nodes that provide essential services, the complete cloud will become unusable. While it is not critical to disable single Storage Nodes (provided you have not disabled redundancy) or single Compute Nodes, disabling the Controller Node will “kill” the complete cloud. You should also not disable nodes providing Ceph monitoring services or the nodes providing Swift ring and proxy services.