The Crowbar Web interface runs on the Administration Server. It provides an
overview of the most important deployment details in your cloud. This includes a
view of the nodes, showing which roles are deployed on which node, and of the
barclamp proposals that can be edited and deployed. In addition, the
Crowbar Web interface shows details about the networks and switches in your
cloud. It also provides graphical access to tools you can use to manage
your repositories, back up or restore the Administration Server, export the
Chef configuration, or generate a supportconfig TAR archive with the most important log files.
You can access the Crowbar API documentation from the following static page:
http://CROWBAR_SERVER/apidoc.
The documentation contains information about the Crowbar API endpoints and their parameters, including response examples, possible errors (and their HTTP response codes), parameter validations, and required headers.
The Crowbar Web interface uses the HTTP protocol and port 80.
On any machine, start a Web browser and make sure that JavaScript and cookies are enabled.
As URL, enter the IP address of the Administration Server, for example:
http://192.168.124.10/
Log in as user crowbar. If you have not changed the password, it is crowbar by default.
After logging in to the Crowbar Web interface, select Barclamps › All Barclamps. Select the Crowbar barclamp entry and Edit the proposal.
In the Attributes section, click Raw to edit the configuration file. Search for the following entry:

"crowbar": {
  "password": "crowbar"
Change the password.
Confirm your change by clicking Save and Apply.
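If you want to double-check the value outside the Web interface, the entry can be inspected with a small script. This is only a minimal sketch: the file path and the surrounding "attributes" wrapper are illustrative assumptions, not the exact layout of the real proposal.

```shell
# Hypothetical excerpt of the proposal as shown in the Raw view
# (illustrative only; the real proposal contains many more keys).
cat > /tmp/crowbar-proposal.json <<'EOF'
{
  "attributes": {
    "crowbar": {
      "password": "crowbar"
    }
  }
}
EOF

# Print the current password value before changing it.
python3 -c "import json; p = json.load(open('/tmp/crowbar-proposal.json')); print(p['attributes']['crowbar']['password'])"
```

For the unchanged default, this prints crowbar.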
After logging in to Crowbar, you will see a navigation bar in the top row. Its menus and the respective views are described in the following sections.
This is the default view after logging in to the Crowbar Web interface. The Dashboard shows the groups (which you can create to arrange nodes according to their purpose), which nodes belong to each group, and which state the nodes and groups are in. In addition, the total number of nodes is displayed in the top-level row.
The color of the dot in front of each node or group indicates the status. If the dot for a group shows more than one color, hover the mouse pointer over the dot to view the total number of nodes and the statuses they are in.
Gray means the node is being discovered by the Administration Server or that there is no up-to-date information about a deployed node. If a node is shown in this state longer than expected, check whether the chef-client is still running on the node.
Yellow means the node has been successfully Discovered. As long as the node has not been allocated yet, the dot flashes. A solid (non-flashing) yellow dot indicates that the node has been allocated, but installation has not yet started.
Flashing from yellow to green means the node has been allocated and is currently being installed.
Solid green means the node is in status Ready.
Red means the node is in status Problem.
During the initial state of the setup, the Dashboard only shows
one group called sw_unknown
into which the
Administration Server is automatically sorted. Initially, all nodes (except
the Administration Server) are listed with their MAC address as a name.
However, it is recommended to create an alias for each node. This makes
it easier to identify the node in the admin network and on the
Dashboard. For details on how to create groups, how to assign nodes
to a group and how to create node aliases, see Section 9.2, “Node Installation”.
This screen allows you to edit multiple nodes at once instead of editing them individually. It lists all nodes, including their name (in form of the MAC address), hardware configuration, alias (used within the admin network), public name (used outside of the SUSE OpenStack Cloud network), group, intended role, target platform (the operating system that is going to be installed on the node), license (if available), and allocation status. You can toggle the list view between unallocated and all nodes.
For details on how to fill in the data for all nodes and how to start the installation process, see Section 9.2, “Node Installation”.
This menu entry only appears if your cloud contains a High Availability setup. The overview shows all clusters in your setup, including the nodes that are members of the respective cluster and the roles assigned to the cluster. It also shows whether a cluster contains remote nodes and which roles are assigned to those remote nodes.
This overview shows which roles have been deployed on which node(s). The roles are grouped according to the service to which they belong. You cannot edit anything here. To change role deployment, you need to edit and redeploy the respective barclamp as described in Chapter 10, Deploying the OpenStack Services.
This screen shows a list of all available barclamp proposals, including their status, name, and a short description. From here, you can edit individual barclamp proposals as described in Section 8.3, “Deploying Barclamp Proposals”.
This screen only shows the barclamps that are included with the core Crowbar framework. They contain general recipes for setting up and configuring all nodes. From here, you can edit individual barclamp proposals.
This screen only shows the barclamps that are dedicated to OpenStack service deployment and configuration. From here, you can edit individual barclamp proposals.
If barclamps are applied to one or more nodes that are not yet available for deployment (for example, because they are rebooting or have not been fully installed yet), the proposals will be put in a queue. This screen shows the proposals that are queued or currently being deployed.
This screen allows you to export the Chef configuration or to generate a supportconfig TAR archive. The supportconfig archive contains system information such as the current kernel version being used, the hardware, RPM database, partitions, and the most important log files for analysis of any problems. To access the export options, click the respective entry. After the export has successfully finished, the screen will show all files that are available for download.
This screen shows an overview of the mandatory, recommended, and optional repositories for all architectures of SUSE OpenStack Cloud. On each reload of the screen, the Crowbar Web interface checks the availability and status of the repositories. If a mandatory repository is not present, it is marked red in the screen. Any repositories marked green are usable and available to each node in the cloud. Usually, the available repositories are also selected via the check box in the rightmost column. This means that the managed nodes will automatically be configured to use this repository. If you disable the check box for a repository, managed nodes will not use that repository.
You cannot edit any repositories in this screen. If you need additional, third-party
repositories (or want to modify the repository metadata), edit
/etc/crowbar/repos.yml
. Find an example of a repository
definition below:
suse-12.2:
  x86_64:
    Custom-Repo-12.2:
      url: 'http://example.com/12-SP2:/x86_64/custom-repo/'
      ask_on_error: true # sets the ask_on_error flag in
                         # the autoyast profile for that repo
      priority: 99 # sets the repo priority for zypper
Alternatively, use the YaST Crowbar module to add or edit repositories as described in Section 7.4.
This screen allows you to run
swift-dispersion-report
on the node or nodes
to which it has been deployed. Use this tool to measure the
overall health of the swift cluster. For details, see http://docs.openstack.org/liberty/config-reference/content/object-storage-dispersion.html.
This screen lets you create a backup of the Administration Server and download it. You can also restore from a backup or upload a backup image from your local file system. For details, see Section 11.5, “Backing Up and Restoring the Administration Server”.
SUSE OpenStack Cloud can communicate with a Cisco UCS Manager instance via its XML-based API server to perform the following functions:
Instantiate UCS service profiles for Compute Nodes and Storage Nodes from predefined UCS service profile templates
Reboot, start, and stop nodes.
The following prerequisites need to be fulfilled on the Cisco UCS side:
Templates for Compute Nodes and Storage Nodes need to be created. These
service profile templates will be used for preparing systems as
SUSE OpenStack Cloud nodes. Minimum requirements are a processor supporting
AMD-V or Intel-VT, 8 GB RAM, one network interface and at least 20 GB
of storage (more for Storage Nodes). The templates need to be named
suse-cloud-compute and suse-cloud-storage.
A user account with administrative permissions needs to be created for communicating with SUSE OpenStack Cloud. The account needs to have access to the service profile templates listed above. It also needs permission to create service profiles and associate them with physical hardware.
To initially connect to the Cisco UCS Manager, provide the login credentials of the user account mentioned above, together with the URL of the UCS Manager API, for example: http://UCSMANAGERHOST/nuova. Click the login button to connect. When connected, you will see a list of servers and associated actions. Applying an action with the respective button can take up to several minutes.
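Under the hood, the /nuova endpoint speaks the XML-based UCS Manager API mentioned above. The following sketch shows the initial authentication exchange; the user name, password, and cookie value are placeholders, and the response is abbreviated:

```xml
<!-- Request: POSTed to http://UCSMANAGERHOST/nuova -->
<aaaLogin inName="admin" inPassword="PASSWORD" />

<!-- Response (abbreviated): the outCookie value authenticates follow-up requests -->
<aaaLogin response="yes" outCookie="COOKIE-VALUE" />
```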
From this screen, you can access HTML and PDF versions of the SUSE OpenStack Cloud manuals that are installed on the Administration Server.
Barclamps are a set of recipes, templates, and installation instructions. They are used to automatically install OpenStack components on the nodes. Each barclamp is configured via a so-called proposal. A proposal contains the configuration of the service(s) associated with the barclamp and a list of machines onto which to deploy the barclamp.
Most barclamps consist of two sections:
Attributes: lets you change the barclamp's configuration. You can do so either by editing the respective Web forms (Custom view) or by switching to the Raw view, which exposes all configuration options for the barclamp. In the Raw view, you directly edit the configuration file.
Before you switch from the Custom view to the Raw view or back again, save your changes. Otherwise they will be lost.
Deployment: lets you choose onto which nodes to deploy the barclamp. On the left-hand side, you see a list of available nodes. The right-hand side shows a list of roles that belong to the barclamp.
Assign the nodes to the roles that should be deployed on those nodes. Some barclamps contain roles that can also be deployed to a cluster. If you have deployed the Pacemaker barclamp, the Deployment section additionally lists available clusters and available clusters including remote nodes. The latter are clusters that contain both “normal” nodes and Pacemaker remote nodes. See Section 2.6.3, “High Availability of the Compute Node(s)” for the basic details.
Clusters (or clusters with remote nodes) cannot be assigned to roles that need to be deployed on individual nodes. If you try to do so, the Crowbar Web interface shows an error message.
If you assign a cluster with remote nodes to a role that can only be applied to “normal” (Corosync) nodes, the role will only be applied to the Corosync nodes of that cluster. The role will not be applied to the remote nodes of the same cluster.
The following procedure shows how to edit, create, and deploy barclamp proposals in general. For the description and deployment of the individual barclamps, see Chapter 10, Deploying the OpenStack Services.
Log in to the Crowbar Web interface.
Click Barclamps and select All Barclamps. Alternatively, filter for categories by selecting either Crowbar or OpenStack.
To create a new proposal or edit an existing one, click Create or Edit next to the respective barclamp.
Change the configuration in the Attributes section:
Change the available options via the Web form.
To edit the configuration file directly, first save changes made in the Web form. Then click Raw to edit the configuration in the editor view.
After you have finished, save your changes. (They are not applied yet.)
Assign nodes to a role in the Deployment section of the barclamp. By default, one or more nodes are automatically pre-selected for available roles.
If this pre-selection does not meet your requirements, click the Remove icon next to the role to remove the assignment.
To assign a node or cluster of your choice, select the item you want from the list of nodes or clusters on the left-hand side, then drag and drop the item onto the desired role name on the right.
Do not drop a node or cluster onto the text box—this is used to filter the list of available nodes or clusters!
To save your changes without deploying them yet, click Save.
Deploy the proposal by clicking Apply.
If you deploy a proposal onto a node where a previous one is still active, the new proposal will overwrite the old one.
Deploying a proposal might take some time (up to several minutes). Always wait until you see the message “Successfully applied the proposal” before proceeding to the next proposal.
A proposal that has not been deployed yet can be deleted in the editing view by clicking Delete. To delete a proposal that has already been deployed, see Section 8.3.3, “Deleting a Proposal That Already Has Been Deployed”.
A deployment failure of a barclamp may leave your node in an inconsistent state. If deployment of a barclamp fails:
Fix the reason that has caused the failure.
Re-deploy the barclamp.
For help, see the respective troubleshooting section at OpenStack Node Deployment.
To delete a proposal that has already been deployed, you first need to deactivate it.
Log in to the Crowbar Web interface.
Click Barclamps › All Barclamps.
Click Edit to open the editing view.
Click Deactivate and confirm your choice in the following pop-up. Deactivating a proposal removes the chef role from the nodes, so the routine that installed and set up the services is no longer executed.
Click Delete and confirm your choice in the following pop-up. This removes the barclamp configuration data from the server.
However, deactivating and deleting a barclamp that had already been deployed does not remove packages installed when the barclamp was deployed. Nor does it stop any services that were started during the barclamp deployment. On the affected node, proceed as follows to undo the deployment:
Stop the respective services:
root # systemctl stop service
Disable the respective services:
root # systemctl disable service
Uninstalling the packages should not be necessary.
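If a barclamp started several services, the two steps above can be scripted. The following sketch only prints the commands so the list can be reviewed first; the service names are hypothetical examples, so substitute the services the barclamp actually deployed and remove the echo to execute them:

```shell
# Hypothetical service names; substitute the services the barclamp deployed.
SERVICES="openstack-nova-compute openstack-cinder-volume"

for svc in $SERVICES; do
    # Dry run: print the commands instead of executing them.
    echo "systemctl stop $svc"
    echo "systemctl disable $svc"
done
```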
When a proposal is applied to one or more nodes that are not yet available for deployment (for example, because they are rebooting or have not yet been fully installed), the proposal will be put in a queue. A message like
Successfully queued the proposal until the following become ready: d52-54-00-6c-25-44
is shown when the proposal has been applied. A new Dequeue button will also become available. Use it to cancel the deployment of the proposal by removing it from the queue.