Chapter 2. Considerations and Requirements

Contents

2.1. Network
2.2. Product and Update Repositories
2.3. Storage
2.4. SSL Encryption
2.5. Hardware Requirements
2.6. Summary: Considerations and Requirements

Before deploying SUSE Cloud, there are a few requirements to be met and considerations to be made. Make sure to thoroughly read this chapter—some decisions need to be made before deploying SUSE Cloud, since you cannot change them afterwards.

2.1. Network

SUSE Cloud requires a complex network setup consisting of several networks that are configured during installation. These networks are for exclusive cloud usage. In order to access them from an existing network, a router is needed.

The network configuration on the nodes in the SUSE Cloud network is entirely controlled by Crowbar. Any network configuration not done with Crowbar (e.g. with YaST) will automatically be overwritten. Once the cloud is deployed, network settings cannot be changed anymore!

Figure 2.1. SUSE Cloud Network: Overview

SUSE Cloud Network: Overview

The following networks are defined when setting up SUSE Cloud. The IP addresses listed are the default addresses and can be changed using the YaST Crowbar module (see Section 3.1.9, “Crowbar Setup”).

Admin Network (192.168.124/24)

A private network to access the Administration Server and all nodes for administration purposes. The default setup lets you also access the BMC (Baseboard Management Controller) data via IPMI (Intelligent Platform Management Interface) from this network. If required, BMC access can be swapped to a separate network.

To access this network, you have the following options:

  • do not allow access from the outside and keep the admin network completely separated

  • allow access from a single network (e.g. your company's administration network) via the bastion network option configured on an additional network card with a fixed IP address

  • allow access from one or more networks via a gateway

Storage Network (192.168.125/24)

Private, SUSE Cloud internal virtual network. This network is used only by Ceph and Swift. It should not be accessed by users.

Private Network (nova-fixed, 192.168.123/24)

Private, SUSE Cloud internal virtual network. This network is used for inter-instance communication only. The gateway required is also automatically provided by SUSE Cloud.

Public Network (nova-floating, public, 192.168.122/24)

The only public network provided by SUSE Cloud. You can access the Nova Dashboard as well as instances (provided they have been equipped with a floating IP) on this network. This network can only be accessed via a gateway, which needs to be provided externally. All SUSE Cloud users and administrators need to be able to access the public network.

[Note]No IPv6 support

As of SUSE Cloud 1.0, IPv6 is not supported. This applies to the cloud internal networks as well as to the instances.

The following diagram shows the SUSE Cloud network in more detail. It demonstrates how the OpenStack nodes and services use the different networks.

Figure 2.2. SUSE Cloud Network: Details

SUSE Cloud Network: Details

2.1.1. Network Address Allocation

The default networks set up in SUSE Cloud are class C networks with 256 IP addresses each. This limits the maximum number of instances that can be started simultaneously. Addresses within the networks are allocated as outlined in the following table. Use the YaST Crowbar module to customize (see Section 3.1.9, “Crowbar Setup”). The .255 address for each network is always reserved as the broadcast address. This assignment cannot be changed.

Table 2.1.  192.168.124.0/24 (Admin/BMC) Network Address Allocation

Function        Address                               Remark
router          192.168.124.1                         Provided externally.
admin           192.168.124.10 - 192.168.124.11       Fixed addresses reserved for the Administration Server.
dhcp            192.168.124.21 - 192.168.124.80       Address range reserved for node allocation/installation. Determines the maximum number of parallel allocations/installations.
host            192.168.124.81 - 192.168.124.160      Fixed addresses for the OpenStack nodes. Determines the maximum number of OpenStack nodes that can be deployed.
bmc vlan host   192.168.124.161                       Fixed address for the BMC VLAN.
bmc host        192.168.124.162 - 192.168.124.240     Fixed BMC addresses for the OpenStack nodes.
switch          192.168.124.241 - 192.168.124.250

Table 2.2.  192.168.125/24 (Storage) Network Address Allocation

Function   Address                               Remark
host       192.168.125.10 - 192.168.125.239


Table 2.3.  192.168.123/24 (Private Network/nova-fixed) Network Address Allocation

Function   Address                               Remark
router     192.168.123.1 - 192.168.123.49        Each Compute Node also acts as a router for its instances and is assigned an address from this range. This effectively limits the maximum number of Compute Nodes that can be deployed with SUSE Cloud to 49.
dhcp       192.168.123.50 - 192.168.123.254      Address range for instances.


Table 2.4.  192.168.122/24 (Public Network nova-floating, public) Network Address Allocation

Function      Address                            Remark
public host   192.168.122.2 - 192.168.122.49     Public address range for external SUSE Cloud services such as the Dashboard or the API.


2.1.2. Network Modes

SUSE Cloud supports different network modes: single, dual, and teaming. As of SUSE Cloud 1.0 the networking mode is applied to all nodes as well as the Administration Server. That means that all machines need to meet the hardware requirements for the chosen mode. The network mode can be configured using the YaST Crowbar module (Section 3.1.9, “Crowbar Setup”). The network mode cannot be changed once the cloud is deployed.

Other, more flexible network mode setups can be configured by manually editing the Crowbar network configuration files. See the documentation on the Crowbar wiki (https://github.com/dellcloudedge/crowbar/wiki) for more information. SUSE can assist you in creating a custom setup within the scope of a Level 3 support contract.

2.1.2.1. Single Network Mode

In single mode, a single Ethernet card is used for all traffic.

2.1.2.2. Dual Network Mode

Dual mode requires two Ethernet cards and allows you to completely separate traffic to/from the admin network from traffic to/from the public network.

2.1.2.3. Teaming Network Mode

Teaming mode is almost identical to single mode, except that several Ethernet cards are combined into a bond to increase performance. Teaming mode requires two or more Ethernet cards.

2.1.3. Accessing the Admin Network via a Bastion Network

If you want to allow access to the cloud's admin network from another network, you can do so by providing an external gateway. This option offers maximum flexibility, but requires additional machines and may be less secure than you require. Therefore SUSE Cloud offers a second option for accessing a single external network (e.g. a dedicated server administration network): the bastion network. To set it up, you only need a dedicated Ethernet card and a static IP address from the external network.

2.1.4. DNS and Hostnames

The Administration Server acts as a name server for all nodes in the cloud. If you allow access to the admin network from outside, you may want to add additional name servers to your network setup prior to deploying SUSE Cloud. If additional name servers are found on cloud deployment, the name server on the Administration Server will automatically be configured to forward requests for non-local records to those servers.

The Administration Server needs to be configured with a fully qualified hostname. This hostname must not be changed after SUSE Cloud has been deployed. By default, the OpenStack nodes are named after their MAC address, but you can provide easier-to-remember aliases when allocating the nodes. The aliases for the OpenStack nodes can be changed at any time. It is useful to have a list of MAC addresses and the intended use of each corresponding host at hand when deploying the OpenStack nodes.
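Before deploying, it is worth checking that the hostname is really fully qualified. The following sketch uses a simple "contains a dot" heuristic; the example name is a placeholder, not a name from this guide:

```shell
# Simple heuristic: a name counts as fully qualified if it contains a dot.
is_fqdn() {
  case "$1" in
    *.*) return 0 ;;
    *)   return 1 ;;
  esac
}

# On the Administration Server you would check:
#   is_fqdn "$(hostname -f)" || echo "hostname is not fully qualified" >&2
# Here we use a placeholder name (an assumption, not your real hostname):
if is_fqdn "admin.cloud.example.com"; then
  echo "fully qualified"
fi
```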

2.2. Product and Update Repositories

The Administration Server as well as the OpenStack nodes need to get security updates and patches for the operating system (SUSE Linux Enterprise Server) as well as for SUSE Cloud itself. Furthermore product repositories for SUSE Linux Enterprise Server and SUSE Cloud are needed as an installation source for the OpenStack nodes. In SUSE Cloud the Administration Server is designed to work as the single source for all repositories.

While the product repositories do not change, the update repositories need to be regularly updated. Depending on your network setup there are several possibilities to provide up-to-date repositories on the Administration Server:

Sneakernet

If you choose to completely seal off your admin network from all other networks, you need to manually update the repositories from removable media.

Installing a Subscription Management Tool (SMT) Server on the Administration Server

The SMT server, a free add-on product for SUSE Linux Enterprise Server, regularly synchronizes repository data from the Novell Customer Center to your local host. Installing the SMT server on the Administration Server is recommended if you do not have access to update repositories from elsewhere within your organization. This option requires the Administration Server to be able to access the Internet. Subscription Management Tool 11 SP2 is available from http://www.novell.com/linux/smt/.

Utilizing Existing Repositories

If you can access existing repositories from within your company network from the Administration Server, you can either mount or sync these repositories to the required locations on the Administration Server.

As of SUSE Cloud 1.0, the following update repositories need to be mirrored:

  • SLES11-SP2-Core

  • SLES11-SP2-Updates

  • SLES11-SP1-Pool

  • SLES11-SP1-Updates

  • SUSE-Cloud-1.0-Pool

  • SUSE-Cloud-1.0-Updates

  • SLES11-SMT-SP2-Pool (only needed when the SMT is installed)

  • SLES11-SMT-SP2-Updates (only needed when the SMT is installed)

In addition to the update repositories you also need to mirror the contents of the product media (SUSE Linux Enterprise Server 11 SP2 and SUSE Cloud 1.0) to your local disk.
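When syncing from an existing mirror, a loop over the repository names keeps the process uniform. This is only a sketch: the source host and target path are assumptions, so the actual rsync call is commented out and the loop merely reports what it would do:

```shell
# Update repositories to mirror for SUSE Cloud 1.0 (the two SMT
# repositories are omitted; add them if SMT is installed).
repos="SLES11-SP2-Core SLES11-SP2-Updates SLES11-SP1-Pool SLES11-SP1-Updates SUSE-Cloud-1.0-Pool SUSE-Cloud-1.0-Updates"

synced=0
for repo in $repos; do
  # Real call, commented out; source host and target path are assumptions:
  # rsync -av "repo-server:/srv/repos/$repo/" "/srv/repos/$repo/"
  echo "would sync $repo"
  synced=$((synced + 1))
done
echo "$synced repositories"
```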

2.3. Storage

When talking about storage on SUSE Cloud, there are two completely different aspects to discuss: on the one hand, the block and object storage services SUSE Cloud offers; on the other, the hardware-related storage requirements of the different node types.

2.3.1. Cloud Storage Services

As mentioned above, SUSE Cloud offers two different types of storage services: object and block storage. Object storage lets you upload and download files (similar to an FTP server), whereas block storage provides mountable devices (similar to a hard-disk partition). Furthermore, SUSE Cloud provides a repository to store the virtual disk images used to start instances.

Object Storage with Swift

The OpenStack object storage service is called Swift. Swift needs to be deployed on dedicated nodes where no other cloud services run. To be able to store the objects redundantly, at least two Swift nodes must be deployed. SUSE Cloud is configured to always use all unused disks on a node for storage. Offering object storage with Swift is optional.

Block Storage

Block storage on SUSE Cloud is provided by Nova Volume. By default Nova Volume uses an LVM backend with iSCSI. This default setup utilizes a single device on the Controller Node—using a RAID for this purpose is strongly recommended.

Alternatively, Nova Volume can use Ceph RBD as a backend. Ceph offers data security and speed by storing the devices redundantly on different servers. Ceph needs to be deployed on dedicated nodes where no other cloud services run. In order to be able to store the objects redundantly, it is required to deploy at least two Ceph nodes. You can configure which devices Ceph uses for storage.

[Important]Ceph not Supported

As of SUSE Cloud 1.0, Ceph is not officially supported but rather included as a technical preview, so using Nova Volume instead is recommended.

The Glance Image Repository

Glance provides a catalog and repository for virtual disk images used to start the instances. Glance is usually installed on the Controller Node. The image repository resides in a directory on the file system by default—it is recommended to mount a partition or volume to that directory.

2.3.2. Storage Hardware Requirements

Apart from sufficient disk space to install the SUSE Linux Enterprise Server operating system, each node in SUSE Cloud has to store additional data. Requirements and recommendations for the various node types are listed below.

[Important]Choose a Hard Disk for the Operating System Installation

The operating system will always be installed on the first hard disk, the one that is recognized as /dev/sda. This is the disk that is listed first in the BIOS, the one from which the machine will boot. If you have nodes with a certain hard disk you want the operating system to be installed on, make sure it will be recognized as the first disk.

2.3.2.1. Administration Server

If you store the update repositories directly on the Administration Server (see Section 2.2, “Product and Update Repositories” for details), it is recommended to mount a separate partition or volume with at least 30 GB of space on /srv.
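One way to achieve this is a dedicated volume mounted on /srv via /etc/fstab. The device name and file system below are assumptions and need to be adapted to your hardware:

```
# /etc/fstab entry for a dedicated repository volume (>= 30 GB)
# /dev/sdb1 and ext3 are placeholders, not a recommendation
/dev/sdb1   /srv   ext3   defaults   0  2
```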

2.3.2.2. Controller Node

The virtual disk image repository resides under /var/lib/glance/images by default. It is recommended to mount a separate partition or volume into this directory that provides enough space to host all virtual disk images needed.

Unless you deploy Ceph RBD as a backend for Nova Volume (which is currently not supported, but included as a technical preview), Nova Volume uses LVM with iSCSI on the Controller Node. This setup can only use a single device, so it is highly recommended to provide a RAID with sufficient disk space.

2.3.2.3. Compute Nodes

Unless an instance is started via Boot from Volume, it is started with at least one disk—a copy of the image from which it has been started. Depending on the flavor you start, the instance may also have a second, so-called ephemeral disk. The size of the root disk depends on the image itself, while ephemeral disks are always created as sparse image files that grow (up to a defined size) when being filled. By default ephemeral disks have a size of 10 GB.

Both disks, root images and ephemeral disk, are directly bound to the instance and are deleted when the instance is terminated. Therefore these disks are bound to the Compute Node on which the instance has been started. The disks are created under /var/lib/nova on the Compute Node. Your Compute Nodes should be equipped with enough disk space to store the root images and ephemeral disks.
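The sparse-file behavior described above is easy to observe with a plain file (this is only an illustration, not an actual instance disk; `truncate` and GNU `stat` are assumed to be available):

```shell
# Create a sparse 10 GB file: the apparent size is 10 GB, but almost no
# blocks are allocated until data is actually written.
truncate -s 10G ephemeral-demo.img
apparent=$(stat -c %s ephemeral-demo.img)          # bytes, as reported
used_kb=$(du -k ephemeral-demo.img | cut -f1)      # blocks actually used
echo "apparent: $apparent bytes, used: $used_kb KB"
```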

[Note]Ephemeral Disks vs. Block Storage

Do not confuse ephemeral disks with persistent block storage. In addition to an ephemeral disk, which is automatically provided with most instance flavors, you can optionally add a persistent storage device provided by Nova Volume. Ephemeral disks are deleted when the instance terminates, while persistent storage devices can be reused in another instance.

The maximum disk space required on a compute node depends on the available flavors. A flavor specifies the number of CPUs, as well as the RAM and disk size of an instance. Several flavors, ranging from tiny (1 CPU, 512 MB RAM, no ephemeral disk) to xlarge (8 CPUs, 8 GB RAM, 10 GB ephemeral disk), are available by default. Adding custom flavors, as well as editing and deleting existing flavors, is also supported.

To calculate the minimum disk space needed on a compute node, you need to determine the highest "RAM to disk space" ratio from your flavors. Example:

Flavor small: 2 GB RAM, 100 GB ephemeral disk => 50 GB disk / 1 GB RAM
Flavor large: 8 GB RAM, 200 GB ephemeral disk => 25 GB disk / 1 GB RAM

So, 50 GB disk / 1 GB RAM is the ratio that matters. If you multiply that value by the amount of RAM in GB available on your compute node, you get the minimum disk space required by ephemeral disks. Pad that value with sufficient space for the root disks, plus a buffer that enables you to create flavors with a higher RAM-to-disk ratio in the future.
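The calculation can be sketched in shell arithmetic, using the two example flavors above and an assumed Compute Node with 64 GB of RAM:

```shell
# Disk-per-RAM ratios from the example flavors above (integer GB).
ratio_small=$((100 / 2))   # small: 100 GB disk / 2 GB RAM = 50
ratio_large=$((200 / 8))   # large: 200 GB disk / 8 GB RAM = 25

# The highest ratio is the one that matters.
max_ratio=$ratio_small
if [ "$ratio_large" -gt "$max_ratio" ]; then
  max_ratio=$ratio_large
fi

node_ram_gb=64             # assumption: a Compute Node with 64 GB RAM
min_disk_gb=$((max_ratio * node_ram_gb))
echo "minimum space for ephemeral disks: $min_disk_gb GB"
```

Remember that this covers ephemeral disks only; root disks and headroom for future flavors come on top.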

[Warning]Overcommitting Disk Space

The scheduler that decides on which node an instance is started does not check for available disk space. If there is no disk space left on a compute node, this will not only cause data loss on the instances, but the compute node itself will also stop operating. Therefore you must make sure all compute nodes are equipped with enough hard disk space!

2.3.2.4. Storage Nodes

The block-storage service Ceph and the object storage service Swift need to be deployed onto dedicated nodes—it is not possible to mix these services. Each storage service requires at least two machines (more are recommended) to be able to store data redundantly.

Each Ceph/Swift Storage Node needs at least two hard disks. The first one (/dev/sda) will be used for the operating system installation, while the others can be used for storage purposes. While you can configure which devices Ceph uses for storage, Swift always uses all devices.

[Important]Ceph not Supported

As of SUSE Cloud 1.0, Ceph is not officially supported but rather included as a technical preview, so using Nova Volume instead is recommended.

2.4. SSL Encryption

Whenever non-public data travels over a network it needs to be encrypted. Encryption protects the integrity and confidentiality of data. Therefore you should enable SSL support when deploying SUSE Cloud to production (it is not enabled by default). The following services (and their APIs if available) can make use of SSL:

  • Keystone

  • Glance

  • Nova

  • VNC

  • Nova Dashboard

Each service requires valid certificates signed by a trusted third party. You may either use the same certificates for all services or use dedicated certificates for each service. See http://www.suse.com/documentation/sles11/book_sle_admin/data/sec_apache2_ssl.html for instructions on how to create certificates and get them signed by a trusted organization.
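Creating a key and a certificate signing request (CSR) to submit to a certificate authority might look like the following; the common name is a placeholder and must match the public name under which clients reach the service:

```shell
# Generate a 2048-bit RSA key and a certificate signing request.
# dashboard.cloud.example.com is an assumption; substitute the public
# hostname of the service the certificate is for.
openssl genrsa -out cloud.key 2048
openssl req -new -key cloud.key -out cloud.csr \
  -subj "/CN=dashboard.cloud.example.com"
```

The CSR (cloud.csr) is what you send to the certificate authority; the key (cloud.key) stays on the server and must be kept private.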

2.5. Hardware Requirements

Precise hardware requirements can only be listed for the Administration Server and the OpenStack Controller Node. The requirements of the OpenStack Compute and Storage Nodes depend on the number of concurrent instances and their virtual hardware configuration.

The minimum number of machines required for a SUSE Cloud setup featuring all services is seven: one Administration Server, one Controller Node, one Compute Node, and four Storage Nodes. In addition to that, a gateway providing access to the public network is required.

[Important]Physical Machines and Architecture

All SUSE Cloud nodes need to be physical machines. Although the Administration Server and the Controller Node can be virtualized in test environments, this is not supported for production systems.

SUSE Cloud currently only runs on x86_64 hardware.

2.5.1. Administration Server

  • Architecture: x86_64

  • RAM: at least 2 GB, 4 GB recommended

  • Hard disk: at least 40 GB. It is recommended to put /srv on a separate partition with at least 30 GB space, unless you mount the update repositories from another server (see Section 2.2, “Product and Update Repositories” for details).

  • Number of network cards: 1 for single mode, 2 for dual mode, 2 or more for team mode. Additional networks such as the bastion network and/or a separate BMC network each need an additional network card. See Section 2.1, “Network” for details.

2.5.2. Controller Node

2.5.3. Compute Node

The Compute Nodes need to be equipped with enough RAM and CPUs to support the maximum number of instances running concurrently. An instance started in SUSE Cloud cannot draw resources from several physical nodes; it uses only the resources of the node on which it was started. So if you offer a flavor (see Flavor for a definition) with 8 CPUs and 12 GB RAM, at least one of your nodes should be able to provide these resources.

See Section 2.3.2.3, “Compute Nodes” for storage requirements.

2.5.4. Storage Node

A single CPU and 1 to 2 GB of RAM are sufficient for the Storage Nodes. See Section 2.3.2.4, “Storage Nodes” for storage requirements.

2.5.5. Software Requirements

The following software requirements need to be met in order to install SUSE Cloud:

  • SUSE Linux Enterprise Server 11 SP2 installation media (ISO image, included in the SUSE Cloud Administration Server subscription)

  • Access to the SUSE Linux Enterprise Server 11 SP2 Update repositories (either by registering SUSE Linux Enterprise Server 11 SP2 or via an existing SMT server).

  • SUSE Cloud installation media (ISO image).

  • A SUSE/Novell account (for product registration and SMT setup). If you do not already have one, go to http://www.suse.com/login to create it.

  • Optional: Subscription Management Tool 11 SP2 installation media. A free download is available on http://www.novell.com/linux/smt/. See Section 2.2, “Product and Update Repositories”.

2.6. Summary: Considerations and Requirements

As outlined above, there are some important considerations to be made before deploying SUSE Cloud. The following briefly summarizes what was discussed in detail in this chapter. Keep in mind that as of SUSE Cloud 1.0 it is not possible to change some aspects such as the network setup once SUSE Cloud is deployed!

Network

  • If you do not want to stick with the default networks and addresses, define custom networks and addresses. You need four different networks, at least three of them VLANs. If you need to separate the admin and the BMC network, a fifth network is required. Class C networks are sufficient. See Section 2.1, “Network” for details.

  • Determine how to allocate addresses from your network. Make sure not to allocate IP addresses twice. See Section 2.1.1, “Network Address Allocation” for the default allocation scheme.

  • Define which network mode to use. Keep in mind that all machines within the cloud (including the Administration Server) will be set up with the chosen mode and therefore need to meet the hardware requirements. See Section 2.1.2, “Network Modes” for details.

  • Define how to access the admin and BMC network(s): no access from the outside (no action is required), via an external gateway (gateway needs to be provided), or via bastion network. See Section 2.1.3, “Accessing the Admin Network via a Bastion Network” for details.

  • Provide a gateway to access the public network (public, nova-floating).

  • Make sure the admin server's hostname is correctly configured (hostname -f needs to return a fully qualified hostname).

  • Prepare a list of MAC addresses and the intended use of the corresponding host for all OpenStack nodes.

Update Repositories

  • Depending on your network setup you have different options on how to provide up-to-date update repositories for SUSE Linux Enterprise Server and SUSE Cloud on the Administration Server: Sneakernet, installing Subscription Management Tool, syncing data with an existing repository, or mounting remote repositories. Choose the option that best matches your needs.

Storage

  • Decide whether you want to deploy the object storage service Swift. If so, you need to deploy at least two nodes with sufficient disk space exclusively dedicated to Swift.

  • Decide whether to use Nova Volume with Ceph as backend for block storage (not supported). If deploying Ceph, you need to deploy at least two nodes with sufficient disk space exclusively dedicated to it. If you choose not to deploy Ceph and use the default setup for Nova Volume (recommended), your Controller Node needs to be equipped with additional disk space (a RAID is strongly recommended).

    [Important]Ceph not Supported

    As of SUSE Cloud 1.0, Ceph is not officially supported but rather included as a technical preview, so using Nova Volume instead is recommended.

  • Optionally, provide a volume for storing the Glance image repository. Doing so is recommended.

  • Make sure all nodes are equipped with sufficient hard disk space.

SSL Encryption

  • Decide whether to use different SSL certificates for the services and the API or whether to use a single certificate.

  • Get one or more SSL certificates certified by a trusted third party source.

Hardware and Software Requirements

  • Make sure the hardware requirements for the different node types are met.

  • Make sure to have all required software at hand.


SUSE Cloud Deployment Guide 1.0