The OpenStackClient project provides a unified command-line client, which
enables you to access the project API through easy-to-use commands.
In addition, most OpenStack projects provide a command-line client for each service.
For example, the Compute service provides a nova
command-line client.
You can run the commands from the command line, or include them in scripts to automate tasks. If you provide OpenStack credentials, such as your user name and password, you can run these commands on any computer.
Internally, each command issues cURL-style API requests. OpenStack APIs are RESTful APIs that use the HTTP protocol. They include methods, URIs, media types, and response codes.
The OpenStack command-line clients are open-source Python clients that run on Linux or Mac OS X systems. On some client commands, you can specify a debug parameter to show the underlying API request for the command. This is a good way to become familiar with the OpenStack API calls.
As a cloud end user, you can use the OpenStack Dashboard to provision your own resources within the limits set by administrators. You can modify the examples provided in this section to create other types and sizes of server instances.
You can use the unified openstack
command (python-openstackclient)
for most OpenStack services.
For more information, see the OpenStackClient documentation.
If you do not use the unified OpenStack client (python-openstackclient), the following table lists the command-line client for each OpenStack service, with its package name and description.
Service | Client | Package | Description
---|---|---|---
Application Catalog service | murano | python-muranoclient | Creates and manages applications.
Bare Metal service | ironic | python-ironicclient | Manages and provisions physical machines.
Block Storage service | cinder | python-cinderclient | Creates and manages volumes.
Clustering service | senlin | python-senlinclient | Creates and manages clustering services.
Compute service | nova | python-novaclient | Creates and manages images, instances, and flavors.
Container Infrastructure Management service | magnum | python-magnumclient | Creates and manages containers.
Data Processing service | sahara | python-saharaclient | Creates and manages Hadoop clusters on OpenStack.
Database service | trove | python-troveclient | Creates and manages databases.
Deployment service | fuel | python-fuelclient | Plans deployments.
DNS service | designate | python-designateclient | Creates and manages self-service authoritative DNS.
Image service | glance | python-glanceclient | Creates and manages images.
Key Manager service | barbican | python-barbicanclient | Creates and manages keys.
Monitoring | monasca | python-monascaclient | Monitoring solution.
Networking service | neutron | python-neutronclient | Configures networks for guest servers.
Object Storage service | swift | python-swiftclient | Gathers statistics, lists items, updates metadata, and uploads, downloads, and deletes files stored by the Object Storage service. Gains access to an Object Storage installation for ad hoc processing.
Orchestration service | heat | python-heatclient | Launches stacks from templates, views details of running stacks including events and resources, and updates and deletes stacks.
Rating service | cloudkitty | python-cloudkittyclient | Rating service.
Shared File Systems service | manila | python-manilaclient | Creates and manages shared file systems.
Telemetry service | ceilometer | python-ceilometerclient | Creates and collects measurements across OpenStack.
Telemetry v3 | gnocchi | python-gnocchiclient | Creates and collects measurements across OpenStack.
Workflow service | mistral | python-mistralclient | Workflow service for OpenStack cloud.
Install the prerequisite software and the Python package for each OpenStack client.
Most Linux distributions include packaged versions of the command-line clients that you can install directly; see Section 4.2.2.2, “Installing from packages”.
If you need to install the command-line clients from source, the following list describes the software needed to run them and provides installation instructions as needed.
Python 2.7 or later

The clients support Python 2.7, 3.4, and 3.5.

setuptools package

Installed by default on Mac OS X. Many Linux distributions provide packages to make setuptools easy to install. Search your package manager for setuptools to find an installation package. If you cannot find one, download the setuptools package directly from https://pypi.python.org/pypi/setuptools. The recommended way to install setuptools on Microsoft Windows is to follow the documentation provided on the setuptools website (https://pypi.python.org/pypi/setuptools). Another option is to use the unofficial binary installer maintained by Christoph Gohlke (http://www.lfd.uci.edu/~gohlke/pythonlibs/#setuptools).

pip package

To install the clients on a Linux, Mac OS X, or Microsoft Windows system, use pip. It is easy to use, ensures that you get the latest version of the clients from the Python Package Index, and lets you update or remove the packages later on. Since the installation process compiles source files, this requires the related Python development package for your operating system and distribution. Install pip through the package manager for your system:

MacOS:

# easy_install pip

Microsoft Windows:

Ensure that the C:\Python27\Scripts directory is defined in the PATH environment variable, and use the easy_install command from the setuptools package:

C:\> easy_install pip

Another option is to use the unofficial binary installer provided by Christoph Gohlke (http://www.lfd.uci.edu/~gohlke/pythonlibs/#pip).

Ubuntu or Debian:

# apt install python-dev python-pip

Note that extra dependencies may be required, per operating system, depending on the package being installed, such as is the case with Tempest.

Red Hat Enterprise Linux, CentOS, or Fedora:

A packaged version enables you to use yum to install the package:

# yum install python-devel python-pip

There are also packaged versions of the clients available in RDO that enable yum to install the clients as described in Section 4.2.2.2, “Installing from packages”.

SUSE Linux Enterprise Server:

A packaged version available in the Open Build Service enables you to use YaST or zypper to install the package. First, add the Open Build Service repository:

# zypper addrepo -f obs://Cloud:OpenStack:Mitaka/SLE_12_SP1 Mitaka

Then install pip and use it to manage client installation:

# zypper install python-devel python-pip

There are also packaged versions of the clients available that enable zypper to install the clients as described in Section 4.2.2.2, “Installing from packages”.

openSUSE:

You can install pip and use it to manage client installation:

# zypper install python-devel python-pip

There are also packaged versions of the clients available that enable zypper to install the clients as described in Section 4.2.2.2, “Installing from packages”.
The following example shows the command for installing the OpenStack client with pip, which supports multiple services.
# pip install python-openstackclient
The following individual clients are deprecated in favor of the common client. Instead of installing and learning all of these clients, we recommend installing and using the OpenStack client. You may still need to install an individual project's client because its coverage in the OpenStack client is not yet sufficient. If you need to install an individual project's client, replace the PROJECT name in the following pip install command using the list below.
# pip install python-PROJECTclient
barbican - Key Manager Service API
ceilometer - Telemetry API
cinder - Block Storage API and extensions
cloudkitty - Rating service API
designate - DNS service API
fuel - Deployment service API
glance - Image service API
gnocchi - Telemetry API v3
heat - Orchestration API
magnum - Container Infrastructure Management service API
manila - Shared file systems API
mistral - Workflow service API
monasca - Monitoring API
murano - Application catalog API
neutron - Networking API
nova - Compute API and extensions
sahara - Data Processing API
senlin - Clustering service API
swift - Object Storage API
trove - Database service API
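The PROJECT substitution is mechanical, so it can be scripted. A minimal sketch using a few client names from the list above; the loop only prints the commands it would run instead of executing them, so it is safe to try anywhere:

```shell
# Print the pip install command for a few project clients.
# Echo rather than execute, so nothing is actually installed.
for PROJECT in glance nova neutron; do
    echo "pip install python-${PROJECT}client"
done
```

Replace the echoed `pip install` with the real command once the printed list looks right.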
Use pip to install the OpenStack clients on a Linux, Mac OS X, or Microsoft Windows system. It is easy to use and ensures that you get the latest version of the client from the Python Package Index. Also, pip enables you to update or remove a package.
Install each client separately by using the following command:
For Mac OS X or Linux:
# pip install python-PROJECTclient
For Microsoft Windows:
C:\>pip install python-PROJECTclient
RDO, openSUSE, SUSE Linux Enterprise, Debian, and Ubuntu have client packages
that can be installed without pip
.
On Red Hat Enterprise Linux, CentOS, or Fedora, use yum
to install
the clients from the packaged versions available in
RDO:
# yum install python-PROJECTclient
For Ubuntu or Debian, use apt-get
to install the clients from the
packaged versions:
# apt-get install python-PROJECTclient
For openSUSE, use zypper
to install the clients from the distribution
packages service:
# zypper install python-PROJECTclient
For SUSE Linux Enterprise Server, use zypper
to install the clients from
the distribution packages in the Open Build Service. First, add the Open
Build Service repository:
# zypper addrepo -f obs://Cloud:OpenStack:Mitaka/SLE_12_SP1 Mitaka
Then you can install the packages:
# zypper install python-PROJECTclient
To upgrade a client, add the --upgrade
option to the
pip install
command:
# pip install --upgrade python-PROJECTclient
To remove the client, run the pip uninstall
command:
# pip uninstall python-PROJECTclient
Before you can run client commands, you must create and source the
PROJECT-openrc.sh
file to set environment variables, as described in the following section.
Run the following command to discover the version number for a client:
$ PROJECT --version
For example, to see the version number for the openstack
client,
run the following command:
$ openstack --version
openstack 3.2.0
To set the required environment variables for the OpenStack command-line
clients, you must create an environment file called an OpenStack rc
file, or openrc.sh
file. If your OpenStack installation provides
it, you can download the file from the OpenStack Dashboard as an
administrative user or any other user. This project-specific environment
file contains the credentials that all OpenStack services use.
When you source the file, environment variables are set for your current shell. The variables enable the OpenStack client commands to communicate with the OpenStack services that run in the cloud.
Defining environment variables using an environment file is not a common practice on Microsoft Windows. Environment variables are usually defined in the Advanced > System Settings dialog box. One method for using these scripts as-is on Windows is to install Git for Windows and use Git Bash to source the environment variables and to run all CLI commands.
Log in to the dashboard and from the drop-down list select the project for which you want to download the OpenStack RC file.
On the Project tab, open the Compute tab and click Access & Security. On the Access & Security tab, click Download OpenStack RC File and save the file as PROJECT-openrc.sh
where PROJECT
is the name of the project for
which you downloaded the file.
Copy the PROJECT-openrc.sh
file to the computer from which you
want to run OpenStack commands.
For example, copy the file to the computer from which you want to upload
an image with a glance
client command.
On any shell from which you want to run OpenStack commands, source the
PROJECT-openrc.sh
file for the respective project.
In the following example, the demo-openrc.sh
file is sourced for
the demo project:
$ . demo-openrc.sh
When you are prompted for an OpenStack password, enter the password for
the user who downloaded the PROJECT-openrc.sh
file.
Alternatively, you can create the PROJECT-openrc.sh
file from
scratch, if you cannot download the file from the dashboard.
In a text editor, create a file named PROJECT-openrc.sh
and add
the following authentication information:
export OS_USERNAME=username
export OS_PASSWORD=password
export OS_TENANT_NAME=projectName
export OS_AUTH_URL=https://identityHost:portNumber/v2.0
# The following lines can be omitted
export OS_TENANT_ID=tenantIDString
export OS_REGION_NAME=regionName
export OS_CACERT=/path/to/cacertFile
Saving OS_PASSWORD
in plain text is a security risk.
In a production environment, protect the file or do not save
OS_PASSWORD
in it at all.
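One concrete way to protect the file is to restrict its permissions to the owner. A minimal sketch, where demo-openrc.sh is a placeholder file name and stat -c assumes GNU coreutils:

```shell
# Create a placeholder openrc file (hypothetical name for illustration)
# and make it readable and writable only by its owner.
touch demo-openrc.sh
chmod 600 demo-openrc.sh
# Show the resulting octal permissions (GNU coreutils stat).
stat -c '%a' demo-openrc.sh
# prints: 600
```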
On any shell from which you want to run OpenStack commands, source the
PROJECT-openrc.sh
file for the respective project. In this
example, you source the admin-openrc.sh
file for the admin
project:
$ . admin-openrc.sh
You are not prompted for the password with this method. The password
lives in clear text format in the PROJECT-openrc.sh
file.
Restrict the permissions on this file to avoid security problems.
You can also remove the OS_PASSWORD
variable from the file, and
use the --password
parameter with OpenStack client commands
instead.
You must set the OS_CACERT
environment variable when using the
https protocol in the OS_AUTH_URL
environment setting, because
the TLS (HTTPS) server certificate is verified against the
certificate that this variable points to.
When you run OpenStack client commands, you can override some
environment variable settings by using the options that are listed at
the end of the help
output of the various client commands. For
example, you can override the OS_PASSWORD
setting in the
PROJECT-openrc.sh
file by specifying a password in an
openstack
command, as follows:
$ openstack --os-password PASSWORD server list
Where PASSWORD
is your password.
A user specifies their username and password credentials to interact with OpenStack, using any client command. These credentials can be specified through environment variables or command-line arguments. Neither method is safe for the password.
For example, when you specify your password on the command line
with the --os-password
argument, anyone with access to your
computer can view it in plain text in the ps
output.
To avoid storing the password in plain text, you can prompt for the OpenStack password interactively.
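One way to prompt for the password in a shell script is read -s, which reads input without echoing it. A minimal sketch (bash), where the sample password is fed on stdin so the example runs non-interactively; in real use you would type it at a prompt:

```shell
# Read a password without echoing it back. Here stdin is a pipe so the
# sketch runs unattended; interactively, read -rs would hide the typing.
OS_PASSWORD=$(printf 'example-password\n' | { IFS= read -rs pw; printf '%s' "$pw"; })
export OS_PASSWORD
# Report only the length, never the password itself.
echo "password length: ${#OS_PASSWORD}"
# prints: password length: 16
```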
The cloud operator assigns roles to users. Roles determine who can upload and manage images. The operator might restrict image upload and management to only cloud administrators or operators.
You can upload images through the glance
client or the Image service API.
You can also use the nova
client for image management: it provides
mechanisms to list and delete images, set and delete image metadata,
and create images of a running instance, both snapshot and backup
types.
After you upload an image, you cannot change it.
For details about image creation, see the Virtual Machine Image Guide.
To get a list of images and further details about a single
image, use the openstack image list
and openstack image show
commands.
$ openstack image list
+--------------------------------------+---------------------------------+--------+
| ID                                   | Name                            | Status |
+--------------------------------------+---------------------------------+--------+
| dfc1dfb0-d7bf-4fff-8994-319dd6f703d7 | cirros-0.3.2-x86_64-uec         | active |
| a3867e29-c7a1-44b0-9e7f-10db587cad20 | cirros-0.3.2-x86_64-uec-kernel  | active |
| 4b916fba-6775-4092-92df-f41df7246a6b | cirros-0.3.2-x86_64-uec-ramdisk | active |
| d07831df-edc3-4817-9881-89141f9134c3 | myCirrosImage                   | active |
+--------------------------------------+---------------------------------+--------+
$ openstack image show myCirrosImage
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | ami                                                  |
| created_at       | 2016-08-11T15:07:26Z                                 |
| disk_format      | ami                                                  |
| file             | /v2/images/d07831df-edc3-4817-9881-89141f9134c3/file |
| id               | d07831df-edc3-4817-9881-89141f9134c3                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | myCirrosImage                                        |
| owner            | d88310717a8e4ebcae84ed075f82c51e                     |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-08-11T15:20:02Z                                 |
| virtual_size     | None                                                 |
| visibility       | private                                              |
+------------------+------------------------------------------------------+
When viewing a list of images, you can also use grep
to filter the
list, as follows:
$ openstack image list | grep 'cirros'
| dfc1dfb0-d7bf-4fff-8994-319dd6f703d7 | cirros-0.3.2-x86_64-uec         | active |
| a3867e29-c7a1-44b0-9e7f-10db587cad20 | cirros-0.3.2-x86_64-uec-kernel  | active |
| 4b916fba-6775-4092-92df-f41df7246a6b | cirros-0.3.2-x86_64-uec-ramdisk | active |
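Beyond grep, standard text tools such as awk can extract a single column from the listing, for example just the image IDs. A sketch using sample rows shaped like the output above, stored in a variable so it runs without a cloud:

```shell
# Two sample rows in the same shape as the openstack image list output.
# Only the first row contains lowercase "cirros"; the second does not match.
rows='| dfc1dfb0-d7bf-4fff-8994-319dd6f703d7 | cirros-0.3.2-x86_64-uec        | active |
| d07831df-edc3-4817-9881-89141f9134c3 | myCirrosImage                  | active |'
# Keep only matching rows and print the ID column (field 2), stripped of spaces.
printf '%s\n' "$rows" | awk -F'|' '/cirros/ {gsub(/ /, "", $2); print $2}'
# prints: dfc1dfb0-d7bf-4fff-8994-319dd6f703d7
```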
To store location metadata for images, which enables direct file access for a client,
update the /etc/glance/glance-api.conf
file with the following statements:
show_multiple_locations = True
filesystem_store_metadata_file = filePath
where filePath points to a JSON file that defines the mount point for OpenStack images on your system and a unique ID. For example:
[{
"id": "2d9bb53f-70ea-4066-a68b-67960eaae673",
"mountpoint": "/var/lib/glance/images/"
}]
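Because the Image service parses this file as JSON, a malformed file breaks location queries, so it is worth sanity-checking before restarting the service. A minimal sketch, assuming python3 is available and using a local copy of the example file rather than /etc/glance:

```shell
# Write the example metadata file (local copy for illustration) and
# confirm that it parses as valid JSON.
cat > glance_fs_metadata.json <<'EOF'
[{
    "id": "2d9bb53f-70ea-4066-a68b-67960eaae673",
    "mountpoint": "/var/lib/glance/images/"
}]
EOF
python3 -m json.tool glance_fs_metadata.json > /dev/null && echo "valid JSON"
# prints: valid JSON
```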
After you restart the Image service, you can use the following syntax to view the image's location information:
$ openstack --os-image-api-version 2 image show imageID
For example, using the image ID shown above, you would issue the command as follows:
$ openstack --os-image-api-version 2 image show 2d9bb53f-70ea-4066-a68b-67960eaae673
To create an image, use openstack image create
:
$ openstack image create imageName
To update an image by name or ID, use openstack image set
:
$ openstack image set imageName
You can use optional arguments with the
create
and set
commands to modify image properties. For
more information, refer to the OpenStack Image command reference.
The following example shows the command that you would use to upload a CentOS 6.3 image in qcow2 format and configure it for public access:
$ openstack image create --disk-format qcow2 --container-format bare \
  --public --file ./centos63.qcow2 centos63-image
The following example shows how to update an existing image with properties that describe the disk bus, the CD-ROM bus, and the VIF model:
When you use OpenStack with VMware vCenter Server, you need to specify
the vmware_disktype
and vmware_adaptertype
properties with
openstack image create
.
Also, we recommend that you set the hypervisor_type="vmware"
property.
For more information, see Images with VMware vSphere
in the OpenStack Configuration Reference.
$ openstack image set \
  --property hw_disk_bus=scsi \
  --property hw_cdrom_bus=ide \
  --property hw_vif_model=e1000 \
  f16-x86_64-openstack-sda
Currently the libvirt virtualization tool determines the disk, CD-ROM,
and VIF device models based on the configured hypervisor type
(libvirt_type
in /etc/nova/nova.conf
file). For the sake of optimal
performance, libvirt defaults to using virtio for both disk and VIF
(NIC) models. The disadvantage of this approach is that it is not
possible to run operating systems that lack virtio drivers, for example,
BSD, Solaris, and older versions of Linux and Windows.
If you specify a disk or CD-ROM bus model that is not supported, see the table of disk and CD-ROM bus model values below. If you specify a VIF model that is not supported, the instance fails to launch; see the table of VIF model values below.
The valid model values depend on the libvirt_type
setting, as shown
in the following tables.
Disk and CD-ROM bus model values

libvirt_type setting | Supported model values
---|---
qemu or kvm | 
xen | 

VIF model values

libvirt_type setting | Supported model values
---|---
qemu or kvm | 
xen | 
vmware | 
By default, hardware properties are retrieved from the image
properties. However, if this information is not available, the
libosinfo
database provides an alternative source for these
values.
If the guest operating system is not in the database, or if the use
of libosinfo
is disabled, the default system values are used.
Users can set the operating system ID or a short-id
in image
properties. For example:
$ openstack image set --property short-id=fedora23 \
  name-of-my-fedora-image
Alternatively, users can set id
to a URL:
$ openstack image set \
  --property id=http://fedoraproject.org/fedora/23 \
  ID-of-my-fedora-image
You can upload ISO images to the Image service (glance). You can subsequently boot an ISO image using Compute.
In the Image service, run the following command:
$ openstack image create ISO_IMAGE --file IMAGE.iso \
  --disk-format iso --container-format bare
Optionally, to confirm the upload in Image service, run:
$ openstack image list
If you encounter problems in creating an image in the Image service or Compute, the following information may help you troubleshoot the creation process.
Ensure that the version of qemu you are using is version 0.14 or
later. Earlier versions of qemu result in an unknown option -s
error message in the /var/log/nova/nova-compute.log
file.
Examine the /var/log/nova/nova-api.log
and
/var/log/nova/nova-compute.log
log files for error messages.
This section is intended to provide a series of commands a typical client of the API might use to create and modify an image.
These commands assume the implementation of the v2 Image API using the Identity Service for authentication and authorization. The X-Auth-Token header is used to provide the authentication token issued by the Identity Service.
The strings $OS_IMAGE_URL
and $OS_AUTH_TOKEN
represent variables
defined in the client's environment. $OS_IMAGE_URL
is the full path
to your image service endpoint, for example, http://example.com
.
$OS_AUTH_TOKEN
represents an auth token generated by the
Identity Service, for example, 6583fb17c27b48b4b4a6033fe9cc0fe0
.
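For example, you might set the two variables as follows before running the curl commands; both values are the illustrative placeholders used above, not real credentials:

```shell
# Illustrative placeholder values from the text above; substitute your own
# Image service endpoint and a token issued by the Identity Service.
export OS_IMAGE_URL=http://example.com:9292
export OS_AUTH_TOKEN=6583fb17c27b48b4b4a6033fe9cc0fe0
# The curl examples expand these variables into each request URL and header.
echo "$OS_IMAGE_URL/v2/images"
# prints: http://example.com:9292/v2/images
```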
$ curl -i -X POST -H "X-Auth-Token: $OS_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"name": "Ubuntu 14.04", "tags": ["ubuntu", "14.04", "trusty"]}' \
  $OS_IMAGE_URL/v2/images

HTTP/1.1 201 Created
Content-Length: 451
Content-Type: application/json; charset=UTF-8
Location: http://example.com:9292/v2/images/7b97f37c-899d-44e8-aaa0-543edbc4eaad
Date: Fri, 11 Mar 2016 12:25:32 GMT

{
    "id": "7b97f37c-899d-44e8-aaa0-543edbc4eaad",
    "name": "Ubuntu 14.04",
    "status": "queued",
    "visibility": "private",
    "protected": false,
    "tags": ["ubuntu", "14.04", "trusty"],
    "created_at": "2016-03-11T12:25:32Z",
    "updated_at": "2016-03-11T12:25:32Z",
    "file": "/v2/images/7b97f37c-899d-44e8-aaa0-543edbc4eaad/file",
    "self": "/v2/images/7b97f37c-899d-44e8-aaa0-543edbc4eaad",
    "schema": "/v2/schemas/image"
}
$ curl -i -X PATCH -H "X-Auth-Token: $OS_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '[{"op": "add", "path": "/login-user", "value": "root"}]' \
  $OS_IMAGE_URL/v2/images/7b97f37c-899d-44e8-aaa0-543edbc4eaad

HTTP/1.1 200 OK
Content-Length: 477
Content-Type: application/json; charset=UTF-8
Date: Fri, 11 Mar 2016 12:44:56 GMT

{
    "id": "7b97f37c-899d-44e8-aaa0-543edbc4eaad",
    "name": "Ubuntu 14.04",
    "status": "queued",
    "visibility": "private",
    "protected": false,
    "tags": ["ubuntu", "14.04", "trusty"],
    "login_user": "root",
    "created_at": "2016-03-11T12:25:32Z",
    "updated_at": "2016-03-11T12:44:56Z",
    "file": "/v2/images/7b97f37c-899d-44e8-aaa0-543edbc4eaad/file",
    "self": "/v2/images/7b97f37c-899d-44e8-aaa0-543edbc4eaad",
    "schema": "/v2/schemas/image"
}
$ curl -i -X PUT -H "X-Auth-Token: $OS_AUTH_TOKEN" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @/home/glance/ubuntu-14.04.qcow2 \
  $OS_IMAGE_URL/v2/images/7b97f37c-899d-44e8-aaa0-543edbc4eaad/file

HTTP/1.1 100 Continue

HTTP/1.1 201 Created
Content-Length: 0
Date: Fri, 11 Mar 2016 12:51:02 GMT
$ curl -i -X GET -H "X-Auth-Token: $OS_AUTH_TOKEN" \
  $OS_IMAGE_URL/v2/images/7b97f37c-899d-44e8-aaa0-543edbc4eaad/file

HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Md5: 912ec803b2ce49e4a541068d495ab570
Transfer-Encoding: chunked
Date: Fri, 11 Mar 2016 12:57:41 GMT
$ curl -i -X DELETE -H "X-Auth-Token: $OS_AUTH_TOKEN" \
  $OS_IMAGE_URL/v2/images/7b97f37c-899d-44e8-aaa0-543edbc4eaad

HTTP/1.1 204 No Content
Content-Length: 0
Date: Fri, 11 Mar 2016 12:59:11 GMT
A volume is a detachable block storage device, similar to a USB hard
drive. You can attach a volume to only one instance. Use the openstack
client commands to create and manage volumes.
As an administrator, you can migrate a volume with its data from one location to another in a manner that is transparent to users and workloads. You can migrate only detached volumes with no snapshots.
Possible use cases for data migration include:
Bring down a physical storage device for maintenance without disrupting workloads.
Modify the properties of a volume.
Free up space in a thinly-provisioned back end.
Migrate a volume with the cinder migrate
command, as shown in the
following example:
$ cinder migrate --force-host-copy <True|False> --lock-volume <True|False> <volume> <host>
In this example, --force-host-copy True
forces the generic
host-based migration mechanism and bypasses any driver optimizations.
--lock-volume
applies to an available volume and determines whether
the migration can be aborted by other commands while it is in
progress: True
locks the volume state and does not allow the
migration to be aborted.
If the volume has snapshots, the specified host destination cannot accept the volume. If the user is not an administrator, the migration fails.
This example creates a my-new-volume
volume based on an image.
List images, and note the ID of the image that you want to use for your volume:
$ openstack image list
+--------------------------------------+---------------------------------+
| ID                                   | Name                            |
+--------------------------------------+---------------------------------+
| 8bf4dc2a-bf78-4dd1-aefa-f3347cf638c8 | cirros-0.3.4-x86_64-uec         |
| 9ff9bb2e-3a1d-4d98-acb5-b1d3225aca6c | cirros-0.3.4-x86_64-uec-kernel  |
| 4b227119-68a1-4b28-8505-f94c6ea4c6dc | cirros-0.3.4-x86_64-uec-ramdisk |
+--------------------------------------+---------------------------------+
List the availability zones, and note the ID of the availability zone in which you want to create your volume:
$ openstack availability zone list
+------+-----------+
| Name | Status    |
+------+-----------+
| nova | available |
+------+-----------+
Create a volume with 8 gibibytes (GiB) of space, and specify the availability zone and image:
$ openstack volume create --image 8bf4dc2a-bf78-4dd1-aefa-f3347cf638c8 \
  --size 8 --availability-zone nova my-new-volume
+------------------------------+--------------------------------------+
| Property                     | Value                                |
+------------------------------+--------------------------------------+
| attachments                  | []                                   |
| availability_zone            | nova                                 |
| bootable                     | false                                |
| consistencygroup_id          | None                                 |
| created_at                   | 2016-09-23T07:52:42.000000           |
| description                  | None                                 |
| encrypted                    | False                                |
| id                           | bab4b0e0-ce3d-4d57-bf57-3c51319f5202 |
| metadata                     | {}                                   |
| multiattach                  | False                                |
| name                         | my-new-volume                        |
| os-vol-tenant-attr:tenant_id | 3f670abbe9b34ca5b81db6e7b540b8d8     |
| replication_status           | disabled                             |
| size                         | 8                                    |
| snapshot_id                  | None                                 |
| source_volid                 | None                                 |
| status                       | creating                             |
| updated_at                   | None                                 |
| user_id                      | fe19e3a9f63f4a14bd4697789247bbc5     |
| volume_type                  | lvmdriver-1                          |
+------------------------------+--------------------------------------+
To verify that your volume was created successfully, list the available volumes:
$ openstack volume list
+--------------------------------------+---------------+-----------+------+-------------+
| ID                                   | Display Name  | Status    | Size | Attached to |
+--------------------------------------+---------------+-----------+------+-------------+
| bab4b0e0-ce3d-4d57-bf57-3c51319f5202 | my-new-volume | available |    8 |             |
+--------------------------------------+---------------+-----------+------+-------------+
If your volume was created successfully, its status is available
. If
its status is error
, you might have exceeded your quota.
Cinder supports three ways to specify the volume type during volume creation:
volume_type
cinder_img_volume_type (via glance image metadata)
default_volume_type (via cinder.conf)
Users can specify the volume type when creating a volume.
$ openstack volume create -h
-f {json,shell,table,value,yaml} -c COLUMN --max-width <integer> --noindent
--prefix PREFIX --size <size> --type <volume-type> --image <image>
--snapshot <snapshot> --source <volume> --description <description>
--user <user> --project <project> --availability-zone <availability-zone>
--property <key=value> <name>
If the glance image has a cinder_img_volume_type
property, Cinder uses this
property to determine the volume type
when creating a volume.
Choose a glance image that has the cinder_img_volume_type
property and create
a volume from the image.
$ openstack image list
+--------------------------------------+---------------------------------+--------+
| ID                                   | Name                            | Status |
+--------------------------------------+---------------------------------+--------+
| 376bd633-c9c9-4c5d-a588-342f4f66d086 | cirros-0.3.4-x86_64-uec         | active |
| 2c20fce7-2e68-45ee-ba8d-beba27a91ab5 | cirros-0.3.4-x86_64-uec-ramdisk | active |
| a5752de4-9faf-4c47-acbc-78a5efa7cc6e | cirros-0.3.4-x86_64-uec-kernel  | active |
+--------------------------------------+---------------------------------+--------+

$ openstack image show 376bd633-c9c9-4c5d-a588-342f4f66d086
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | eb9139e4942121f22bbc2afc0400b2a4                     |
| container_format | ami                                                  |
| created_at       | 2016-10-13T03:28:55Z                                 |
| disk_format      | ami                                                  |
| file             | /v2/images/376bd633-c9c9-4c5d-a588-342f4f66d086/file |
| id               | 376bd633-c9c9-4c5d-a588-342f4f66d086                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros-0.3.4-x86_64-uec                              |
| owner            | 88ba456e3a884c318394737765e0ef4d                     |
| properties       | kernel_id='a5752de4-9faf-4c47-acbc-78a5efa7cc6e',    |
|                  | ramdisk_id='2c20fce7-2e68-45ee-ba8d-beba27a91ab5'    |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 25165824                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2016-10-13T03:28:55Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+

$ openstack volume create --image 376bd633-c9c9-4c5d-a588-342f4f66d086 \
  --size 1 --availability-zone nova test
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| attachments         | []                                   |
| availability_zone   | nova                                 |
| bootable            | false                                |
| consistencygroup_id | None                                 |
| created_at          | 2016-10-13T06:29:53.688599           |
| description         | None                                 |
| encrypted           | False                                |
| id                  | e6e6a72d-cda7-442c-830f-f306ea6a03d5 |
| multiattach         | False                                |
| name                | test                                 |
| properties          |                                      |
| replication_status  | disabled                             |
| size                | 1                                    |
| snapshot_id         | None                                 |
| source_volid        | None                                 |
| status              | creating                             |
| type                | lvmdriver-1                          |
| updated_at          | None                                 |
| user_id             | 33fdc37314914796883706b33e587d51     |
+---------------------+--------------------------------------+
If none of the above parameters is set, Cinder uses the default_volume_type defined in cinder.conf during volume creation.
Example cinder.conf file configuration:
[DEFAULT]
default_volume_type = lvmdriver-1
Attach your volume to a server, specifying the server ID and the volume ID:
$ openstack server add volume 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 \
  573e024d-5235-49ce-8332-be1576d323f8 --device /dev/vdb
Show information for your volume:
$ openstack volume show 573e024d-5235-49ce-8332-be1576d323f8
The output shows that the volume is attached to the server with ID
84c6e57d-a6b1-44b6-81eb-fcb36afd31b5
, is in the nova availability
zone, and is bootable.
+------------------------------+-----------------------------------------------+
| Field                        | Value                                         |
+------------------------------+-----------------------------------------------+
| attachments                  | [{u'device': u'/dev/vdb',                     |
|                              | u'server_id': u'84c6e57d-a                    |
|                              | u'id': u'573e024d-...                         |
|                              | u'volume_id': u'573e024d...                   |
| availability_zone            | nova                                          |
| bootable                     | true                                          |
| consistencygroup_id          | None                                          |
| created_at                   | 2016-10-13T06:08:07.000000                    |
| description                  | None                                          |
| encrypted                    | False                                         |
| id                           | 573e024d-5235-49ce-8332-be1576d323f8          |
| multiattach                  | False                                         |
| name                         | my-new-volume                                 |
| os-vol-tenant-attr:tenant_id | 7ef070d3fee24bdfae054c17ad742e28              |
| properties                   |                                               |
| replication_status           | disabled                                      |
| size                         | 8                                             |
| snapshot_id                  | None                                          |
| source_volid                 | None                                          |
| status                       | in-use                                        |
| type                         | lvmdriver-1                                   |
| updated_at                   | 2016-10-13T06:08:11.000000                    |
| user_id                      | 33fdc37314914796883706b33e587d51              |
| volume_image_metadata        | {u'kernel_id': u'df430cc2...,                 |
|                              | u'image_id': u'397e713c...,                   |
|                              | u'ramdisk_id': u'3cf852bd...,                 |
|                              | u'image_name': u'cirros-0.3.2-x86_64-uec'}    |
+------------------------------+-----------------------------------------------+
To resize your volume, you must first detach it from the server. To detach the volume from your server, pass the server ID and volume ID to the following command:
$ openstack server remove volume 84c6e57d-a6b1-44b6-81eb-fcb36afd31b5 573e024d-5235-49ce-8332-be1576d323f8
This command does not provide any output.
List volumes:
$ openstack volume list
+----------------+-----------------+-----------+------+-------------+
| ID             | Display Name    | Status    | Size | Attached to |
+----------------+-----------------+-----------+------+-------------+
| 573e024d-52... | my-new-volume   | available | 8    |             |
| bd7cf584-45... | my-bootable-vol | available | 8    |             |
+----------------+-----------------+-----------+------+-------------+
Note that the volume is now available.
Resize the volume by passing the volume ID and the new size (a value greater than the old one) as parameters:
$ openstack volume set 573e024d-5235-49ce-8332-be1576d323f8 --size 10
This command does not provide any output.
When extending an LVM volume that has a snapshot, the volume is deactivated. Reactivation is automatic unless auto_activation_volume_list is defined in lvm.conf. See the lvm.conf documentation for more information.
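If you do maintain an activation list yourself, the volume group that backs Cinder must be included in it, or extended volumes will stay deactivated. A minimal sketch of the relevant lvm.conf fragment; the volume group name cinder-volumes is the common default but is an assumption here:

```
# /etc/lvm/lvm.conf (illustrative fragment)
activation {
    # Only logical volumes in the listed volume groups are auto-activated.
    auto_activation_volume_list = [ "cinder-volumes" ]
}
```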
To delete your volume, you must first detach it from the server. To detach the volume from your server and check for the list of existing volumes, see steps 1 and 2 in Section 4.7.5, “Resize a volume”.
Delete the volume using either the volume name or ID:
$ openstack volume delete my-new-volume
This command does not provide any output.
List the volumes again, and note that the status of your volume is deleting:
$ openstack volume list
+----------------+-----------------+-----------+------+-------------+
| ID             | Display Name    | Status    | Size | Attached to |
+----------------+-----------------+-----------+------+-------------+
| 573e024d-52... | my-new-volume   | deleting  | 8    |             |
| bd7cf584-45... | my-bootable-vol | available | 8    |             |
+----------------+-----------------+-----------+------+-------------+
When the volume is fully deleted, it disappears from the list of volumes:
$ openstack volume list
+----------------+-----------------+-----------+------+-------------+
| ID             | Display Name    | Status    | Size | Attached to |
+----------------+-----------------+-----------+------+-------------+
| bd7cf584-45... | my-bootable-vol | available | 8    |             |
+----------------+-----------------+-----------+------+-------------+
You can transfer a volume from one owner to another by using the
openstack volume transfer request create
command. The volume
donor, or original owner, creates a transfer request and sends the created
transfer ID and authorization key to the volume recipient. The volume
recipient, or new owner, accepts the transfer by using the ID and key.
The procedure for volume transfer is intended for tenants (both the volume donor and recipient) within the same cloud.
Use cases include:
Create a custom bootable volume or a volume with a large data set and transfer it to a customer.
For bulk import of data to the cloud, the data ingress system creates a new Block Storage volume, copies data from the physical device, and transfers device ownership to the end user.
While logged in as the volume donor, list the available volumes:
$ openstack volume list
+-----------------+-----------------+-----------+------+-------------+
| ID              | Display Name    | Status    | Size | Attached to |
+-----------------+-----------------+-----------+------+-------------+
| 72bfce9f-cac... | None            | error     | 1    |             |
| a1cdace0-08e... | None            | available | 1    |             |
+-----------------+-----------------+-----------+------+-------------+
As the volume donor, request a volume transfer authorization code for a specific volume:
$ openstack volume transfer request create <volume>

<volume>
Name or ID of volume to transfer.
The volume must be in an available
state or the request will be
denied. If the transfer request is valid in the database (that is, it
has not expired or been deleted), the volume is placed in an
awaiting-transfer
state. For example:
$ openstack volume transfer request create a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f
The output shows the volume transfer ID in the id
row and the
authorization key.
+------------+--------------------------------------+
| Field      | Value                                |
+------------+--------------------------------------+
| auth_key   | 0a59e53630f051e2                     |
| created_at | 2016-11-03T11:49:40.346181           |
| id         | 34e29364-142b-4c7b-8d98-88f765bf176f |
| name       | None                                 |
| volume_id  | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f |
+------------+--------------------------------------+
Optionally, you can specify a name for the transfer by using the
--name transferName
parameter.
While the auth_key property is visible in the output of openstack volume transfer request create VOLUME_ID, it is not available in the output of a subsequent openstack volume transfer request show TRANSFER_ID command.
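Because the key is shown only once, the donor may want to capture it at creation time. One way to do this is with the standard openstackclient output-formatting flags -f and -c; the volume ID below is the one from the example, and the exact column names may vary by client version:

```console
$ openstack volume transfer request create \
    -f value -c id -c auth_key \
    a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f
```

This prints only the transfer ID and the authorization key, which is convenient for scripting the handoff to the recipient.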
Send the volume transfer ID and authorization key to the new owner (for example, by email).
View pending transfers:
$ openstack volume transfer request list
+--------------------------------------+--------------------------------------+------+
| ID                                   | Volume                               | Name |
+--------------------------------------+--------------------------------------+------+
| 6e4e9aa4-bed5-4f94-8f76-df43232f44dc | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f | None |
+--------------------------------------+--------------------------------------+------+
After the volume recipient, or new owner, accepts the transfer, you can see that the transfer is no longer available:
$ openstack volume transfer request list
+----+-----------+------+
| ID | Volume ID | Name |
+----+-----------+------+
+----+-----------+------+
As the volume recipient, you must first obtain the transfer ID and authorization key from the original owner.
Accept the request:
$ openstack volume transfer request accept transferID authKey
For example:
$ openstack volume transfer request accept 6e4e9aa4-bed5-4f94-8f76-df43232f44dc b2c8e585cbc68a80
+-----------+--------------------------------------+
| Property  | Value                                |
+-----------+--------------------------------------+
| id        | 6e4e9aa4-bed5-4f94-8f76-df43232f44dc |
| name      | None                                 |
| volume_id | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f |
+-----------+--------------------------------------+
If you do not have a sufficient quota for the transfer, the transfer is refused.
List available volumes and their statuses:
$ openstack volume list
+-----------------+--------------+-------------------+------+-------------+
| ID              | Display Name | Status            | Size | Attached to |
+-----------------+--------------+-------------------+------+-------------+
| 72bfce9f-cac... | None         | error             | 1    |             |
| a1cdace0-08e... | None         | awaiting-transfer | 1    |             |
+-----------------+--------------+-------------------+------+-------------+
Find the matching transfer ID:
$ openstack volume transfer request list
+--------------------------------------+--------------------------------------+------+
| ID                                   | Volume ID                            | Name |
+--------------------------------------+--------------------------------------+------+
| a6da6888-7cdf-4291-9c08-8c1f22426b8a | a1cdace0-08e4-4dc7-b9dc-457e9bcfe25f | None |
+--------------------------------------+--------------------------------------+------+
Delete the volume transfer request:
$ openstack volume transfer request delete <transfer>

<transfer>
Name or ID of transfer to delete.
For example:
$ openstack volume transfer request delete a6da6888-7cdf-4291-9c08-8c1f22426b8a
Verify that transfer list is now empty and that the volume is again available for transfer:
$ openstack volume transfer request list
+----+-----------+------+
| ID | Volume ID | Name |
+----+-----------+------+
+----+-----------+------+
$ openstack volume list
+----------------+-----------+--------------+------+-------------+----------+-------------+
| ID             | Status    | Display Name | Size | Volume Type | Bootable | Attached to |
+----------------+-----------+--------------+------+-------------+----------+-------------+
| 72bfce9f-ca... | error     | None         | 1    | None        | false    |             |
| a1cdace0-08... | available | None         | 1    | None        | false    |             |
+----------------+-----------+--------------+------+-------------+----------+-------------+
A snapshot is a point-in-time copy of a volume. As an administrator, you can manage and unmanage snapshots.
Manage a snapshot with the openstack snapshot set
command:
$ openstack snapshot set \
    [--name <name>] \
    [--description <description>] \
    [--property <key=value> [...] ] \
    [--state <state>] \
    <snapshot>
The arguments to be passed are:
--name
New snapshot name
--description
New snapshot description
--property
Property to add or modify for this snapshot (repeat option to set multiple properties)
--state
New snapshot state: “available”, “error”, “creating”, “deleting”, or “error_deleting” (admin only). This option changes only the state recorded in the database, without regard to the snapshot's actual status; exercise caution when using it.
<snapshot>
Snapshot to modify (name or ID)
$ openstack snapshot set my-snapshot-id
Unmanage a snapshot with the cinder snapshot-unmanage
command:
$ cinder snapshot-unmanage SNAPSHOT
The arguments to be passed are:
Name or ID of the snapshot to unmanage.
The following example unmanages the my-snapshot-id
snapshot:
$ cinder snapshot-unmanage my-snapshot-id
A share is a unit of file storage. You can give instances access to a share. To create and manage shares, use manila
client commands.
Create a share network.
$ manila share-network-create \
    --name mysharenetwork \
    --description "My Manila network" \
    --neutron-net-id dca0efc7-523d-43ef-9ded-af404a02b055 \
    --neutron-subnet-id 29ecfbd5-a9be-467e-8b4a-3415d1f82888
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| name              | mysharenetwork                       |
| segmentation_id   | None                                 |
| created_at        | 2016-03-24T14:13:02.888816           |
| neutron_subnet_id | 29ecfbd5-a9be-467e-8b4a-3415d1f82888 |
| updated_at        | None                                 |
| network_type      | None                                 |
| neutron_net_id    | dca0efc7-523d-43ef-9ded-af404a02b055 |
| ip_version        | None                                 |
| nova_net_id       | None                                 |
| cidr              | None                                 |
| project_id        | 907004508ef4447397ce6741a8f037c1     |
| id                | c895fe26-92be-4152-9e6c-f2ad230efb13 |
| description       | My Manila network                    |
+-------------------+--------------------------------------+
List share networks.
$ manila share-network-list
+--------------------------------------+----------------+
| id                                   | name           |
+--------------------------------------+----------------+
| c895fe26-92be-4152-9e6c-f2ad230efb13 | mysharenetwork |
+--------------------------------------+----------------+
Create a share.
$ manila create NFS 1 \
    --name myshare \
    --description "My Manila share" \
    --share-network mysharenetwork \
    --share-type default
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| status                      | creating                             |
| share_type_name             | default                              |
| description                 | My Manila share                      |
| availability_zone           | None                                 |
| share_network_id            | c895fe26-92be-4152-9e6c-f2ad230efb13 |
| share_server_id             | None                                 |
| host                        |                                      |
| access_rules_status         | active                               |
| snapshot_id                 | None                                 |
| is_public                   | False                                |
| task_state                  | None                                 |
| snapshot_support            | True                                 |
| id                          | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 |
| size                        | 1                                    |
| name                        | myshare                              |
| share_type                  | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf |
| has_replicas                | False                                |
| replication_type            | None                                 |
| created_at                  | 2016-03-24T14:15:34.000000           |
| share_proto                 | NFS                                  |
| consistency_group_id        | None                                 |
| source_cgsnapshot_member_id | None                                 |
| project_id                  | 907004508ef4447397ce6741a8f037c1     |
| metadata                    | {}                                   |
+-----------------------------+--------------------------------------+
Show a share.
$ manila show myshare +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | available | | share_type_name | default | | description | My Manila share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = False | | | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = True | | | id = 6921e862-88bc-49a5-a2df-efeed9acd583 | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 1 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | consistency_group_id | None | | source_cgsnapshot_member_id | None | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+
List shares.
$ manila list
+--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
| ID                                   | Name    | Size | Share Proto | Status    | Is Public | Share Type Name | Host                        | Availability Zone |
+--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
| 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | myshare | 1    | NFS         | available | False     | default         | nosb-devstack@london#LONDON | nova              |
+--------------------------------------+---------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
List share export locations.
$ manila share-export-location-list myshare
+--------------------------------------+--------------------------------------------------------+-----------+
| ID                                   | Path                                                   | Preferred |
+--------------------------------------+--------------------------------------------------------+-----------+
| 6921e862-88bc-49a5-a2df-efeed9acd583 | 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d   | False     |
| b6bd76ce-12a2-42a9-a30a-8a43b503867d | 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | False     |
+--------------------------------------+--------------------------------------------------------+-----------+
Allow access.
$ manila access-allow myshare ip 10.0.0.0/24
+--------------+--------------------------------------+
| Property     | Value                                |
+--------------+--------------------------------------+
| share_id     | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 |
| access_type  | ip                                   |
| access_to    | 10.0.0.0/24                          |
| access_level | rw                                   |
| state        | new                                  |
| id           | 0c8470ca-0d77-490c-9e71-29e1f453bf97 |
+--------------+--------------------------------------+
List access.
$ manila access-list myshare
+--------------------------------------+-------------+-------------+--------------+--------+
| id                                   | access_type | access_to   | access_level | state  |
+--------------------------------------+-------------+-------------+--------------+--------+
| 0c8470ca-0d77-490c-9e71-29e1f453bf97 | ip          | 10.0.0.0/24 | rw           | active |
+--------------------------------------+-------------+-------------+--------------+--------+
The access is created.
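Once an instance's address is covered by an access rule, the share can be mounted from that instance over NFS. A sketch, using the first export path from the example above; the mount point /mnt/myshare is an arbitrary illustrative choice:

```console
# Run inside an instance whose IP is allowed by the access rule.
$ sudo mkdir -p /mnt/myshare
$ sudo mount -t nfs 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d /mnt/myshare
```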
Allow access.
$ manila access-allow myshare ip 20.0.0.0/24 --access-level ro
+--------------+--------------------------------------+
| Property     | Value                                |
+--------------+--------------------------------------+
| share_id     | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 |
| access_type  | ip                                   |
| access_to    | 20.0.0.0/24                          |
| access_level | ro                                   |
| state        | new                                  |
| id           | f151ad17-654d-40ce-ba5d-98a5df67aadc |
+--------------+--------------------------------------+
List access.
$ manila access-list myshare
+--------------------------------------+-------------+-------------+--------------+--------+
| id                                   | access_type | access_to   | access_level | state  |
+--------------------------------------+-------------+-------------+--------------+--------+
| 0c8470ca-0d77-490c-9e71-29e1f453bf97 | ip          | 10.0.0.0/24 | rw           | active |
| f151ad17-654d-40ce-ba5d-98a5df67aadc | ip          | 20.0.0.0/24 | ro           | active |
+--------------------------------------+-------------+-------------+--------------+--------+
The access is created.
Deny access.
$ manila access-deny myshare 0c8470ca-0d77-490c-9e71-29e1f453bf97
$ manila access-deny myshare f151ad17-654d-40ce-ba5d-98a5df67aadc
List access.
$ manila access-list myshare
+----+-------------+-----------+--------------+-------+
| id | access type | access to | access level | state |
+----+-------------+-----------+--------------+-------+
+----+-------------+-----------+--------------+-------+
The access is removed.
Create a snapshot.
$ manila snapshot-create --name mysnapshot --description "My Manila snapshot" myshare
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| status            | creating                             |
| share_id          | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 |
| description       | My Manila snapshot                   |
| created_at        | 2016-03-24T14:39:58.232844           |
| share_proto       | NFS                                  |
| provider_location | None                                 |
| id                | e744ca47-0931-4e81-9d9f-2ead7d7c1640 |
| size              | 1                                    |
| share_size        | 1                                    |
| name              | mysnapshot                           |
+-------------------+--------------------------------------+
List snapshots.
$ manila snapshot-list
+--------------------------------------+--------------------------------------+-----------+------------+------------+
| ID                                   | Share ID                             | Status    | Name       | Share Size |
+--------------------------------------+--------------------------------------+-----------+------------+------------+
| e744ca47-0931-4e81-9d9f-2ead7d7c1640 | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | available | mysnapshot | 1          |
+--------------------------------------+--------------------------------------+-----------+------------+------------+
Create a share from a snapshot.
$ manila create NFS 1 \
    --snapshot-id e744ca47-0931-4e81-9d9f-2ead7d7c1640 \
    --share-network mysharenetwork \
    --name mysharefromsnap
+-----------------------------+--------------------------------------+
| Property                    | Value                                |
+-----------------------------+--------------------------------------+
| status                      | creating                             |
| share_type_name             | default                              |
| description                 | None                                 |
| availability_zone           | nova                                 |
| share_network_id            | c895fe26-92be-4152-9e6c-f2ad230efb13 |
| share_server_id             | None                                 |
| host                        | nosb-devstack@london#LONDON          |
| access_rules_status         | active                               |
| snapshot_id                 | e744ca47-0931-4e81-9d9f-2ead7d7c1640 |
| is_public                   | False                                |
| task_state                  | None                                 |
| snapshot_support            | True                                 |
| id                          | e73ebcd3-4764-44f0-9b42-fab5cf34a58b |
| size                        | 1                                    |
| name                        | mysharefromsnap                      |
| share_type                  | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf |
| has_replicas                | False                                |
| replication_type            | None                                 |
| created_at                  | 2016-03-24T14:41:36.000000           |
| share_proto                 | NFS                                  |
| consistency_group_id        | None                                 |
| source_cgsnapshot_member_id | None                                 |
| project_id                  | 907004508ef4447397ce6741a8f037c1     |
| metadata                    | {}                                   |
+-----------------------------+--------------------------------------+
List shares.
$ manila list
+--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
| ID                                   | Name            | Size | Share Proto | Status    | Is Public | Share Type Name | Host                        | Availability Zone |
+--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
| 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | myshare         | 1    | NFS         | available | False     | default         | nosb-devstack@london#LONDON | nova              |
| e73ebcd3-4764-44f0-9b42-fab5cf34a58b | mysharefromsnap | 1    | NFS         | available | False     | default         | nosb-devstack@london#LONDON | nova              |
+--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
Show the share created from snapshot.
$ manila show mysharefromsnap +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | available | | share_type_name | default | | description | None | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-4c00cb49-51d9-478e-abc1-d1853efaf6d3 | | | preferred = False | | | is_admin_only = False | | | id = 5419fb40-04b9-4a52-b08e-19aa1ce13a5c | | | share_instance_id = 4c00cb49-51d9-478e-abc1-d1853efaf6d3 | | | path = 10.0.0.3:/share-4c00cb49-51d9-478e-abc1-d1853efaf6d3 | | | preferred = False | | | is_admin_only = True | | | id = 26f55e4c-6edc-4e55-8c55-c62b7db1aa9f | | | share_instance_id = 4c00cb49-51d9-478e-abc1-d1853efaf6d3 | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | e744ca47-0931-4e81-9d9f-2ead7d7c1640 | | is_public | False | | task_state | None | | snapshot_support | True | | id | e73ebcd3-4764-44f0-9b42-fab5cf34a58b | | size | 1 | | name | mysharefromsnap | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:41:36.000000 | | share_proto | NFS | | consistency_group_id | None | | source_cgsnapshot_member_id | None | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+
Delete a share.
$ manila delete mysharefromsnap
List shares.
$ manila list
+--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
| ID                                   | Name            | Size | Share Proto | Status    | Is Public | Share Type Name | Host                        | Availability Zone |
+--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
| 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | myshare         | 1    | NFS         | available | False     | default         | nosb-devstack@london#LONDON | nova              |
| e73ebcd3-4764-44f0-9b42-fab5cf34a58b | mysharefromsnap | 1    | NFS         | deleting  | False     | default         | nosb-devstack@london#LONDON | nova              |
+--------------------------------------+-----------------+------+-------------+-----------+-----------+-----------------+-----------------------------+-------------------+
The share is being deleted.
List snapshots before deleting.
$ manila snapshot-list
+--------------------------------------+--------------------------------------+-----------+------------+------------+
| ID                                   | Share ID                             | Status    | Name       | Share Size |
+--------------------------------------+--------------------------------------+-----------+------------+------------+
| e744ca47-0931-4e81-9d9f-2ead7d7c1640 | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | available | mysnapshot | 1          |
+--------------------------------------+--------------------------------------+-----------+------------+------------+
Delete a snapshot.
$ manila snapshot-delete mysnapshot
List snapshots after deleting.
$ manila snapshot-list
+----+----------+--------+------+------------+
| ID | Share ID | Status | Name | Share Size |
+----+----------+--------+------+------------+
+----+----------+--------+------+------------+
The snapshot is deleted.
Extend share.
$ manila extend myshare 2
Show the share while it is being extended.
$ manila show myshare +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | extending | | share_type_name | default | | description | My Manila share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = False | | | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = True | | | id = 6921e862-88bc-49a5-a2df-efeed9acd583 | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 1 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | consistency_group_id | None | | source_cgsnapshot_member_id | None | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+
Show the share after it is extended.
$ manila show myshare +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | available | | share_type_name | default | | description | My Manila share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = False | | | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = True | | | id = 6921e862-88bc-49a5-a2df-efeed9acd583 | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 2 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | consistency_group_id | None | | source_cgsnapshot_member_id | None | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+
Shrink a share.
$ manila shrink myshare 1
Show the share while it is being shrunk.
$ manila show myshare +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | shrinking | | share_type_name | default | | description | My Manila share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = False | | | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = True | | | id = 6921e862-88bc-49a5-a2df-efeed9acd583 | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 2 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | consistency_group_id | None | | source_cgsnapshot_member_id | None | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+
Show the share after it is shrunk.
$ manila show myshare +-----------------------------+---------------------------------------------------------------+ | Property | Value | +-----------------------------+---------------------------------------------------------------+ | status | available | | share_type_name | default | | description | My Manila share | | availability_zone | nova | | share_network_id | c895fe26-92be-4152-9e6c-f2ad230efb13 | | export_locations | | | | path = 10.254.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = False | | | id = b6bd76ce-12a2-42a9-a30a-8a43b503867d | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | path = 10.0.0.3:/share-e1c2d35e-fe67-4028-ad7a-45f668732b1d | | | preferred = False | | | is_admin_only = True | | | id = 6921e862-88bc-49a5-a2df-efeed9acd583 | | | share_instance_id = e1c2d35e-fe67-4028-ad7a-45f668732b1d | | share_server_id | 2e9d2d02-883f-47b5-bb98-e053b8d1e683 | | host | nosb-devstack@london#LONDON | | access_rules_status | active | | snapshot_id | None | | is_public | False | | task_state | None | | snapshot_support | True | | id | 8d8b854b-ec32-43f1-acc0-1b2efa7c3400 | | size | 1 | | name | myshare | | share_type | bf6ada49-990a-47c3-88bc-c0cb31d5c9bf | | has_replicas | False | | replication_type | None | | created_at | 2016-03-24T14:15:34.000000 | | share_proto | NFS | | consistency_group_id | None | | source_cgsnapshot_member_id | None | | project_id | 907004508ef4447397ce6741a8f037c1 | | metadata | {} | +-----------------------------+---------------------------------------------------------------+
When you launch a virtual machine, you can inject a key pair, which
provides SSH access to your instance. For this to work, the image must
contain the cloud-init
package.
You can create at least one key pair for each project. You can use the key pair for multiple instances that belong to that project. If you generate a key pair with an external tool, you can import it into OpenStack.
A key pair belongs to an individual user, not to a project. To share a key pair across multiple users, each user needs to import that key pair.
If an image uses a static root password or a static key set (neither is recommended), you must not provide a key pair when you launch the instance.
A security group is a named collection of network access rules that are used to limit the types of traffic that have access to instances. When you launch an instance, you can assign one or more security groups to it. If you do not create security groups, new instances are automatically assigned to the default security group, unless you explicitly specify a different security group.
The associated rules in each security group control the traffic to instances in the group. Any incoming traffic that is not matched by a rule is denied access by default. You can add rules to or remove rules from a security group, and you can modify rules for the default and any other security group.
You can modify the rules in a security group to allow access to instances through different ports and protocols. For example, you can modify rules to allow access to instances through SSH, to ping instances, or to allow UDP traffic; for example, for a DNS server running on an instance. You specify the following parameters for rules:
Source of traffic. Allow traffic to instances either from IP addresses inside the cloud (other security group members) or from all IP addresses.
Protocol. Choose TCP for SSH, ICMP for pings, or UDP.
Destination port on virtual machine. Define a port range. To open a single port only, enter the same value twice. ICMP does not support ports; instead, you enter values to define the codes and types of ICMP traffic to be allowed.
Rules are automatically enforced as soon as you create or modify them.
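The three rule parameters above map directly onto options of the openstack security group rule create command shown later in this section. As a minimal sketch, a helper like the following (hypothetical; not part of the OpenStack CLI) composes such a command from the parameters:

```shell
# Hypothetical helper (not part of the OpenStack CLI): compose a
# "security group rule create" command from the three rule parameters.
compose_rule() {
  group=$1    # security group to modify
  proto=$2    # protocol: tcp, udp, or icmp
  ports=$3    # destination port range, e.g. 22:22
  cidr=$4     # source of traffic in CIDR notation, e.g. 0.0.0.0/0
  echo "openstack security group rule create $group --protocol $proto --dst-port $ports --remote-ip $cidr"
}

# Example: allow SSH (TCP port 22) from any address.
compose_rule mygroup tcp 22:22 0.0.0.0/0
```

The printed command is what you would actually run against your cloud.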
Instances that use the default security group cannot, by default, be accessed from any IP address outside of the cloud. If you want those IP addresses to access the instances, you must modify the rules for the default security group. Additionally, security groups will automatically drop DHCP responses coming from instances.
You can also assign a floating IP address to a running instance to make it accessible from outside the cloud.
You can generate a key pair or upload an existing public key.
To generate a key pair, run the following command.
$ openstack keypair create KEY_NAME > MY_KEY.pem
This command generates a key pair with the name that you specify for KEY_NAME, writes the private key to the .pem file that you specify, and registers the public key in the Nova database.
To set the permissions of the .pem
file so that only you can read
and write to it, run the following command.
$ chmod 600 MY_KEY.pem
If you have already generated a key pair and the public key is located
at ~/.ssh/id_rsa.pub
, run the following command to upload the public
key.
$ openstack keypair create --public-key ~/.ssh/id_rsa.pub KEY_NAME
This command registers the public key in the Nova database and gives the key pair the name that you specify for KEY_NAME.
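If you do not already have a key pair on disk, you can generate one locally with ssh-keygen before uploading the public half as shown above. A sketch, in which the ./demo_key path and key parameters are only examples:

```shell
# Generate a local 2048-bit RSA key pair with no passphrase
# (the ./demo_key path is only an example).
ssh-keygen -t rsa -b 2048 -N "" -f ./demo_key -q

# Restrict the private key so that only you can read and write it.
chmod 600 ./demo_key

# The public half is the file you would upload:
#   openstack keypair create --public-key ./demo_key.pub KEY_NAME
ls ./demo_key ./demo_key.pub
```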
To ensure that the key pair has been successfully imported, list key pairs as follows:
$ openstack keypair list
To list the security groups for the current project, including descriptions, enter the following command:
$ openstack security group list
To create a security group with a specified name and description, enter the following command:
$ openstack security group create SECURITY_GROUP_NAME --description GROUP_DESCRIPTION
To delete a specified group, enter the following command:
$ openstack security group delete SECURITY_GROUP_NAME
You cannot delete the default security group for a project. Also, you cannot delete a security group that is assigned to a running instance.
Modify security group rules with the openstack security group rule
commands. Before you begin, source the OpenStack RC file.
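The RC file is a shell script that exports the credentials the client reads from the environment. A minimal sketch, in which every value is a placeholder for your own cloud's details:

```shell
# openrc.sh -- minimal OpenStack RC file; every value below is a
# placeholder that you replace with your own cloud's details.
export OS_AUTH_URL=https://identity.example.com:5000/v3
export OS_PROJECT_NAME=myproject
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USERNAME=myuser
export OS_USER_DOMAIN_NAME=Default
export OS_PASSWORD=secret    # better: read -s -p "Password: " OS_PASSWORD
export OS_REGION_NAME=RegionOne
```

Source it with `. openrc.sh` in the shell where you run openstack commands.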
To list the rules for a security group, run the following command:
$ openstack security group rule list SECURITY_GROUP_NAME
To allow SSH access to the instances, choose one of the following options:
Allow access from all IP addresses, specified as IP subnet 0.0.0.0/0
in CIDR notation:
$ openstack security group rule create SECURITY_GROUP_NAME \ --protocol tcp --dst-port 22:22 --remote-ip 0.0.0.0/0
Allow access only from IP addresses from other security groups (source groups) to access the specified port:
$ openstack security group rule create SECURITY_GROUP_NAME \ --protocol tcp --dst-port 22:22 --remote-group SOURCE_GROUP_NAME
To allow pinging of the instances, choose one of the following options:
Allow pinging from all IP addresses, specified as IP subnet
0.0.0.0/0
in CIDR notation.
$ openstack security group rule create SECURITY_GROUP_NAME \ --protocol icmp --dst-port -1:-1 --remote-ip 0.0.0.0/0
This allows access to all codes and all types of ICMP traffic.
Allow only members of other security groups (source groups) to ping instances.
$ openstack security group rule create SECURITY_GROUP_NAME \ --protocol icmp --dst-port -1:-1 --remote-group SOURCE_GROUP_NAME
To allow access through a UDP port, such as allowing access to a DNS server that runs on a VM, choose one of the following options:
Allow UDP access from IP addresses, specified as IP subnet
0.0.0.0/0
in CIDR notation.
$ openstack security group rule create SECURITY_GROUP_NAME \ --protocol udp --dst-port 53:53 --remote-ip 0.0.0.0/0
Allow only IP addresses from other security groups (source groups) to access the specified port.
$ openstack security group rule create SECURITY_GROUP_NAME \ --protocol udp --dst-port 53:53 --remote-group SOURCE_GROUP_NAME
To delete a security group rule, specify the ID of the rule.
$ openstack security group rule delete RULE_ID
Instances are virtual machines that run inside the cloud.
Before you can launch an instance, gather the following parameters:
The instance source can be an image, snapshot, or block storage volume that contains an image or snapshot.
A name for your instance.
The flavor for your instance, which defines the compute, memory, and storage capacity of the instance. A flavor is an available hardware configuration for a server. It defines the size of a virtual server that can be launched.
Any user data files. A user data file is a special key in the metadata service that holds a file that cloud-aware applications in the guest instance can access. For example, one application that uses user data is the cloud-init system, which is an open-source package from Ubuntu that is available on various Linux distributions and that handles early initialization of a cloud instance.
Access and security credentials, which include one or both of the following credentials:
A key pair for your instance, which are SSH credentials that
are injected into images when they are launched. For the key pair
to be successfully injected, the image must contain the
cloud-init
package. Create at least one key pair for each
project. If you already have generated a key pair with an external
tool, you can import it into OpenStack. You can use the key pair
for multiple instances that belong to that project.
A security group that defines which incoming network traffic is forwarded to instances. Security groups hold a set of firewall policies, known as security group rules.
If needed, you can assign a floating (public) IP address to a running instance.
You can also attach a block storage device, or volume, for persistent storage.
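One of the parameters above is a user data file; for cloud-init, this is typically a #cloud-config document. The sketch below writes a minimal one (the package_update and runcmd contents are illustrative only):

```shell
# Write a minimal cloud-config user data file; the package_update and
# runcmd contents are illustrative only.
cat > cloudinit.file <<'EOF'
#cloud-config
package_update: true
runcmd:
  - echo "first boot complete" > /tmp/first-boot-marker
EOF

head -n 1 cloudinit.file
```

You pass the file at launch time with the `--user-data cloudinit.file` parameter.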
Instances that use the default security group cannot, by default, be accessed from any IP address outside of the cloud. If you want those IP addresses to access the instances, you must modify the rules for the default security group.
You can also assign a floating IP address to a running instance to make it accessible from outside the cloud.
After you gather the parameters that you need to launch an instance, you can launch it from an image or a volume. You can launch an instance directly from one of the available OpenStack images or from an image that you have copied to a persistent volume. The OpenStack Image service provides a pool of images that are accessible to members of different projects.
Before you begin, source the OpenStack RC file.
List the available flavors.
$ openstack flavor list
Note the ID of the flavor that you want to use for your instance:
+-----+-----------+-------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is_Public | +-----+-----------+-------+------+-----------+-------+-----------+ | 1 | m1.tiny | 512 | 1 | 0 | 1 | True | | 2 | m1.small | 2048 | 20 | 0 | 1 | True | | 3 | m1.medium | 4096 | 40 | 0 | 2 | True | | 4 | m1.large | 8192 | 80 | 0 | 4 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True | +-----+-----------+-------+------+-----------+-------+-----------+
List the available images.
$ openstack image list
Note the ID of the image from which you want to boot your instance:
+--------------------------------------+---------------------------------+--------+ | ID | Name | Status | +--------------------------------------+---------------------------------+--------+ | 397e713c-b95b-4186-ad46-6126863ea0a9 | cirros-0.3.2-x86_64-uec | active | | df430cc2-3406-4061-b635-a51c16e488ac | cirros-0.3.2-x86_64-uec-kernel | active | | 3cf852bd-2332-48f4-9ae4-7d926d50945e | cirros-0.3.2-x86_64-uec-ramdisk | active | +--------------------------------------+---------------------------------+--------+
You can also filter the image list by using grep
to find a specific
image, as follows:
$ openstack image list | grep 'kernel' | df430cc2-3406-4061-b635-a51c16e488ac | cirros-0.3.2-x86_64-uec-kernel | active |
List the available security groups.
$ openstack security group list
If you are an admin user, this command will list groups for all tenants.
Note the ID of the security group that you want to use for your instance:
+--------------------------------------+---------+------------------------+----------------------------------+ | ID | Name | Description | Project | +--------------------------------------+---------+------------------------+----------------------------------+ | b0d78827-0981-45ef-8561-93aee39bbd9f | default | Default security group | 5669caad86a04256994cdf755df4d3c1 | | ec02e79e-83e1-48a5-86ad-14ab9a8c375f | default | Default security group | 1eaaf6ede7a24e78859591444abf314a | +--------------------------------------+---------+------------------------+----------------------------------+
If you have not created any security groups, you can assign the instance to only the default security group.
You can view rules for a specified security group:
$ openstack security group rule list default
List the available key pairs, and note the key pair name that you use for SSH access.
$ openstack keypair list
You can launch an instance from various sources.
Follow the steps below to launch an instance from an image.
After you gather required parameters, run the following command to launch an instance. Specify the server name, flavor ID, and image ID.
$ openstack server create --flavor FLAVOR_ID --image IMAGE_ID --key-name KEY_NAME \ --user-data USER_DATA_FILE --security-group SEC_GROUP_NAME --property KEY=VALUE \ INSTANCE_NAME
Optionally, you can provide a key name for access control and a security
group for security. You can also include metadata key and value pairs.
For example, you can add a description for your server by providing the
--property description="My Server"
parameter.
You can pass user data in a local file at instance launch by using the
--user-data USER-DATA-FILE
parameter.
If you boot an instance with an INSTANCE_NAME greater than 63 characters,
Compute automatically truncates it when turning it into a host name, to
ensure that dnsmasq works correctly. The corresponding warning is written
to the neutron-dnsmasq.log
file.
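The 63-character limit matches the maximum length of a DNS label. The following sketch only illustrates the effect of truncating an overlong name; it is not Compute's actual implementation:

```shell
# Illustrate the 63-character host name truncation (a sketch of the
# effect, not Compute's actual implementation).
long_name=$(printf 'a%.0s' $(seq 1 80))            # an 80-character name
host_name=$(printf '%s' "$long_name" | cut -c1-63) # keep the first 63
echo "${#host_name}"
```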
The following command launches the myCirrosServer
instance with the
m1.tiny
flavor (ID of 1
), cirros-0.3.2-x86_64-uec
image (ID
of 397e713c-b95b-4186-ad46-6126863ea0a9
), default
security
group, KeyPair01
key, and a user data file called
cloudinit.file
:
$ openstack server create --flavor 1 --image 397e713c-b95b-4186-ad46-6126863ea0a9 \ --security-group default --key-name KeyPair01 --user-data cloudinit.file \ myCirrosServer
Depending on the parameters that you provide, the command returns a list of server properties.
+--------------------------------------+-----------------------------------------------+ | Field | Value | +--------------------------------------+-----------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | | | OS-EXT-SRV-ATTR:host | None | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | OS-EXT-SRV-ATTR:instance_name | | | OS-EXT-STS:power_state | NOSTATE | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | None | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | | | adminPass | E4Ksozt4Efi8 | | config_drive | | | created | 2016-11-30T14:48:05Z | | flavor | m1.tiny | | hostId | | | id | 89015cc9-bdf1-458a-8518-fdca2b4a5785 | | image | cirros (9fef3b2d-c35d-4b61-bea8-09cc6dc41829) | | key_name | KeyPair01 | | name | myCirrosServer | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | 5669caad86a04256994cdf755df4d3c1 | | properties | | | security_groups | [{u'name': u'default'}] | | status | BUILD | | updated | 2016-11-30T14:48:05Z | | user_id | c36cec73b0e44876a4478b1e6cd749bb | | metadata | {u'KEY': u'VALUE'} | +--------------------------------------+-----------------------------------------------+
A status of BUILD
indicates that the instance has started, but is
not yet online.
A status of ACTIVE
indicates that the instance is active.
Copy the server ID value from the id
field in the output. Use the
ID to get server details or to delete your server.
Copy the administrative password value from the adminPass
field. Use the
password to log in to your server.
You can also place arbitrary local files into the instance file
system at creation time by using the --file <dst-path=src-path>
option. You can store up to five files. For example, if you have a
special authorized keys file named special_authorized_keysfile
that
you want to put on the instance rather than using the regular SSH key
injection, you can use the --file
option as shown in the following
example.
$ openstack server create --image ubuntu-cloudimage --flavor 1 vm-name \ --file /root/.ssh/authorized_keys=special_authorized_keysfile
Check if the instance is online.
$ openstack server list
The list shows the ID, name, status, and private (and if assigned, public) IP addresses for all instances in the project to which you belong:
+-------------+----------------------+--------+------------+-------------+------------------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | +-------------+----------------------+--------+------------+-------------+------------------+------------+ | 84c6e57d... | myCirrosServer | ACTIVE | None | Running | private=10.0.0.3 | cirros | | 8a99547e... | myInstanceFromVolume | ACTIVE | None | Running | private=10.0.0.4 | centos | +-------------+----------------------+--------+------------+-------------+------------------+------------+
If the status for the instance is ACTIVE, the instance is online.
To view the available options for the openstack server list
command, run the following command:
$ openstack help server list
If you did not provide a key pair, security groups, or rules, you can access the instance only from inside the cloud through VNC. Even pinging the instance is not possible.
You can boot instances from a volume instead of an image.
To complete these tasks, use these parameters on the
openstack server create
command:
Task |
Parameter |
Information |
---|---|---|
Boot an instance from an image and attach a non-bootable volume. |
--block-device |
Section 4.10.2.2.1, “Boot instance from image and attach non-bootable volume” |
Create a volume from an image and boot an instance from that volume. |
--block-device |
Section 4.10.2.2.2, “Create volume from image and boot instance” |
Boot from an existing source image, volume, or snapshot. |
--block-device |
Section 4.10.2.2.2, “Create volume from image and boot instance” |
Attach a swap disk to an instance. |
--swap |
Section 4.10.2.2.3, “Attach swap or ephemeral disk to an instance” |
Attach an ephemeral disk to an instance. |
--ephemeral |
Section 4.10.2.2.3, “Attach swap or ephemeral disk to an instance” |
Create a non-bootable volume and attach that volume to an instance that you boot from an image.
To create a non-bootable volume, do not create it from an image. The volume must be entirely empty with no partition table and no file system.
Create a non-bootable volume.
$ openstack volume create --size 8 my-volume +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | attachments | [] | | availability_zone | nova | | bootable | false | | consistencygroup_id | None | | created_at | 2016-11-25T10:37:08.850997 | | description | None | | encrypted | False | | id | b8f7bbec-6274-4cd7-90e7-60916a5e75d4 | | migration_status | None | | multiattach | False | | name | my-volume | | properties | | | replication_status | disabled | | size | 8 | | snapshot_id | None | | source_volid | None | | status | creating | | type | None | | updated_at | None | | user_id | 0678735e449149b0a42076e12dd54e28 | +---------------------+--------------------------------------+
List volumes.
$ openstack volume list +--------------------------------------+--------------+-----------+------+-------------+ | ID | Display Name | Status | Size | Attached to | +--------------------------------------+--------------+-----------+------+-------------+ | b8f7bbec-6274-4cd7-90e7-60916a5e75d4 | my-volume | available | 8 | | +--------------------------------------+--------------+-----------+------+-------------+
Boot an instance from an image and attach the empty volume to the instance.
$ openstack server create --flavor 2 --image 98901246-af91-43d8-b5e6-a4506aa8f369 \ --block-device source=volume,id=d620d971-b160-4c4e-8652-2513d74e2080,dest=volume,shutdown=preserve \ myInstanceWithVolume +--------------------------------------+--------------------------------------------+ | Field | Value | +--------------------------------------+--------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000004 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | | accessIPv6 | | | adminPass | ZaiYeC8iucgU | | config_drive | | | created | 2014-05-09T16:34:50Z | | flavor | m1.small (2) | | hostId | | | id | 1e1797f3-1662-49ff-ae8c-a77e82ee1571 | | image | cirros-0.3.1-x86_64-uec (98901246-af91-... | | key_name | - | | metadata | {} | | name | myInstanceWithVolume | | os-extended-volumes:volumes_attached | [{"id": "d620d971-b160-4c4e-8652-2513d7... | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | ccef9e62b1e645df98728fb2b3076f27 | | updated | 2014-05-09T16:34:51Z | | user_id | fef060ae7bfd4024b3edb97dff59017a | +--------------------------------------+--------------------------------------------+
You can create a volume from an existing image, volume, or snapshot. This procedure shows you how to create a volume from an image, and use the volume to boot an instance.
List the available images.
$ openstack image list +-----------------+---------------------------------+--------+ | ID | Name | Status | +-----------------+---------------------------------+--------+ | 484e05af-a14... | Fedora-x86_64-20-20131211.1-sda | active | | 98901246-af9... | cirros-0.3.1-x86_64-uec | active | | b6e95589-7eb... | cirros-0.3.1-x86_64-uec-kernel | active | | c90893ea-e73... | cirros-0.3.1-x86_64-uec-ramdisk | active | +-----------------+---------------------------------+--------+
Note the ID of the image that you want to use to create a volume.
If you want to create a volume on a specific storage back end, use an image that has the cinder_img_volume_type property set. The new volume is then created with the volume type specified by that property (for example, storage_backend1).
$ openstack image show 98901246-af9... +------------------+------------------------------------------------------+ | Field | Value | +------------------+------------------------------------------------------+ | checksum | ee1eca47dc88f4879d8a229cc70a07c6 | | container_format | bare | | created_at | 2016-10-08T14:59:05Z | | disk_format | qcow2 | | file | /v2/images/9fef3b2d-c35d-4b61-bea8-09cc6dc41829/file | | id | 98901246-af9d-4b61-bea8-09cc6dc41829 | | min_disk | 0 | | min_ram | 0 | | name | cirros-0.3.4-x86_64-uec | | owner | 8d8ef3cdf2b54c25831cbb409ad9ae86 | | protected | False | | schema | /v2/schemas/image | | size | 13287936 | | status | active | | tags | | | updated_at | 2016-10-19T09:12:52Z | | virtual_size | None | | visibility | public | +------------------+------------------------------------------------------+
List the available flavors.
$ openstack flavor list +-----+-----------+-------+------+-----------+-------+-----------+ | ID | Name | RAM | Disk | Ephemeral | VCPUs | Is_Public | +-----+-----------+-------+------+-----------+-------+-----------+ | 1 | m1.tiny | 512 | 1 | 0 | 1 | True | | 2 | m1.small | 2048 | 20 | 0 | 1 | True | | 3 | m1.medium | 4096 | 40 | 0 | 2 | True | | 4 | m1.large | 8192 | 80 | 0 | 4 | True | | 5 | m1.xlarge | 16384 | 160 | 0 | 8 | True | +-----+-----------+-------+------+-----------+-------+-----------+
Note the ID of the flavor that you want to use to create a volume.
To create a bootable volume from an image and launch an instance from
this volume, use the --block-device
parameter.
For example:
$ openstack server create --flavor FLAVOR --block-device \ source=SOURCE,id=ID,dest=DEST,size=SIZE,shutdown=PRESERVE,bootindex=INDEX \ NAME
The parameters are:
--flavor FLAVOR. The flavor ID or name.
--block-device source=SOURCE,id=ID,dest=DEST,size=SIZE,shutdown=PRESERVE,bootindex=INDEX.
source=SOURCE. The type of object used to create the block device. Valid values are volume, snapshot, image, and blank.
id=ID. The ID of the source object.
dest=DEST. The type of the target virtual device. Valid values are volume and local.
size=SIZE. The size of the volume that is created.
shutdown=PRESERVE. What to do with the volume when the instance is deleted. preserve does not delete the volume. remove deletes the volume.
bootindex=INDEX. Orders the boot disks. Use 0 to boot from this volume.
NAME. The name for the server.
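Putting the fields together, a --block-device argument is a single comma-separated string. A small sketch that assembles one (the helper and its example values are illustrative, not part of the CLI):

```shell
# Hypothetical helper (not part of the CLI): assemble a --block-device
# argument string from the fields documented above.
block_device_arg() {
  echo "source=$1,id=$2,dest=$3,size=$4,shutdown=$5,bootindex=$6"
}

# Example: boot from an existing 10 GB volume and keep it on delete.
block_device_arg volume d620d971-b160-4c4e-8652-2513d74e2080 volume 10 preserve 0
```

The printed string is what you would pass to `openstack server create --block-device`.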
Create a bootable volume from an image. Cinder makes a volume bootable
when the --image
parameter is passed.
$ openstack volume create --image IMAGE_ID --size SIZE_IN_GB bootable_volume
Create a VM from the previously created bootable volume. The volume is not deleted when the instance is terminated.
$ openstack server create --flavor 2 --volume VOLUME_ID \ --block-device source=volume,id=$VOLUME_ID,dest=volume,size=10,shutdown=preserve,bootindex=0 \ myInstanceFromVolume +--------------------------------------+--------------------------------+ | Field | Value | +--------------------------------------+--------------------------------+ | OS-EXT-STS:task_state | scheduling | | image | Attempt to boot from volume | | | - no image supplied | | OS-EXT-STS:vm_state | building | | OS-EXT-SRV-ATTR:instance_name | instance-00000003 | | OS-SRV-USG:launched_at | None | | flavor | m1.small | | id | 2e65c854-dba9-4f68-8f08-fe3... | | security_groups | [{u'name': u'default'}] | | user_id | 352b37f5c89144d4ad053413926... | | OS-DCF:diskConfig | MANUAL | | accessIPv4 | | | accessIPv6 | | | progress | 0 | | OS-EXT-STS:power_state | 0 | | OS-EXT-AZ:availability_zone | nova | | config_drive | | | status | BUILD | | updated | 2014-02-02T13:29:54Z | | hostId | | | OS-EXT-SRV-ATTR:host | None | | OS-SRV-USG:terminated_at | None | | key_name | None | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | | name | myInstanceFromVolume | | adminPass | TzjqyGsRcJo9 | | tenant_id | f7ac731cc11f40efbc03a9f9e1d... | | created | 2014-02-02T13:29:53Z | | os-extended-volumes:volumes_attached | [{"id": "2fff50ab..."}] | | metadata | {} | +--------------------------------------+--------------------------------+
List volumes to see the bootable volume and its attached
myInstanceFromVolume
instance.
$ openstack volume list +---------------------+-----------------+--------+------+---------------------------------+ | ID | Display Name | Status | Size | Attached to | +---------------------+-----------------+--------+------+---------------------------------+ | c612f739-8592-44c4- | bootable_volume | in-use | 10 | Attached to myInstanceFromVolume| | b7d4-0fee2fe1da0c | | | | on /dev/vda | +---------------------+-----------------+--------+------+---------------------------------+
Use the --swap
parameter of the openstack server create
command to attach a swap disk on boot,
or the --ephemeral
parameter to attach an ephemeral
disk on boot. When you terminate the instance, both disks are deleted.
Boot an instance with a 512 MB swap disk and 2 GB ephemeral disk.
$ openstack server create --flavor FLAVOR --image IMAGE_ID --swap 512 \ --ephemeral size=2 NAME
The flavor defines the maximum swap and ephemeral disk size. You cannot exceed these maximum values.
OpenStack supports booting instances from ISO images, but additional
steps are needed before such instances are functional. First, use the
openstack server create
command with the following parameters to boot an instance.
$ openstack server create \ --image ubuntu-14.04.2-server-amd64.iso \ --block-device source=blank,dest=volume,size=10,shutdown=preserve \ --nic net-id=NETWORK_UUID \ --flavor 2 INSTANCE_NAME +--------------------------------------+--------------------------------------------+ | Field | Value | +--------------------------------------+--------------------------------------------+ | OS-DCF:diskConfig | MANUAL | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | - | | OS-EXT-SRV-ATTR:hypervisor_hostname | - | | OS-EXT-SRV-ATTR:instance_name | instance-00000004 | | OS-EXT-STS:power_state | 0 | | OS-EXT-STS:task_state | scheduling | | OS-EXT-STS:vm_state | building | | OS-SRV-USG:launched_at | - | | OS-SRV-USG:terminated_at | - | | accessIPv4 | | accessIPv6 | | adminPass | ZaiYeC8iucgU | | config_drive | | created | 2015-06-01T16:34:50Z | | flavor | m1.small (2) | | hostId | | id | 1e1797f3-1662-49ff-ae8c-a77e82ee1571 | | image | ubuntu-14.04.2-server-amd64.iso | | key_name | - | | metadata | {} | | name | INSTANCE_NAME | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | security_groups | default | | status | BUILD | | tenant_id | ccef9e62b1e645df98728fb2b3076f27 | | updated | 2014-05-09T16:34:51Z | | user_id | fef060ae7bfd4024b3edb97dff59017a | +--------------------------------------+--------------------------------------------+
In this command, ubuntu-14.04.2-server-amd64.iso
is the ISO image,
and INSTANCE_NAME
is the name of the new instance. NETWORK_UUID
is a valid network id in your system.
The Block Storage service is required, and the
shutdown=preserve
parameter is mandatory, so that the volume is
preserved after the instance shuts down.
After the instance is successfully launched, connect to the instance using a remote console and follow the instructions to install the operating system, just as you would when installing from an ISO image on a physical computer. When the installation is finished and the system is rebooted, the instance asks you again to install the operating system, which means your instance is not yet usable. If you have problems with image creation, see the Virtual Machine Image Guide for reference.
Now complete the following steps to make the instances created from the ISO image functional.
Delete the instance using the following command.
$ openstack server delete INSTANCE_NAME
After you delete the instance, the system that you just installed
using your ISO image remains, because the
shutdown=preserve
parameter was set. Run the following command.
$ openstack volume list +--------------------------+-------------------------+-----------+------+-------------+ | ID | Display Name | Status | Size | Attached to | +--------------------------+-------------------------+-----------+------+-------------+ | 8edd7c97-1276-47a5-9563- |dc01d873-d0f1-40b6-bfcc- | available | 10 | | | 1025f4264e4f | 26a8d955a1d9-blank-vol | | | | +--------------------------+-------------------------+-----------+------+-------------+
You get a list of all the volumes in your system. In this list, you can find the volume that is attached to your ISO-created instance, with its bootable property set to false.
Upload the volume to glance.
$ openstack image create --volume SOURCE_VOLUME IMAGE_NAME $ openstack image list +-------------------+------------+--------+ | ID | Name | Status | +-------------------+------------+--------+ | 74303284-f802-... | IMAGE_NAME | active | +-------------------+------------+--------+
SOURCE_VOLUME
is the UUID or name of the volume that is attached to
your ISO-created instance, and IMAGE_NAME
is the name that
you give to your new image.
After the image is successfully uploaded, you can use the new image to boot instances.
The instances launched using this image contain the system that you have just installed using the ISO image.
Instances are virtual machines that run inside the cloud on physical compute nodes. The Compute service manages instances. A host is the node on which a group of instances resides.
This section describes how to perform the different tasks involved in instance management, such as adding floating IP addresses, stopping and starting instances, and terminating instances. This section also discusses node management tasks.
Each instance has a private, fixed IP address and can also have a public, or floating IP address. Private IP addresses are used for communication between instances, and public addresses are used for communication with networks outside the cloud, including the Internet.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IP addresses, configured by the cloud administrator, is available in OpenStack Compute. The project quota defines the maximum number of floating IP addresses that you can allocate to the project. After you allocate a floating IP address to a project, you can:
Associate the floating IP address with an instance of the project. Only one floating IP address can be allocated to an instance at any given time.
Disassociate a floating IP address from an instance in the project.
Delete a floating IP address from the project, which automatically deletes that address's associations.
Use the openstack
commands to manage floating IP addresses.
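The operations above correspond to a short command sequence. The sketch below only prints that sequence with placeholder pool, server, and address values; nothing is executed against a cloud:

```shell
# Sketch: print the command sequence for the floating IP lifecycle
# described above. Pool, server, and address values are placeholders;
# nothing is executed against a cloud here.
floating_ip_lifecycle() {
  pool=$1; server=$2; addr=$3
  echo "openstack floating ip create $pool"
  echo "openstack server add floating ip $server $addr"
  echo "openstack server remove floating ip $server $addr"
  echo "openstack floating ip delete $addr"
}

floating_ip_lifecycle public VM1 172.24.4.225
```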
To list all pools that provide floating IP addresses, run:
$ openstack floating ip pool list +--------+ | name | +--------+ | public | | test | +--------+
If this list is empty, the cloud administrator must configure a pool of floating IP addresses.
To list all floating IP addresses that are allocated to the current project, run:
$ openstack floating ip list +--------------------------------------+---------------------+------------------+------+ | ID | Floating IP Address | Fixed IP Address | Port | +--------------------------------------+---------------------+------------------+------+ | 760963b2-779c-4a49-a50d-f073c1ca5b9e | 172.24.4.228 | None | None | | 89532684-13e1-4af3-bd79-f434c9920cc3 | 172.24.4.235 | None | None | | ea3ebc6d-a146-47cd-aaa8-35f06e1e8c3d | 172.24.4.229 | None | None | +--------------------------------------+---------------------+------------------+------+
For each floating IP address that is allocated to the current project, the command outputs the floating IP address, the ID for the instance to which the floating IP address is assigned, the associated fixed IP address, and the pool from which the floating IP address was allocated.
You can assign a floating IP address to a project and to an instance.
Run the following command to allocate a floating IP address to the current project. By default, the floating IP address is allocated from the public pool. The command outputs the allocated IP address:
$ openstack floating ip create public +---------------------+--------------------------------------+ | Field | Value | +---------------------+--------------------------------------+ | created_at | 2016-11-30T15:02:05Z | | description | | | fixed_ip_address | None | | floating_ip_address | 172.24.4.236 | | floating_network_id | 0bf90de6-fc0f-4dba-b80d-96670dfb331a | | headers | | | id | c70ad74b-2f64-4e60-965e-f24fc12b3194 | | port_id | None | | project_id | 5669caad86a04256994cdf755df4d3c1 | | project_id | 5669caad86a04256994cdf755df4d3c1 | | revision_number | 1 | | router_id | None | | status | DOWN | | updated_at | 2016-11-30T15:02:05Z | +---------------------+--------------------------------------+
List all project instances with which a floating IP address could be associated.
$ openstack server list +---------------------+------+---------+------------+-------------+------------------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | +---------------------+------+---------+------------+-------------+------------------+------------+ | d5c854f9-d3e5-4f... | VM1 | ACTIVE | - | Running | private=10.0.0.3 | cirros | | 42290b01-0968-43... | VM2 | SHUTOFF | - | Shutdown | private=10.0.0.4 | centos | +---------------------+------+---------+------------+-------------+------------------+------------+
Associate an IP address with an instance in the project, as follows:
$ openstack server add floating ip INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS
For example:
$ openstack server add floating ip VM1 172.24.4.225
The instance is now associated with two IP addresses:
$ openstack server list +------------------+------+--------+------------+-------------+-------------------------------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | +------------------+------+--------+------------+-------------+-------------------------------+------------+ | d5c854f9-d3e5... | VM1 | ACTIVE | - | Running | private=10.0.0.3, 172.24.4.225| cirros | | 42290b01-0968... | VM2 | SHUTOFF| - | Shutdown | private=10.0.0.4 | centos | +------------------+------+--------+------------+-------------+-------------------------------+------------+
After you associate the IP address and configure security group rules for the instance, the instance is publicly available at the floating IP address.
If an instance is connected to multiple networks, you can associate a
floating IP address with a specific fixed IP address using the optional
--fixed-address
parameter:
$ openstack server add floating ip --fixed-address FIXED_IP_ADDRESS \
    INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS
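The allocate-and-associate steps above can be chained in a script by asking the CLI for just the new address. A sketch of the pattern; `create_fip` and `attach_fip` are stand-ins for `openstack floating ip create public -f value -c floating_ip_address` and `openstack server add floating ip`, so the flow can be exercised offline:

```shell
# Stand-in: prints only the newly allocated address, like
# `openstack floating ip create public -f value -c floating_ip_address`.
create_fip() { echo "172.24.4.236"; }
# Stand-in for `openstack server add floating ip SERVER ADDRESS`.
attach_fip() { echo "attached $2 to $1"; }

FIP=$(create_fip)       # capture the allocated address
attach_fip VM1 "$FIP"   # associate it with the instance
```

Swap the stand-in functions for the real commands on a live cloud.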
To disassociate a floating IP address from an instance:
$ openstack server remove floating ip INSTANCE_NAME_OR_ID FLOATING_IP_ADDRESS
To remove the floating IP address from a project:
$ openstack floating ip delete FLOATING_IP_ADDRESS
The IP address is returned to the pool of IP addresses that is available for all projects. If the IP address is still associated with a running instance, it is automatically disassociated from that instance.
Change the size of a server by changing its flavor.
Show information about your server, including its size, which is shown as the value of the flavor property:
$ openstack server show myCirrosServer +--------------------------------------+----------------------------------------------------------+ | Field | Value | +--------------------------------------+----------------------------------------------------------+ | OS-DCF:diskConfig | AUTO | | OS-EXT-AZ:availability_zone | nova | | OS-EXT-SRV-ATTR:host | node-7.domain.tld | | OS-EXT-SRV-ATTR:hypervisor_hostname | node-7.domain.tld | | OS-EXT-SRV-ATTR:instance_name | instance-000000f3 | | OS-EXT-STS:power_state | 1 | | OS-EXT-STS:task_state | None | | OS-EXT-STS:vm_state | active | | OS-SRV-USG:launched_at | 2016-10-26T01:13:15.000000 | | OS-SRV-USG:terminated_at | None | | accessIPv4 | | | accessIPv6 | | | addresses | admin_internal_net=192.168.111.139 | | config_drive | True | | created | 2016-10-26T01:12:38Z | | flavor | m1.small (2) | | hostId | d815539ce1a8fad3d597c3438c13f1229d3a2ed66d1a75447845a2f3 | | id | 67bc9a9a-5928-47c4-852c-3631fef2a7e8 | | image | cirros-test (dc5ec4b8-5851-4be8-98aa-df7a9b8f538f) | | key_name | None | | name | myCirrosServer | | os-extended-volumes:volumes_attached | [] | | progress | 0 | | project_id | c08367f25666480f9860c6a0122dfcc4 | | properties | | | security_groups | [{u'name': u'default'}] | | status | ACTIVE | | updated | 2016-10-26T01:13:00Z | | user_id | 0209430e30924bf9b5d8869990234e44 | +--------------------------------------+----------------------------------------------------------+
The size (flavor) of the server is m1.small (2).
List the available flavors with the following command:
$ openstack flavor list
+-----+-----------+-------+------+-----------+-------+-----------+
| ID  | Name      |   RAM | Disk | Ephemeral | VCPUs | Is_Public |
+-----+-----------+-------+------+-----------+-------+-----------+
| 1   | m1.tiny   |   512 |    1 |         0 |     1 | True      |
| 2   | m1.small  |  2048 |   20 |         0 |     1 | True      |
| 3   | m1.medium |  4096 |   40 |         0 |     2 | True      |
| 4   | m1.large  |  8192 |   80 |         0 |     4 | True      |
| 5   | m1.xlarge | 16384 |  160 |         0 |     8 | True      |
+-----+-----------+-------+------+-----------+-------+-----------+
To resize the server, use the openstack server resize command and add the server ID or name and the new flavor. For example:
$ openstack server resize --flavor 4 myCirrosServer
By default, the openstack server resize command gives the guest operating system a chance to perform a controlled shutdown before the instance is powered off and the instance is resized.
The shutdown behavior is configured by the shutdown_timeout parameter, which can be set in the nova.conf file. Its value is the overall period (in seconds) that a guest operating system is allowed to complete the shutdown. The default timeout is 60 seconds. See Description of Compute configuration options for details.
The timeout value can be overridden on a per-image basis by means of os_shutdown_timeout, an image metadata setting that allows different types of operating systems to specify how much time they need to shut down cleanly.
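The precedence described above, where the image metadata value wins over the nova.conf default, can be summarized in a small helper; the numbers are illustrative:

```shell
# Return the effective shutdown timeout: the image's os_shutdown_timeout
# if set, otherwise the nova.conf shutdown_timeout (default 60).
effective_timeout() {
    image_timeout="$1"      # empty string means the image sets no override
    conf_timeout="${2:-60}" # nova.conf shutdown_timeout, default 60
    if [ -n "$image_timeout" ]; then
        echo "$image_timeout"
    else
        echo "$conf_timeout"
    fi
}

effective_timeout "" 60     # no image override: 60
effective_timeout 120 60    # image asks for more time: 120
```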
Show the status for your server.
$ openstack server list
+----------------------+----------------+--------+------------------------------------+
| ID                   | Name           | Status | Networks                           |
+----------------------+----------------+--------+------------------------------------+
| 67bc9a9a-5928-47c... | myCirrosServer | RESIZE | admin_internal_net=192.168.111.139 |
+----------------------+----------------+--------+------------------------------------+
When the resize completes, the status becomes VERIFY_RESIZE.
Confirm the resize. For example:
$ openstack server resize --confirm 67bc9a9a-5928-47c4-852c-3631fef2a7e8
The server status becomes ACTIVE.
If the resize fails or does not work as expected, you can revert the resize. For example:
$ openstack server resize --revert 67bc9a9a-5928-47c4-852c-3631fef2a7e8
The server status becomes ACTIVE.
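In automation, the RESIZE to VERIFY_RESIZE transition is usually handled with a wait loop. A sketch; `poll` is a stand-in for querying `openstack server show SERVER -f value -c status` and here replays a canned sequence so the loop can be exercised without a cloud:

```shell
# Stand-in poll: returns RESIZE twice, then VERIFY_RESIZE.
i=0
poll() {
    i=$((i + 1))
    if [ "$i" -lt 3 ]; then status=RESIZE; else status=VERIFY_RESIZE; fi
}

poll
while [ "$status" = "RESIZE" ]; do
    # On a real cloud, wait between polls, e.g. `sleep 5`.
    poll
done
echo "final status: $status"
```

Once the loop exits you would run `openstack server resize --confirm` (or `--revert`) as shown above.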
Use one of the following methods to stop and start an instance.
To pause an instance, run the following command:
$ openstack server pause INSTANCE_NAME
This command stores the state of the VM in RAM. A paused instance continues to run in a frozen state.
To unpause an instance, run the following command:
$ openstack server unpause INSTANCE_NAME
To initiate a hypervisor-level suspend operation, run the following command:
$ openstack server suspend INSTANCE_NAME
To resume a suspended instance, run the following command:
$ openstack server resume INSTANCE_NAME
Shelving is useful if you have an instance that you are not using, but would like to retain in your list of servers. For example, you can stop an instance at the end of a work week, and resume work again at the start of the next week. All associated data and resources are kept; however, anything still in memory is not retained. If a shelved instance is no longer needed, it can also be entirely removed.
You can run the following shelving tasks:
Shelve an instance - Shuts down the instance, and stores it together with associated data and resources (a snapshot is taken if not volume backed). Anything in memory is lost.
$ openstack server shelve SERVERNAME
By default, the openstack server shelve command gives the guest operating system a chance to perform a controlled shutdown before the instance is powered off. The shutdown behavior is configured by the shutdown_timeout parameter, which can be set in the nova.conf file. Its value is the overall period (in seconds) that a guest operating system is allowed to complete the shutdown. The default timeout is 60 seconds. See Description of Compute configuration options for details.
The timeout value can be overridden on a per-image basis by means of os_shutdown_timeout, an image metadata setting that allows different types of operating systems to specify how much time they need to shut down cleanly.
Unshelve an instance - Restores the instance.
$ openstack server unshelve SERVERNAME
Remove a shelved instance - Removes the instance from the server; data and resource associations are deleted. If an instance is no longer needed, you can move the instance off the hypervisor in order to minimize resource usage.
$ nova shelve-offload SERVERNAME
You can search for an instance using the IP address parameter, --ip, with the openstack server list command.
$ openstack server list --ip IP_ADDRESS
The following example shows the results of a search on 10.0.0.4.
$ openstack server list --ip 10.0.0.4 +------------------+----------------------+--------+------------+-------------+------------------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | +------------------+----------------------+--------+------------+-------------+------------------+------------+ | 8a99547e-7385... | myInstanceFromVolume | ACTIVE | None | Running | private=10.0.0.4 | cirros | +------------------+----------------------+--------+------------+-------------+------------------+------------+
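The same filtering can be done offline on saved output, which is handy in scripts; `servers` below is a hypothetical two-column dump (name, networks) standing in for `openstack server list -f value -c Name -c Networks`:

```shell
# Stand-in for saved `openstack server list -f value` output.
servers='myCirrosServer private=10.0.0.3
myInstanceFromVolume private=10.0.0.4'

# Print the names of servers whose Networks column ends in 10.0.0.4.
printf '%s\n' "$servers" | awk '$2 ~ /10\.0\.0\.4$/ { print $1 }'
```

Anchoring the pattern with `$` avoids matching longer addresses such as 10.0.0.40.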
You can soft or hard reboot a running instance. A soft reboot attempts a graceful shut down and restart of the instance. A hard reboot power cycles the instance.
By default, when you reboot an instance, it is a soft reboot.
$ openstack server reboot SERVER
To perform a hard reboot, pass the --hard parameter, as follows:
$ openstack server reboot --hard SERVER
It is also possible to reboot a running instance into rescue mode. For example, this operation may be required if a file system of an instance becomes corrupted with prolonged use.
Pause, suspend, and stop operations are not allowed when an instance is running in rescue mode, as triggering these actions causes the loss of the original instance state, and makes it impossible to unrescue the instance.
Rescue mode provides a mechanism for access, even if an image renders the instance inaccessible. By default, it starts the instance from the initial image, attaching the current boot disk as a secondary one.
To perform an instance reboot into rescue mode, run the following command:
$ openstack server rescue SERVER
On running the openstack server rescue command, an instance performs a soft shutdown first. This means that the guest operating system has a chance to perform a controlled shutdown before the instance is powered off. The shutdown behavior is configured by the shutdown_timeout parameter, which can be set in the nova.conf file. Its value is the overall period (in seconds) that a guest operating system is allowed to complete the shutdown. The default timeout is 60 seconds. See Description of Compute configuration options for details.
The timeout value can be overridden on a per-image basis by means of os_shutdown_timeout, an image metadata setting that allows different types of operating systems to specify how much time they need to shut down cleanly.
To restart the instance from the normal boot disk, run the following command:
$ openstack server unrescue SERVER
If you want to rescue an instance with a specific image, rather than the default one, use the --rescue_image_ref parameter:
$ nova rescue --rescue_image_ref IMAGE_ID SERVER
When you no longer need an instance, you can delete it.
List all instances:
$ openstack server list +-------------+----------------------+--------+------------+-------------+------------------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | +-------------+----------------------+--------+------------+-------------+------------------+------------+ | 84c6e57d... | myCirrosServer | ACTIVE | None | Running | private=10.0.0.3 | cirros | | 8a99547e... | myInstanceFromVolume | ACTIVE | None | Running | private=10.0.0.4 | ubuntu | | d7efd3e4... | newServer | ERROR | None | NOSTATE | | centos | +-------------+----------------------+--------+------------+-------------+------------------+------------+
Run the openstack server delete command to delete the instance. The following example shows deletion of the newServer instance, which is in ERROR state:
$ openstack server delete newServer
The command does not return any output when the server is deleted. To verify that the server was deleted, run the openstack server list command:
$ openstack server list +-------------+----------------------+--------+------------+-------------+------------------+------------+ | ID | Name | Status | Task State | Power State | Networks | Image Name | +-------------+----------------------+--------+------------+-------------+------------------+------------+ | 84c6e57d... | myCirrosServer | ACTIVE | None | Running | private=10.0.0.3 | cirros | | 8a99547e... | myInstanceFromVolume | ACTIVE | None | Running | private=10.0.0.4 | ubuntu | +-------------+----------------------+--------+------------+-------------+------------------+------------+
The deleted instance does not appear in the list.
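Because the delete command itself prints nothing, a script has to confirm success by checking the listing. A sketch; `list_servers` is a stand-in for `openstack server list -f value -c Name` run after the deletion above:

```shell
# Stand-in for the post-deletion server listing (names only).
list_servers() {
    printf 'myCirrosServer\nmyInstanceFromVolume\n'
}

# grep -x matches the whole line, so myNewServer would not match newServer.
if list_servers | grep -qx 'newServer'; then
    msg="newServer still present"
else
    msg="newServer deleted"
fi
echo "$msg"
```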
VNC or SPICE is used to view the console output of an instance, regardless of whether or not the console log has output. This allows relaying keyboard and mouse activity to and from an instance.
There are three remote console access methods commonly used with OpenStack:
An in-browser VNC client implemented using HTML5 Canvas and WebSockets
A complete in-browser client solution for interaction with virtualized instances
Example:
To access an instance through a remote console, run the following command:
$ openstack console url show INSTANCE_NAME --novnc
The command returns a URL from which you can access your instance:
+--------+------------------------------------------------------------------------------+ | Type | Url | +--------+------------------------------------------------------------------------------+ | novnc | http://192.168.5.96:6081/console?token=c83ae3a3-15c4-4890-8d45-aefb494a8d6c | +--------+------------------------------------------------------------------------------+
When using SPICE to view the console of an instance, a browser plugin can be used directly on the instance page, or the openstack console url show command can be used to return a token-authenticated address, as in the example above.
For further information and comparisons (including security considerations), see the Security Guide.
The bare-metal driver for OpenStack Compute manages provisioning of physical hardware by using common cloud APIs and tools such as Orchestration (Heat). The use case for this driver is for single project clouds such as a high-performance computing cluster, or for deploying OpenStack itself.
If you use the bare-metal driver, you must create a network interface and add it to a bare-metal node. Then, you can launch an instance from a bare-metal image.
You can list and delete bare-metal nodes. When you delete a node, any associated network interfaces are removed. You can list and remove network interfaces that are associated with a bare-metal node.
The following commands can be used to manage bare-metal nodes.
baremetal-interface-add
Adds a network interface to a bare-metal node.
baremetal-interface-list
Lists network interfaces associated with a bare-metal node.
baremetal-interface-remove
Removes a network interface from a bare-metal node.
baremetal-node-create
Creates a bare-metal node.
baremetal-node-delete
Removes a bare-metal node and any associated interfaces.
baremetal-node-list
Lists available bare-metal nodes.
baremetal-node-show
Shows information about a bare-metal node.
When you create a bare-metal node, your PM address, user name, and password should match the information in your hardware's BIOS/IPMI configuration.
$ nova baremetal-node-create --pm_address PM_ADDRESS --pm_user PM_USERNAME \
    --pm_password PM_PASSWORD $(hostname -f) 1 512 10 aa:bb:cc:dd:ee:ff
The following example shows the command and results from creating a node with the PM address 1.2.3.4, the PM user name ipmi, and the password ipmi.
$ nova baremetal-node-create --pm_address 1.2.3.4 --pm_user ipmi \ --pm_password ipmi $(hostname -f) 1 512 10 aa:bb:cc:dd:ee:ff +------------------+-------------------+ | Property | Value | +------------------+-------------------+ | instance_uuid | None | | pm_address | 1.2.3.4 | | interfaces | [] | | prov_vlan_id | None | | cpus | 1 | | memory_mb | 512 | | prov_mac_address | aa:bb:cc:dd:ee:ff | | service_host | ubuntu | | local_gb | 10 | | id | 1 | | pm_user | ipmi | | terminal_port | None | +------------------+-------------------+
For each NIC on the node, you must create an interface, specifying the interface's MAC address.
$ nova baremetal-interface-add 1 aa:bb:cc:dd:ee:ff +-------------+-------------------+ | Property | Value | +-------------+-------------------+ | datapath_id | 0 | | id | 1 | | port_no | 0 | | address | aa:bb:cc:dd:ee:ff | +-------------+-------------------+
A bare-metal instance is an instance created directly on a physical machine, without any virtualization layer running underneath it. Nova retains power control via IPMI. In some situations, Nova may retain network control via Neutron and OpenFlow.
$ openstack server create --image my-baremetal-image --flavor \ my-baremetal-flavor test +-----------------------------+--------------------------------------+ | Property | Value | +-----------------------------+--------------------------------------+ | status | BUILD | | id | cc302a8f-cd81-484b-89a8-b75eb3911b1b | +-----------------------------+--------------------------------------+ ... wait for instance to become active ...
Set the --availability-zone parameter to specify which zone or node to use to start the server. Separate the zone from the host name with a comma. For example:
$ openstack server create --availability-zone zone:HOST,NODE
HOST is optional for the --availability-zone parameter. You can simply specify zone:,node, still including the comma.
Use the nova baremetal-node-list command to view all bare-metal nodes and interfaces. When a node is in use, its status includes the UUID of the instance that runs on it:
$ nova baremetal-node-list +----+--------+------+-----------+---------+-------------------+------+------------+-------------+-------------+---------------+ | ID | Host | CPUs | Memory_MB | Disk_GB | MAC Address | VLAN | PM Address | PM Username | PM Password | Terminal Port | +----+--------+------+-----------+---------+-------------------+------+------------+-------------+-------------+---------------+ | 1 | ubuntu | 1 | 512 | 10 | aa:bb:cc:dd:ee:ff | None | 1.2.3.4 | ipmi | | None | +----+--------+------+-----------+---------+-------------------+------+------------+-------------+-------------+---------------+
Use the nova baremetal-node-show command to view the details for a bare-metal node:
$ nova baremetal-node-show 1 +------------------+--------------------------------------+ | Property | Value | +------------------+--------------------------------------+ | instance_uuid | cc302a8f-cd81-484b-89a8-b75eb3911b1b | | pm_address | 1.2.3.4 | | interfaces | [{u'datapath_id': u'0', u'id': 1, | | | u'port_no': 0, | | | u'address': u'aa:bb:cc:dd:ee:ff'}] | | prov_vlan_id | None | | cpus | 1 | | memory_mb | 512 | | prov_mac_address | aa:bb:cc:dd:ee:ff | | service_host | ubuntu | | local_gb | 10 | | id | 1 | | pm_user | ipmi | | terminal_port | None | +------------------+--------------------------------------+
A user data file is a special key in the metadata service that holds a file that cloud-aware applications in the guest instance can access. For example, one application that uses user data is the cloud-init system, which is an open-source package from Ubuntu that is available on various Linux distributions and which handles early initialization of a cloud instance.
You can place user data in a local file and pass it through the --user-data <user-data-file> parameter at instance creation.
$ openstack server create --image ubuntu-cloudimage --flavor 1 \
    --user-data mydata.file VM_INSTANCE
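For example, mydata.file could be a minimal cloud-config document for cloud-init; the package name and command below are purely illustrative, not required values:

```shell
# Write an example cloud-config user data file. The '#cloud-config'
# header on the first line is how cloud-init recognizes the format.
cat > mydata.file <<'EOF'
#cloud-config
packages:
  - htop
runcmd:
  - echo "booted at $(date)" >> /var/log/firstboot.log
EOF
head -n 1 mydata.file
```

The single-quoted heredoc delimiter keeps `$(date)` from being expanded when the file is written; cloud-init runs it in the guest at first boot.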
To use snapshots to migrate instances from OpenStack projects to clouds, complete these steps.
In the source project:
In the destination project:
Some cloud providers allow only administrators to perform this task.
Shut down the source VM before you take the snapshot to ensure that all data is flushed to disk. If necessary, list the instances to view the instance name:
$ openstack server list +--------------------------------------+------------+--------+------------------------------+------------+ | ID | Name | Status | Networks | Image Name | +--------------------------------------+------------+--------+------------------------------+------------+ | c41f3074-c82a-4837-8673-fa7e9fea7e11 | myInstance | ACTIVE | private=10.0.0.3 | cirros | +--------------------------------------+------------+--------+------------------------------+------------+
Use the openstack server stop command to shut down the instance:
$ openstack server stop myInstance
Use the openstack server list command to confirm that the instance shows a SHUTOFF status:
$ openstack server list +--------------------------------------+------------+---------+------------------+------------+ | ID | Name | Status | Networks | Image Name | +--------------------------------------+------------+---------+------------------+------------+ | c41f3074-c82a-4837-8673-fa7e9fea7e11 | myInstance | SHUTOFF | private=10.0.0.3 | cirros | +--------------------------------------+------------+---------+------------------+------------+
Use the nova image-create command to take a snapshot:
$ nova image-create --poll myInstance myInstanceSnapshot
Instance snapshotting... 50% complete
Use the openstack image list command to check the status until the status is active:
$ openstack image list +--------------------------------------+---------------------------------+--------+ | ID | Name | Status | +--------------------------------------+---------------------------------+--------+ | 657ebb01-6fae-47dc-986a-e49c4dd8c433 | cirros-0.3.2-x86_64-uec | active | | 72074c6d-bf52-4a56-a61c-02a17bf3819b | cirros-0.3.2-x86_64-uec-kernel | active | | 3c5e5f06-637b-413e-90f6-ca7ed015ec9e | cirros-0.3.2-x86_64-uec-ramdisk | active | | f30b204e-1ce6-40e7-b8d9-b353d4d84e7d | myInstanceSnapshot | active | +--------------------------------------+---------------------------------+--------+
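Rather than re-running the list by hand, a script can poll for the active status. A sketch; the list below replays a typical queued, saving, active sequence in place of repeated `openstack image show myInstanceSnapshot -f value -c status` calls:

```shell
# Stand-in status sequence; on a real cloud each iteration would
# query the image status instead.
img_status=""
for s in queued saving active; do
    img_status="$s"
    # sleep 5   # poll interval on a real cloud
    if [ "$img_status" = "active" ]; then
        break
    fi
done
echo "image status: $img_status"
```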
Get the image ID:
$ openstack image list +-------------------+-------------------+--------+ | ID | Name | Status | +-------------------+-------------------+--------+ | f30b204e-1ce6... | myInstanceSnapshot| active | +-------------------+-------------------+--------+
Download the snapshot by using the image ID that was returned in the previous step:
$ glance image-download --file snapshot.raw f30b204e-1ce6-40e7-b8d9-b353d4d84e7d
The glance image-download command requires the image ID and cannot use the image name.
Check that there is sufficient space on the destination file system for the image file.
Make the image available to the new environment, either through HTTP or direct upload to a machine (scp).
In the new project or cloud environment, import the snapshot:
$ glance --os-image-api-version 1 image-create \
    --container-format bare --disk-format qcow2 --copy-from IMAGE_URL
In the new project or cloud environment, use the snapshot to create the new instance:
$ openstack server create --flavor m1.tiny --image myInstanceSnapshot myNewInstance
You can configure OpenStack to write metadata to a special configuration drive that attaches to the instance when it boots. The instance can mount this drive and read files from it to get information that is normally available through the metadata service. This metadata is different from the user data.
One use case for using the configuration drive is to pass a networking configuration when you do not use DHCP to assign IP addresses to instances. For example, you might pass the IP address configuration for the instance through the configuration drive, which the instance can mount and access before you configure the network settings for the instance.
Any modern guest operating system that is capable of mounting an ISO 9660 or VFAT file system can use the configuration drive.
To use the configuration drive, you must follow the following requirements for the compute host and image.
Compute host requirements
The following hypervisors support the configuration drive: libvirt, XenServer, Hyper-V, and VMware.
Also, the Bare Metal service supports the configuration drive.
To use configuration drive with libvirt, XenServer, or VMware, you must first install the genisoimage package on each compute host. Otherwise, instances do not boot properly.
Use the mkisofs_cmd flag to set the path where you install the genisoimage program. If genisoimage is in the same path as the nova-compute service, you do not need to set this flag.
To use configuration drive with Hyper-V, you must set the mkisofs_cmd value to the full path to an mkisofs.exe installation. Additionally, you must set the qemu_img_cmd value in the hyperv configuration section to the full path to a qemu-img command installation.
To use configuration drive with the Bare Metal service, you do not need to prepare anything because the Bare Metal service treats the configuration drive properly.
Image requirements
An image built with a recent version of the cloud-init package can automatically access metadata passed through the configuration drive. The cloud-init package version 0.7.1 works with Ubuntu, Fedora based images (such as Red Hat Enterprise Linux) and openSUSE based images (such as SUSE Linux Enterprise Server).
If an image does not have the cloud-init package installed, you must customize the image to run a script that mounts the configuration drive on boot, reads the data from the drive, and takes appropriate action such as adding the public key to an account. You can read more details about how data is organized on the configuration drive.
If you use Xen with a configuration drive, use the xenapi_disable_agent configuration parameter to disable the agent.
Guidelines
Do not rely on the presence of the EC2 metadata in the configuration drive, because this content might be removed in a future release. For example, do not rely on files in the ec2 directory.
When you create images that access configuration drive data and multiple directories are under the openstack directory, always select the highest API version by date that your consumer supports. For example, if your guest image supports the 2012-03-05, 2012-08-05, and 2013-04-13 versions, try 2013-04-13 first and fall back to a previous version if 2013-04-13 is not present.
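The dated directory names sort lexicographically as dates, so this selection rule can be sketched as a simple reverse sort; the version list mirrors the example in the text:

```shell
# Candidate version directories under openstack/ on the drive.
versions='2012-03-05
2012-08-05
2013-04-13'

# ISO-style dates sort correctly as plain strings, so the newest
# version is simply the highest entry.
newest=$(printf '%s\n' "$versions" | sort -r | head -n 1)
echo "use $newest"
```

A real consumer would additionally intersect this list with the versions it understands before picking the highest.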
To enable the configuration drive, pass the --config-drive true parameter to the openstack server create command.
The following example enables the configuration drive and passes user data, two files, and two key/value metadata pairs, all of which are accessible from the configuration drive:
$ openstack server create --config-drive true --image my-image-name \
    --flavor 1 --key-name mykey --user-data ./my-user-data.txt \
    --file /etc/network/interfaces=/home/myuser/instance-interfaces \
    --file known_hosts=/home/myuser/.ssh/known_hosts \
    --property role=webservers --property essential=false MYINSTANCE
You can also configure the Compute service to always create a configuration drive by setting the following option in the /etc/nova/nova.conf file:
force_config_drive = true
If a user passes the --config-drive true flag to the nova boot command, an administrator cannot disable the configuration drive.
If your guest operating system supports accessing disk by label, you can mount the configuration drive as the /dev/disk/by-label/configurationDriveVolumeLabel device. In the following example, the configuration drive has the config-2 volume label:
# mkdir -p /mnt/config
# mount /dev/disk/by-label/config-2 /mnt/config
Ensure that you use at least version 0.3.1 of CirrOS for configuration drive support.
If your guest operating system does not use udev, the /dev/disk/by-label directory is not present. You can use the blkid command to identify the block device that corresponds to the configuration drive. For example, when you boot the CirrOS image with the m1.tiny flavor, the device is /dev/vdb:
# blkid -t LABEL="config-2" -odevice
/dev/vdb
Once identified, you can mount the device:
# mkdir -p /mnt/config
# mount /dev/vdb /mnt/config
In this example, the contents of the configuration drive are as follows:
ec2/2009-04-04/meta-data.json ec2/2009-04-04/user-data ec2/latest/meta-data.json ec2/latest/user-data openstack/2012-08-10/meta_data.json openstack/2012-08-10/user_data openstack/content openstack/content/0000 openstack/content/0001 openstack/latest/meta_data.json openstack/latest/user_data
The files that appear on the configuration drive depend on the arguments that you pass to the openstack server create command.
The following example shows the contents of the openstack/2012-08-10/meta_data.json and openstack/latest/meta_data.json files. These files are identical. The file contents are formatted for readability.
{
"availability_zone": "nova",
"files": [
{
"content_path": "/content/0000",
"path": "/etc/network/interfaces"
},
{
"content_path": "/content/0001",
"path": "known_hosts"
}
],
"hostname": "test.novalocal",
"launch_index": 0,
"name": "test",
"meta": {
"role": "webservers",
"essential": "false"
},
"public_keys": {
"mykey": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDBqUfVvCSez0/Wfpd8dLLgZXV9GtXQ7hnMN+Z0OWQUyebVEHey1CXuin0uY1cAJMhUq8j98SiW+cU0sU4J3x5l2+xi1bodDm1BtFWVeLIOQINpfV1n8fKjHB+ynPpe1F6tMDvrFGUlJs44t30BrujMXBe8Rq44cCk6wqyjATA3rQ== Generated by Nova\n"
},
"uuid": "83679162-1378-4288-a2d4-70e13ec132aa"
}
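As a sketch, a guest-side script can pull single fields out of this file with standard tools; the sed pattern below is brittle and a real deployment should prefer a proper JSON parser. The heredoc carries a trimmed, hypothetical copy of the example metadata; on a mounted drive you would read /mnt/config/openstack/latest/meta_data.json instead:

```shell
# Trimmed stand-in for meta_data.json read from the mounted drive.
meta=$(cat <<'EOF'
{"availability_zone": "nova", "hostname": "test.novalocal", "launch_index": 0, "name": "test"}
EOF
)

# Extract the "hostname" field; assumes the simple key-value layout above.
host_name=$(printf '%s' "$meta" | sed -n 's/.*"hostname": "\([^"]*\)".*/\1/p')
echo "hostname: $host_name"
```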
Note the effect of the --file /etc/network/interfaces=/home/myuser/instance-interfaces argument that was passed to the openstack server create command. The contents of this file are contained in the openstack/content/0000 file on the configuration drive, and the path is specified as /etc/network/interfaces in the meta_data.json file.
The following example shows the contents of the ec2/2009-04-04/meta-data.json and the ec2/latest/meta-data.json files. These files are identical. The file contents are formatted to improve readability.
{
"ami-id": "ami-00000001",
"ami-launch-index": 0,
"ami-manifest-path": "FIXME",
"block-device-mapping": {
"ami": "sda1",
"ephemeral0": "sda2",
"root": "/dev/sda1",
"swap": "sda3"
},
"hostname": "test.novalocal",
"instance-action": "none",
"instance-id": "i-00000001",
"instance-type": "m1.tiny",
"kernel-id": "aki-00000002",
"local-hostname": "test.novalocal",
"local-ipv4": null,
"placement": {
"availability-zone": "nova"
},
"public-hostname": "test.novalocal",
"public-ipv4": "",
"public-keys": {
"0": {
"openssh-key": "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAAgQDBqUfVvCSez0/Wfpd8dLLgZXV9GtXQ7hnMN+Z0OWQUyebVEHey1CXuin0uY1cAJMhUq8j98SiW+cU0sU4J3x5l2+xi1bodDm1BtFWVeLIOQINpfV1n8fKjHB+ynPpe1F6tMDvrFGUlJs44t30BrujMXBe8Rq44cCk6wqyjATA3rQ== Generated by Nova\n"
}
},
"ramdisk-id": "ari-00000003",
"reservation-id": "r-7lfps8wj",
"security-groups": [
"default"
]
}
The openstack/2012-08-10/user_data, openstack/latest/user_data, ec2/2009-04-04/user-data, and ec2/latest/user-data files are present only if the --user-data flag and the contents of the user data file are passed to the openstack server create command.
The default format of the configuration drive is an ISO 9660 file system. To explicitly specify the ISO 9660 format, add the following line to the /etc/nova/nova.conf file:
config_drive_format=iso9660
By default, you cannot attach the configuration drive image as a CD drive instead of as a disk drive. To attach a CD drive, add the following line to the /etc/nova/nova.conf file:
config_drive_cdrom=true
For legacy reasons, you can configure the configuration drive to use VFAT format instead of ISO 9660. It is unlikely that you would require VFAT format because ISO 9660 is widely supported across operating systems. However, to use the VFAT format, add the following line to the /etc/nova/nova.conf file:
config_drive_format=vfat
If you choose VFAT, the configuration drive is 64 MB.
In the current version (Liberty) of OpenStack Compute, live migration with
config_drive
on local disk is forbidden because of a libvirt bug in
copying a read-only disk. However, if you use VFAT as the format of
config_drive
, live migration works correctly.
Before you run commands, set environment variables using the OpenStack RC file.
List the extensions of the system:
$ openstack extension list -c Alias -c Name --network +------------------------------------------+---------------------------+ | Name | Alias | +------------------------------------------+---------------------------+ | Default Subnetpools | default-subnetpools | | Network IP Availability | network-ip-availability | | Auto Allocated Topology Services | auto-allocated-topology | | Neutron L3 Configurable external gateway | ext-gw-mode | | Address scope | address-scope | | Neutron Extra Route | extraroute | +------------------------------------------+---------------------------+
Create a network:
$ openstack network create net1 Created a new network: +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | | | created_at | 2016-12-21T08:32:54Z | | description | | | headers | | | id | 180620e3-9eae-4ba7-9739-c5847966e1f0 | | ipv4_address_scope | None | | ipv6_address_scope | None | | mtu | 1450 | | name | net1 | | port_security_enabled | True | | project_id | c961a8f6d3654657885226378ade8220 | | provider:network_type | vxlan | | provider:physical_network | None | | provider:segmentation_id | 14 | | revision_number | 3 | | router:external | Internal | | shared | False | | status | ACTIVE | | subnets | | | tags | [] | | updated_at | 2016-12-21T08:32:54Z | +---------------------------+--------------------------------------+
Some fields of the created network are invisible to non-admin users.
Create a network with a specified provider network type:
$ openstack network create net2 --provider-network-type vxlan Created a new network: +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | | | created_at | 2016-12-21T08:33:34Z | | description | | | headers | | | id | c0a563d5-ef7d-46b3-b30d-6b9d4138b6cf | | ipv4_address_scope | None | | ipv6_address_scope | None | | mtu | 1450 | | name | net2 | | port_security_enabled | True | | project_id | c961a8f6d3654657885226378ade8220 | | provider:network_type | vxlan | | provider:physical_network | None | | provider:segmentation_id | 87 | | revision_number | 3 | | router:external | Internal | | shared | False | | status | ACTIVE | | subnets | | | tags | [] | | updated_at | 2016-12-21T08:33:34Z | +---------------------------+--------------------------------------+
Create a subnet:
$ openstack subnet create subnet1 --network net1 --subnet-range 192.168.2.0/24 +-------------------+--------------------------------------+ | Field | Value | +-------------------+--------------------------------------+ | allocation_pools | 192.168.2.2-192.168.2.254 | | cidr | 192.168.2.0/24 | | created_at | 2016-12-22T18:47:52Z | | description | | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | 192.168.2.1 | | headers | | | host_routes | | | id | a394689c-f547-4834-9778-3e0bb22130dc | | ip_version | 4 | | ipv6_address_mode | None | | ipv6_ra_mode | None | | name | subnet1 | | network_id | 9db55b7f-e803-4e1b-9bba-6262f60b96cb | | project_id | e17431afc0524e0690484889a04b7fa0 | | revision_number | 2 | | service_types | | | subnetpool_id | None | | updated_at | 2016-12-22T18:47:52Z | +-------------------+--------------------------------------+
The openstack subnet create
command has the following positional and optional
parameters:
The subnet name, which is a positional argument.
In this example, subnet1
specifies the name of the subnet.
The name or ID of the network to which the subnet belongs.
In this example, --network net1
specifies the network name.
The CIDR of the subnet.
In this example, --subnet-range 192.168.2.0/24
specifies the CIDR.
For information and examples on more advanced use of neutron's
subnet
subcommand, see the OpenStack Administrator
Guide.
Create a router:
$ openstack router create router1 +-------------------------+--------------------------------------+ | Field | Value | +-------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | | | created_at | 2016-12-22T18:48:57Z | | description | | | distributed | True | | external_gateway_info | null | | flavor_id | None | | ha | False | | headers | | | id | e25a24ee-3458-45c7-b16e-edf49092aab7 | | name | router1 | | project_id | e17431afc0524e0690484889a04b7fa0 | | revision_number | 1 | | routes | | | status | ACTIVE | | updated_at | 2016-12-22T18:48:57Z | +-------------------------+--------------------------------------+
Take note of the unique router identifier returned; you will need it in subsequent steps.
Link the router to the external provider network:
$ openstack router set ROUTER --external-gateway NETWORK
Replace ROUTER with the unique identifier of the router, and replace NETWORK with the unique identifier of the external provider network.
Link the router to the subnet:
$ openstack router add subnet ROUTER SUBNET
Replace ROUTER with the unique identifier of the router, and replace SUBNET with the unique identifier of the subnet.
Create a port with specified IP address:
$ openstack port create --network net1 --fixed-ip subnet=subnet1,ip-address=192.168.2.40 port1 +-----------------------+-----------------------------------------+ | Field | Value | +-----------------------+-----------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2016-12-22T18:54:43Z | | description | | | device_id | | | device_owner | | | extra_dhcp_opts | | | fixed_ips | ip_address='192.168.2.40', subnet_id='a | | | 394689c-f547-4834-9778-3e0bb22130dc' | | headers | | | id | 031ddba8-3e3f-4c3c-ae26-7776905eb24f | | mac_address | fa:16:3e:df:3d:c7 | | name | port1 | | network_id | 9db55b7f-e803-4e1b-9bba-6262f60b96cb | | port_security_enabled | True | | project_id | e17431afc0524e0690484889a04b7fa0 | | revision_number | 5 | | security_groups | 84abb9eb-dc59-40c1-802c-4e173c345b6a | | status | DOWN | | updated_at | 2016-12-22T18:54:44Z | +-----------------------+-----------------------------------------+
In the previous command, port1
is the port name, which is a
positional argument, and --network net1
specifies the network. --fixed-ip subnet=subnet1,ip-address=192.168.2.40
is
an option that specifies the desired fixed IP address for the port.
When creating a port, you can specify any unallocated IP in the subnet even if the address is not in a pre-defined pool of allocated IP addresses (set by your cloud provider).
Create a port without specified IP address:
$ openstack port create port2 --network net1 +-----------------------+-----------------------------------------+ | Field | Value | +-----------------------+-----------------------------------------+ | admin_state_up | UP | | allowed_address_pairs | | | binding_host_id | | | binding_profile | | | binding_vif_details | | | binding_vif_type | unbound | | binding_vnic_type | normal | | created_at | 2016-12-22T18:56:06Z | | description | | | device_id | | | device_owner | | | extra_dhcp_opts | | | fixed_ips | ip_address='192.168.2.10', subnet_id='a | | | 394689c-f547-4834-9778-3e0bb22130dc' | | headers | | | id | eac47fcd-07ac-42dd-9993-5b36ac1f201b | | mac_address | fa:16:3e:96:ae:6e | | name | port2 | | network_id | 9db55b7f-e803-4e1b-9bba-6262f60b96cb | | port_security_enabled | True | | project_id | e17431afc0524e0690484889a04b7fa0 | | revision_number | 5 | | security_groups | 84abb9eb-dc59-40c1-802c-4e173c345b6a | | status | DOWN | | updated_at | 2016-12-22T18:56:06Z | +-----------------------+-----------------------------------------+
Note that the system allocates one IP address if you do not specify
an IP address in the openstack port create
command.
You can specify a MAC address with --mac-address MAC_ADDRESS
.
If you specify an invalid MAC address, including 00:00:00:00:00:00
or ff:ff:ff:ff:ff:ff
, you will get an error.
Query ports with specified fixed IP addresses:
$ neutron port-list --fixed-ips ip_address=192.168.2.2 \ ip_address=192.168.2.40 +----------------+------+-------------------+-------------------------------------------------+ | id | name | mac_address | fixed_ips | +----------------+------+-------------------+-------------------------------------------------+ | baf13412-26... | | fa:16:3e:f6:ec:c7 | {"subnet_id"... ..."ip_address": "192.168.2.2"} | | f7a08fe4-e7... | | fa:16:3e:97:e0:fc | {"subnet_id"... ..."ip_address": "192.168.2.40"}| +----------------+------+-------------------+-------------------------------------------------+
The OpenStack Object Storage service provides the swift
client,
which is a command-line interface (CLI). Use this client to list
objects and containers, upload objects to containers, and download
or delete objects from containers. You can also gather statistics and
update metadata for accounts, containers, and objects.
This client is based on the native swift client library, client.py
,
which seamlessly re-authenticates if the current token expires during
processing, retries operations multiple times, and provides a processing
concurrency of 10.
To create a container, run the following command and replace
CONTAINER
with the name of your container.
$ swift post CONTAINER
To list all containers, run the following command:
$ swift list
To check the status of containers, run the following command:
$ swift stat
Account: AUTH_7b5970fbe7724bf9b74c245e77c03bcg Containers: 2 Objects: 3 Bytes: 268826 Accept-Ranges: bytes X-Timestamp: 1392683866.17952 Content-Type: text/plain; charset=utf-8
You can also use the swift stat
command with the ACCOUNT
or
CONTAINER
names as parameters.
$ swift stat CONTAINER
Account: AUTH_7b5970fbe7724bf9b74c245e77c03bcg Container: storage1 Objects: 2 Bytes: 240221 Read ACL: Write ACL: Sync To: Sync Key: Accept-Ranges: bytes X-Timestamp: 1392683866.20180 Content-Type: text/plain; charset=utf-8
Users have roles on accounts. For example, a user with the admin role
has full access to all containers and objects in an account. You can
set access control lists (ACLs) at the container level to grant read
and write access, which you configure with the
X-Container-Read
and X-Container-Write
headers.
To give a user read access, use the swift post
command with the
-r
parameter. To give a user write access, use the
-w
parameter.
The following are examples of read
ACLs for containers:
A request with any HTTP referer header can read container contents:
$ swift post CONTAINER -r ".r:*"
A request with any HTTP referer header can read and list container contents:
$ swift post CONTAINER -r ".r:*,.rlistings"
A list of specific HTTP referer headers permitted to read container contents:
$ swift post CONTAINER -r \ ".r:openstack.example.com,.r:swift.example.com,.r:storage.example.com"
A list of specific HTTP referer headers denied read access:
$ swift post CONTAINER -r \ ".r:*,.r:-openstack.example.com,.r:-swift.example.com,.r:-storage.example.com"
All users residing in project1 can read container contents:
$ swift post CONTAINER -r "project1:*"
User1 from project1 can read container contents:
$ swift post CONTAINER -r "project1:user1"
A list of specific users and projects permitted to read container contents:
$ swift post CONTAINER -r \ "project1:user1,project1:user2,project3:*,project4:user1"
The following are examples of write
ACLs for containers:
All users residing in project1 can write to the container:
$ swift post CONTAINER -w "project1:*"
User1 from project1 can write to the container:
$ swift post CONTAINER -w "project1:user1"
A list of specific users and projects permitted to write to the container:
$ swift post CONTAINER -w \ "project1:user1,project1:user2,project3:*,project4:user1"
To successfully write to a container, a user must have read privileges
(in addition to write) on the container. In all of the preceding
read/write ACL examples, you can replace the project/user name with
the project/user UUID, that is, <project_uuid>:<user_uuid>
. If using multiple
keystone domains, UUID format is required.
To upload an object to a container, run the following command:
$ swift upload CONTAINER OBJECT_FILENAME
To upload in chunks, for large files, run the following command:
$ swift upload -S CHUNK_SIZE CONTAINER OBJECT_FILENAME
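The -S option splits the file into segments of at most CHUNK_SIZE bytes. As a rough sketch of the arithmetic only (the exact segmentation behavior of swift upload may differ, and segment_count is a hypothetical helper):

```python
import math

def segment_count(object_size: int, chunk_size: int) -> int:
    """Number of segments needed to cover a file of object_size bytes
    when each segment holds at most chunk_size bytes."""
    if object_size == 0:
        return 1  # assume an empty file still uploads as one object
    return math.ceil(object_size / chunk_size)

# A 5 GB object uploaded in 1 GB segments:
print(segment_count(5 * 1024**3, 1024**3))  # -> 5
```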
To check the status of the object, run the following command:
$ swift stat CONTAINER OBJECT_FILENAME
Account: AUTH_7b5970fbe7724bf9b74c245e77c03bcg Container: storage1 Object: images Content Type: application/octet-stream Content Length: 211616 Last Modified: Tue, 18 Feb 2014 00:40:36 GMT ETag: 82169623d55158f70a0d720f238ec3ef Meta Orig-Filename: images.jpg Accept-Ranges: bytes X-Timestamp: 1392684036.33306
To list the objects in a container, run the following command:
$ swift list CONTAINER
To download an object from a container, run the following command:
$ swift download CONTAINER OBJECT_FILENAME
To run the cURL command examples for the Object Storage API requests, set these environment variables:
The public URL that is the HTTP endpoint from which you can access
Object Storage. It includes the Object Storage API version number
and your account name. For example,
https://23.253.72.207/v1/my_account
.
The authentication token for Object Storage.
To obtain these values, run the swift stat -v
command.
As shown in this example, the public URL appears in the StorageURL
field, and the token appears in the Auth Token
field:
StorageURL: https://23.253.72.207/v1/my_account Auth Token: {token} Account: my_account Containers: 2 Objects: 3 Bytes: 47 Meta Book: MobyDick X-Timestamp: 1389453423.35964 X-Trans-Id: txee55498935404a2caad89-0052dd3b77 Content-Type: text/plain; charset=utf-8 Accept-Ranges: bytes
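If you are scripting, these two fields can be scraped from the swift stat -v output. A minimal sketch, using a sample string that mirrors the output above:

```python
# Extract StorageURL and Auth Token from sample `swift stat -v` output.
sample_output = """\
StorageURL: https://23.253.72.207/v1/my_account
Auth Token: {token}
Account: my_account
Containers: 2
Objects: 3
"""

fields = {}
for line in sample_output.splitlines():
    # Split on the first colon only, so URLs keep their "://".
    key, _, value = line.partition(":")
    fields[key.strip()] = value.strip()

public_url = fields["StorageURL"]
auth_token = fields["Auth Token"]
print(public_url)  # -> https://23.253.72.207/v1/my_account
```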
You can store multiple versions of your content so that you can recover from unintended overwrites. Object versioning is an easy way to implement version control, which you can use with any type of content.
You cannot version a large-object manifest file, but the large-object manifest file can point to versioned segments.
We strongly recommend that you put non-current objects in a different container than the container where current object versions reside.
To enable object versioning, ask your cloud provider to set the
allow_versions
option to TRUE
in the container configuration
file.
Create an archive
container to store older versions of objects:
$ curl -i $publicURL/archive -X PUT -H "Content-Length: 0" -H "X-Auth-Token: $token"
HTTP/1.1 201 Created Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx46f8c29050834d88b8d7e-0052e1859d Date: Thu, 23 Jan 2014 21:11:57 GMT
Create a current
container to store current versions of objects.
Include the X-Versions-Location
header. This header defines the
container that holds the non-current versions of your objects. You
must UTF-8-encode and then URL-encode the container name before you
include it in the X-Versions-Location
header. This header enables
object versioning for all objects in the current
container.
Changes to objects in the current
container automatically create
non-current versions in the archive
container.
$ curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H \ "X-Auth-Token: $token" -H "X-Versions-Location: archive"
HTTP/1.1 201 Created Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: txb91810fb717347d09eec8-0052e18997 Date: Thu, 23 Jan 2014 21:28:55 GMT
Create the first version of an object in the current
container:
$ curl -i $publicURL/current/my_object --data-binary 1 -X PUT -H \ "Content-Length: 0" -H "X-Auth-Token: $token"
HTTP/1.1 201 Created Last-Modified: Thu, 23 Jan 2014 21:31:22 GMT Content-Length: 0 Etag: d41d8cd98f00b204e9800998ecf8427e Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx5992d536a4bd4fec973aa-0052e18a2a Date: Thu, 23 Jan 2014 21:31:22 GMT
Nothing is written to the non-current version container when you
initially PUT
an object in the current
container. However,
subsequent PUT
requests that edit an object trigger the creation
of a version of that object in the archive
container.
These non-current versions are named as follows:
<length><object_name>/<timestamp>
Where length
is the 3-character, zero-padded hexadecimal
length of the object name, <object_name>
is the object name,
and <timestamp>
is the time when the object was initially created
as a current version.
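This naming can be reproduced in a few lines of Python (a sketch; archive_name is a hypothetical helper, and the timestamp is passed as the string form Swift stores):

```python
def archive_name(object_name: str, created_at: str) -> str:
    """Name of a non-current version: the 3-character zero-padded hex
    length of the object name, then the name, a slash, and the timestamp."""
    return "%03x%s/%s" % (len(object_name), object_name, created_at)

# "my_object" is 9 characters long, so the prefix is 009.
print(archive_name("my_object", "1390512682.92052"))
# -> 009my_object/1390512682.92052
```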
Create a second version of the object in the current
container:
$ curl -i $publicURL/current/my_object --data-binary 2 -X PUT -H \ "Content-Length: 0" -H "X-Auth-Token: $token"
HTTP/1.1 201 Created Last-Modified: Thu, 23 Jan 2014 21:41:32 GMT Content-Length: 0 Etag: d41d8cd98f00b204e9800998ecf8427e Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx468287ce4fc94eada96ec-0052e18c8c Date: Thu, 23 Jan 2014 21:41:32 GMT
Issue a GET
request to a versioned object to get the current
version of the object. You do not have to do any request redirects or
metadata lookups.
List older versions of the object in the archive
container:
$ curl -i $publicURL/archive?prefix=009my_object -X GET -H \ "X-Auth-Token: $token"
HTTP/1.1 200 OK Content-Length: 30 X-Container-Object-Count: 1 Accept-Ranges: bytes X-Timestamp: 1390513280.79684 X-Container-Bytes-Used: 0 Content-Type: text/plain; charset=utf-8 X-Trans-Id: tx9a441884997542d3a5868-0052e18d8e Date: Thu, 23 Jan 2014 21:45:50 GMT 009my_object/1390512682.92052
A POST
request to a versioned object updates only the metadata
for the object and does not create a new version of the object. New
versions are created only when the content of the object changes.
Issue a DELETE
request to a versioned object to remove the
current version of the object and replace it with the next-most
current version in the non-current container.
$ curl -i $publicURL/current/my_object -X DELETE -H \ "X-Auth-Token: $token"
HTTP/1.1 204 No Content Content-Length: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx006d944e02494e229b8ee-0052e18edd Date: Thu, 23 Jan 2014 21:51:25 GMT
List objects in the archive
container to show that the archived
object was moved back to the current
container:
$ curl -i $publicURL/archive?prefix=009my_object -X GET -H \ "X-Auth-Token: $token"
HTTP/1.1 204 No Content Content-Length: 0 X-Container-Object-Count: 0 Accept-Ranges: bytes X-Timestamp: 1390513280.79684 X-Container-Bytes-Used: 0 Content-Type: text/html; charset=UTF-8 X-Trans-Id: tx044f2a05f56f4997af737-0052e18eed Date: Thu, 23 Jan 2014 21:51:41 GMT
This next-most current version carries with it any metadata last set
on it. If you want to completely remove an object and you have five
versions of it, you must DELETE
it five times.
To disable object versioning for the current
container, remove
its X-Versions-Location
metadata header by sending an empty key
value.
$ curl -i $publicURL/current -X PUT -H "Content-Length: 0" -H \ "X-Auth-Token: $token" -H "X-Versions-Location: "
HTTP/1.1 202 Accepted Content-Length: 76 Content-Type: text/html; charset=UTF-8 X-Trans-Id: txe2476de217134549996d0-0052e19038 Date: Thu, 23 Jan 2014 21:57:12 GMT <html><h1>Accepted</h1><p>The request is accepted for processing.</p></html>
By default, the Object Storage API uses a text/plain
response
format. In addition, both JSON and XML data serialization response
formats are supported.
To run the cURL command examples, you must export environment variables. For more information, see the section Section 4.16.4, “Environment variables required to run examples”.
To define the response format, use one of these methods:
Method |
Description |
---|---|
format=FORMAT query parameter |
Append this parameter to the URL for a GET request, where FORMAT is json or xml. |
Accept request header |
Include this header in the GET request. The valid header values are application/json, application/xml, and text/xml. |
For example, this request uses the format
query parameter to ask
for a JSON response:
$ curl -i $publicURL?format=json -X GET -H "X-Auth-Token: $token"
HTTP/1.1 200 OK Content-Length: 96 X-Account-Object-Count: 1 X-Timestamp: 1389453423.35964 X-Account-Meta-Subject: Literature X-Account-Bytes-Used: 14 X-Account-Container-Count: 2 Content-Type: application/json; charset=utf-8 Accept-Ranges: bytes X-Trans-Id: tx274a77a8975c4a66aeb24-0052d95365 Date: Fri, 17 Jan 2014 15:59:33 GMT
Object Storage lists container names with additional information in JSON format:
[
{
"count":0,
"bytes":0,
"name":"janeausten"
},
{
"count":1,
"bytes":14,
"name":"marktwain"
}
]
This request uses the Accept
request header to ask for an XML
response:
$ curl -i $publicURL -X GET -H "X-Auth-Token: $token" -H \ "Accept: application/xml; charset=utf-8"
HTTP/1.1 200 OK Content-Length: 263 X-Account-Object-Count: 3 X-Account-Meta-Book: MobyDick X-Timestamp: 1389453423.35964 X-Account-Bytes-Used: 47 X-Account-Container-Count: 2 Content-Type: application/xml; charset=utf-8 Accept-Ranges: bytes X-Trans-Id: txf0b4c9727c3e491694019-0052e03420 Date: Wed, 22 Jan 2014 21:12:00 GMT
Object Storage lists container names with additional information in XML format:
<?xml version="1.0" encoding="UTF-8"?>
<account name="AUTH_73f0aa26640f4971864919d0eb0f0880">
<container>
<name>janeausten</name>
<count>2</count>
<bytes>33</bytes>
</container>
<container>
<name>marktwain</name>
<count>1</count>
<bytes>14</bytes>
</container>
</account>
The remainder of the examples in this guide use standard, non-serialized
responses. However, all GET
requests that perform list operations
accept the format
query parameter or Accept
request header.
If you have a large number of containers or objects, you can use the
marker
, limit
, and end_marker
parameters to control
how many items are returned in a list and where the list starts or ends.
When you request a list of containers or objects, Object Storage
returns a maximum of 10,000 names for each request. To get
subsequent names, you must make another request with the
marker
parameter. Set the marker
parameter to the name of
the last item returned in the previous list. You must URL-encode the
marker
value before you send the HTTP request. Object Storage
returns a maximum of 10,000 names starting after the last item
returned.
To return fewer than 10,000 names, use the limit
parameter. If
the number of names returned equals the specified limit
(or
10,000 if you omit the limit
parameter), you can assume there
are more names to list. If the number of names in the list is
exactly divisible by the limit
value, the last request has no
content.
The end_marker
parameter limits the result set to names that are less than the
end_marker
parameter value. You must URL-encode the
end_marker
value before you send the HTTP request.
Assume the following list of container names:
apples bananas kiwis oranges pears
Use a limit
of two:
# curl -i $publicURL/?limit=2 -X GET -H "X-Auth-Token: $token"
apples bananas
Because the number of names returned equals the limit, there might be more names to list.
Make another request with a marker
parameter set to the name of
the last item returned:
# curl -i "$publicURL/?limit=2&marker=bananas" -X GET -H \ "X-Auth-Token: $token"
kiwis oranges
Again, two items are returned, and there might be more.
Make another request with a marker
of the last item returned:
# curl -i "$publicURL/?limit=2&marker=oranges" -X GET -H \ "X-Auth-Token: $token"
pears
You receive a one-item response, which is fewer than the limit
number of names. This indicates that this is the end of the list.
Use the end_marker
parameter to limit the result set to object
names that are less than the end_marker
parameter value:
# curl -i "$publicURL/?end_marker=oranges" -X GET -H \ "X-Auth-Token: $token"
apples bananas kiwis
You receive a result set of all container names before the
end_marker
value.
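The marker-driven paging loop above can be simulated locally; this sketch walks the same five container names two at a time (list_page is a hypothetical stand-in for one paged GET, assuming the names are already sorted):

```python
def list_page(names, limit=2, marker=None, end_marker=None):
    """Simulate one paged listing request: return up to `limit` names
    after `marker` and before `end_marker` (names assumed sorted)."""
    page = [n for n in names
            if (marker is None or n > marker)
            and (end_marker is None or n < end_marker)]
    return page[:limit]

containers = ["apples", "bananas", "kiwis", "oranges", "pears"]

# Walk the full listing, setting marker to the last name of each page.
pages, marker = [], None
while True:
    page = list_page(containers, limit=2, marker=marker)
    if not page:
        break
    pages.append(page)
    marker = page[-1]

print(pages)  # -> [['apples', 'bananas'], ['kiwis', 'oranges'], ['pears']]
```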
Although you cannot nest directories in OpenStack Object Storage, you
can simulate a hierarchical structure within a single container by
adding forward slash characters (/
) in the object name. To navigate
the pseudo-directory structure, you can use the delimiter
query
parameter. This example shows you how to use pseudo-hierarchical folders
and directories.
In this example, the objects reside in a container called backups
.
Within that container, the objects are organized in a pseudo-directory
called photos
. The container name is not displayed in the example,
but it is a part of the object URLs. For instance, the URL of the
picture me.jpg
is
https://storage.swiftdrive.com/v1/CF_xer7_343/backups/photos/me.jpg
.
To display a list of all the objects in the storage container, use
GET
without a delimiter
or prefix
.
$ curl -X GET -i -H "X-Auth-Token: $token" \ $publicurl/v1/AccountString/backups
The system returns status code 2xx (between 200 and 299, inclusive) and the requested list of the objects.
photos/animals/cats/persian.jpg photos/animals/cats/siamese.jpg photos/animals/dogs/corgi.jpg photos/animals/dogs/poodle.jpg photos/animals/dogs/terrier.jpg photos/me.jpg photos/plants/fern.jpg photos/plants/rose.jpg
Use the delimiter
parameter to limit the displayed results. To use
delimiter
with pseudo-directories, set the delimiter
parameter to a slash
(/
).
$ curl -X GET -i -H "X-Auth-Token: $token" \ $publicurl/v1/AccountString/backups?delimiter=/
The system returns status code 2xx (between 200 and 299, inclusive) and
the requested matching objects. Because you use the slash, only the
pseudo-directory photos/
displays. The returned values from a slash
delimiter
query are not real objects. The value will refer to
a real object if it does not end with a slash. The pseudo-directories
have no content-type, rather, each pseudo-directory has
its own subdir
entry in the response of JSON and XML results.
For example:
[
{
"subdir": "photos/"
}
]
[
{
"subdir": "photos/animals/"
},
{
"hash": "b249a153f8f38b51e92916bbc6ea57ad",
"last_modified": "2015-12-03T17:31:28.187370",
"bytes": 2906,
"name": "photos/me.jpg",
"content_type": "image/jpeg"
},
{
"subdir": "photos/plants/"
}
]
<?xml version="1.0" encoding="UTF-8"?>
<container name="backups">
<subdir name="photos/">
<name>photos/</name>
</subdir>
</container>
<?xml version="1.0" encoding="UTF-8"?>
<container name="backups">
<subdir name="photos/animals/">
<name>photos/animals/</name>
</subdir>
<object>
<name>photos/me.jpg</name>
<hash>b249a153f8f38b51e92916bbc6ea57ad</hash>
<bytes>2906</bytes>
<content_type>image/jpeg</content_type>
<last_modified>2015-12-03T17:31:28.187370</last_modified>
</object>
<subdir name="photos/plants/">
<name>photos/plants/</name>
</subdir>
</container>
Use the prefix
and delimiter
parameters to view the objects
inside a pseudo-directory, including further nested pseudo-directories.
$ curl -X GET -i -H "X-Auth-Token: $token" \ "$publicurl/v1/AccountString/backups?prefix=photos/&delimiter=/"
The system returns status code 2xx (between 200 and 299, inclusive) and the objects and pseudo-directories within the top level pseudo-directory.
photos/animals/ photos/me.jpg photos/plants/
You can create an unlimited number of nested pseudo-directories. To
navigate through them, use a longer prefix
parameter coupled with
the delimiter
parameter. In this sample output, there is a
pseudo-directory called dogs
within the pseudo-directory
animals
. To navigate directly to the files contained within
dogs
, enter the following command:
$ curl -X GET -i -H "X-Auth-Token: $token" \ "$publicurl/v1/AccountString/backups?prefix=photos/animals/dogs/&delimiter=/"
The system returns status code 2xx (between 200 and 299, inclusive) and the objects and pseudo-directories within the nested pseudo-directory.
photos/animals/dogs/corgi.jpg photos/animals/dogs/poodle.jpg photos/animals/dogs/terrier.jpg
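The effect of prefix and delimiter can be simulated locally. This sketch (list_objects is a hypothetical helper, not the Swift API) reproduces the listings above from the full set of object names in the backups container:

```python
def list_objects(names, prefix="", delimiter=None):
    """Simulate a container GET with prefix and delimiter parameters.
    Names past the first delimiter collapse into one pseudo-directory."""
    results = []
    for name in names:
        if not name.startswith(prefix):
            continue
        rest = name[len(prefix):]
        if delimiter and delimiter in rest:
            # Everything after the first delimiter becomes a subdir entry.
            entry = prefix + rest.split(delimiter, 1)[0] + delimiter
        else:
            entry = name
        if entry not in results:
            results.append(entry)
    return results

backups = [
    "photos/animals/cats/persian.jpg",
    "photos/animals/cats/siamese.jpg",
    "photos/animals/dogs/corgi.jpg",
    "photos/animals/dogs/poodle.jpg",
    "photos/animals/dogs/terrier.jpg",
    "photos/me.jpg",
    "photos/plants/fern.jpg",
    "photos/plants/rose.jpg",
]

print(list_objects(backups, delimiter="/"))
# -> ['photos/']
print(list_objects(backups, prefix="photos/", delimiter="/"))
# -> ['photos/animals/', 'photos/me.jpg', 'photos/plants/']
```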
Your Object Storage system might not enable all features that this document describes.
To discover which features are enabled in your Object Storage system,
use the /info
request.
To use the /info
request, send a GET
request using the /info
path to the Object Store endpoint as shown in this example:
$ curl https://storage.example.com/info
This example shows a truncated response body:
{
"swift":{
"version":"1.11.0"
},
"staticweb":{
},
"tempurl":{
}
}
This output shows that the Object Storage system has enabled the static website and temporary URL features.
In some cases, the /info
request will return an error. This could be
because your service provider has disabled the /info
request
function, or because you are using an older version that does not
support it.
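A script can make the same check programmatically. This sketch parses the truncated /info response body shown above; feature_enabled is a hypothetical helper:

```python
import json

# Truncated /info response body from the example above.
info_body = '{"swift": {"version": "1.11.0"}, "staticweb": {}, "tempurl": {}}'

capabilities = json.loads(info_body)

def feature_enabled(name: str) -> bool:
    """A feature is enabled if its key appears in the /info response."""
    return name in capabilities

print(feature_enabled("tempurl"))  # -> True
print(feature_enabled("slo"))      # -> False
```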
To discover whether your Object Storage system supports this feature, see Section 4.16.9, “Discoverability” or check with your service provider.
By default, the content of an object cannot be greater than 5 GB. However, you can use a number of smaller objects to construct a large object. The large object comprises two types of objects:
Segment objects
store the object content. You can divide your content
into segments and upload each segment into its own segment object. Segment
objects do not have any special features. You create, update, download, and
delete segment objects just as you do with normal objects.
A manifest object
links the segment objects into one logical large
object. When you download a manifest object, Object Storage concatenates and
returns the contents of the segment objects in the response body. This
behavior extends to the response headers returned by GET
and HEAD
requests. The Content-Length
response header contains the total size of
all segment objects.
To calculate the ETag
response header value, Object Storage takes the ETag
value of each segment, concatenates them,
and returns the MD5 checksum of the result. The manifest object types are:
The manifest object content is an ordered list of the names of the segment objects in JSON format. See Section 4.16.10.1, “Static large objects”.
The manifest object has no content but it has a
X-Object-Manifest
metadata header. The value of this header
is CONTAINER/PREFIX
, where CONTAINER
is the name of
the container where the segment objects are stored, and
PREFIX
is a string that all segment objects have in common.
See Section 4.16.10.2, “Dynamic large objects”.
If you use a manifest object as the source of a COPY
request, the
new object is a normal, and not a segment, object. If the total size of the
source segment objects exceeds 5 GB, the COPY
request fails. However,
you can make a duplicate of the manifest object and this new object can be
larger than 5 GB.
To create a static large object, divide your content into pieces and create (upload) a segment object to contain each piece.
You must record the ETag
response header value that the PUT
operation
returns. Alternatively, you can calculate the MD5 checksum of the segment
before you perform the upload and include this value in the ETag
request
header. This action ensures that the upload cannot corrupt your data.
List the name of each segment object along with its size and MD5 checksum in order.
Create a manifest object. Include the ?multipart-manifest=put
query string at the end of the manifest object name to indicate that
this is a manifest object.
The body of the PUT
request on the manifest object comprises a JSON
list where each element contains these attributes:
The container and object name in the format:
CONTAINER_NAME/OBJECT_NAME
.
The MD5 checksum of the content of the segment object. This value
must match the ETag
of that object.
The size of the segment object. This value must match the
Content-Length
of that object.
This example shows three segment objects. You can use several containers and the object names do not have to conform to a specific pattern, in contrast to dynamic large objects.
[
{
"path": "mycontainer/objseg1",
"etag": "0228c7926b8b642dfb29554cd1f00963",
"size_bytes": 1468006
},
{
"path": "mycontainer/pseudodir/seg-obj2",
"etag": "5bfc9ea51a00b790717eeb934fb77b9b",
"size_bytes": 1572864
},
{
"path": "other-container/seg-final",
"etag": "b9c3da507d2557c1ddc51f27c54bae51",
"size_bytes": 256
}
]
The Content-Length
request header must contain the length of the
JSON content and not the length of the segment objects. However, after the
PUT
operation completes, the Content-Length
metadata is set to
the total length of all the object segments. A similar situation applies
to the ETag
. If used in the PUT
operation, it must contain the
MD5 checksum of the JSON content. The ETag
metadata value is then
set to be the MD5 checksum of the concatenated ETag
values of the
object segments. You can also set the Content-Type
request header
and custom object metadata.
When the PUT
operation sees the ?multipart-manifest=put
query
parameter, it reads the request body and verifies that each segment
object exists and that the sizes and ETags match. If there is a
mismatch, the PUT
operation fails.
If everything matches, the API creates the manifest object and sets the
X-Static-Large-Object
metadata to true
to indicate that the manifest is
a static object manifest.
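As a minimal sketch of assembling such a request (the container, object names, checksum, and endpoint are illustrative), you can build the manifest body locally and validate the JSON before the PUT:

```shell
# Write a one-segment manifest body and check that it is valid JSON
# before sending it; path, etag, and size_bytes are illustrative.
cat > manifest.json <<'EOF'
[
    {
        "path": "mycontainer/objseg1",
        "etag": "0228c7926b8b642dfb29554cd1f00963",
        "size_bytes": 1468006
    }
]
EOF
python3 -m json.tool manifest.json

# Upload with the ?multipart-manifest=put query parameter
# (hypothetical endpoint):
#   curl -X PUT -T manifest.json -H "X-Auth-Token: $TOKEN" \
#        "https://storage.example.com/v1/ACCOUNT/CONTAINER/largeobject?multipart-manifest=put"
```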
Normally when you perform a GET
operation on the manifest object, the
response body contains the concatenated content of the segment objects. To
download the manifest list, use the ?multipart-manifest=get
query
parameter. The list in the response is not formatted the same as the manifest
that you originally used in the PUT
operation.
If you use the DELETE
operation on a manifest object, the manifest
object is deleted. The segment objects are not affected. However, if you
add the ?multipart-manifest=delete
query parameter, the segment
objects are deleted and if all are successfully deleted, the manifest
object is also deleted.
To change the manifest, use a PUT
operation with the
?multipart-manifest=put
query parameter. This request creates a
manifest object. You can also update the object metadata in the usual
way.
Before you can upload objects that are larger than 5 GB, you must segment them. You upload the segment objects like you do with any other object and create a dynamic large manifest object. The manifest object tells Object Storage how to find the segment objects that comprise the large object. You can still access each segment individually, but when you retrieve the manifest object, the API concatenates the segments. You can include any number of segments in a single large object.
To ensure the download works correctly, you must upload all the object segments to the same container and prefix each object name so that the segments sort in correct concatenation order.
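One way to produce names that sort correctly (a sketch; the file name and the 4-byte segment size are only for illustration) is the split utility, whose generated suffixes sort lexically:

```shell
# Split a file into pieces whose names share a common prefix and sort
# in the original order.
mkdir -p segs
printf 'abcdefghij' > largefile
split -b 4 largefile segs/largefile-part-

# A sorted glob restores the concatenation order.
cat segs/largefile-part-* > rejoined
cmp largefile rejoined && echo "order ok"
```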
You also create and upload a manifest file. The manifest file is a zero-byte
file with the extra X-Object-Manifest
CONTAINER/PREFIX
header. The
CONTAINER
is the container the object segments are in and PREFIX
is
the common prefix for all the segments. You must UTF-8-encode and then
URL-encode the container and common prefix in the X-Object-Manifest
header.
It is best to upload all the segments first and then create or update the manifest. With this method, the full object is not available for downloading until the upload is complete. Also, you can upload a new set of segments to a second location and update the manifest to point to this new location. During the upload of the new segments, the original manifest is still available to download the first set of segments.
PUT /API_VERSION/ACCOUNT/CONTAINER/OBJECT HTTP/1.1
Host: storage.example.com
X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb
ETag: 8a964ee2a5e88be344f36c22562a6486
Content-Length: 1
X-Object-Meta-PIN: 1234
No response body is returned.
A 2nn response code indicates a successful write; nn is a value from 00 to 99.
The Length Required (411)
response code indicates that the request does
not include a required Content-Length
or Content-Type
header.
The Unprocessable Entity (422)
response code indicates that the MD5
checksum of the data written to the storage system does NOT match the optional
ETag value.
You can continue to upload segments, like this example shows, before you upload the manifest.
PUT /API_VERSION/ACCOUNT/CONTAINER/OBJECT HTTP/1.1
Host: storage.example.com
X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb
ETag: 8a964ee2a5e88be344f36c22562a6486
Content-Length: 1
X-Object-Meta-PIN: 1234
Next, upload the manifest. This manifest specifies the container where the object segments reside. Note that if you upload additional segments after you create the manifest, the concatenated object becomes that much larger but you do not need to recreate the manifest file for subsequent additional segments.
PUT /API_VERSION/ACCOUNT/CONTAINER/OBJECT HTTP/1.1
Host: storage.clouddrive.com
X-Auth-Token: eaaafd18-0fed-4b3a-81b4-663c99ec1cbb
Content-Length: 0
X-Object-Meta-PIN: 1234
X-Object-Manifest: CONTAINER/PREFIX
[...]
A GET
or HEAD
request on the manifest returns a Content-Type
response header value that is the same as the Content-Type
request header
value in the PUT
request that created the manifest. To change the
Content-Type
, reissue the PUT
request.
You can use the X-Trans-Id-Extra
request header to include extra
information to help you debug any errors that might occur with large object
upload and other Object Storage transactions.
The Object Storage API appends the first 32 characters of the
X-Trans-Id-Extra
request header value to the transaction ID value in the
generated X-Trans-Id
response header. You must UTF-8-encode and then
URL-encode the extra transaction information before you include it in
the X-Trans-Id-Extra
request header.
For example, you can include extra transaction information when you upload large objects such as images.
When you upload each segment and the manifest, include the same value in the
X-Trans-Id-Extra
request header. If an error occurs, you can find all
requests that are related to the large object upload in the Object Storage
logs.
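A sketch of the encoding step (the extra value itself is illustrative):

```shell
# UTF-8-encode and URL-encode the extra transaction information; only
# the first 32 characters are appended to the X-Trans-Id response value.
EXTRA=$(python3 -c 'import urllib.parse; print(urllib.parse.quote("image upload 2016-06-08"))')
echo "$EXTRA"    # image%20upload%202016-06-08

# Include the same value on every request that belongs to the upload
# (hypothetical endpoint):
#   curl -X PUT ... -H "X-Trans-Id-Extra: $EXTRA" https://storage.example.com/...
```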
You can also use X-Trans-Id-Extra
strings to help operators debug requests
that fail to receive responses. The operator can search for the extra
information in the logs.
While static and dynamic large objects have similar behavior, this table describes their differences:

| Description | Static large object | Dynamic large object |
|---|---|---|
| End-to-end integrity | Assured. The list of segments includes the MD5 checksum (ETag) of each segment object. | Not guaranteed. The eventual consistency model means that although you have uploaded a segment object, it might not appear in the container listing until later. If you download the manifest before it appears in the container, it does not form part of the content returned in response to a GET request. |
| Upload order | You must upload the segment objects before you upload the manifest object. | You can upload manifest and segment objects in any order. Uploading the manifest object after the segments is recommended in case a premature download of the manifest occurs. However, this is not enforced. |
| Removal or addition of segment objects | You cannot add or remove segment objects from the manifest. However, you can create a completely new manifest object of the same name with a different manifest list. | You can upload new segment objects or remove existing segments. The names must simply match the PREFIX supplied in the X-Object-Manifest header. |
| Segment object size and number | Segment objects must be at least 1 MB in size (by default). The final segment object can be any size. At most, 1000 segments are supported (by default). | Segment objects can be any size. |
| Segment object container name | The manifest list includes the container name of each object. Segment objects can be in different containers. | All segment objects must be in the same container. |
| Manifest object metadata | The object has X-Static-Large-Object set to true. You do not set this metadata directly. Instead, the system sets it when you PUT a static manifest object. | The X-Object-Manifest value is the CONTAINER/PREFIX, which indicates where the segment objects are located. You supply this request header in the PUT operation. |
| Copying the manifest object | Include the ?multipart-manifest=get query string in the COPY request. The new object contains the same manifest as the original. The segment objects are not copied. Instead, both the original and new manifest objects share the same set of segment objects. | The COPY operation does not create a manifest object. To duplicate a manifest object, use the GET operation to read the value of X-Object-Manifest and use this value in the X-Object-Manifest request header in a PUT operation. This creates a new manifest object that shares the same set of segment objects as the original. |
To discover whether your Object Storage system supports this feature, see Section 4.16.9, “Discoverability”. Alternatively, check with your service provider.
Use the auto-extract archive feature to upload a tar archive file.
The Object Storage system extracts files from the archive file and creates an object.
To upload an archive file, make a PUT
request. Add the
extract-archive=format
query parameter to indicate that you are
uploading a tar archive file instead of normal content.
Valid values for the format
variable are tar
, tar.gz
, or
tar.bz2
.
The path you specify in the PUT
request is used for the location of
the object and the prefix for the resulting object names.
In the PUT
request, you can specify the path for:
An account
Optionally, a specific container
Optionally, a specific object prefix
For example, if the first object in the tar archive is
/home/file1.txt
and you specify the
/v1/12345678912345/mybackup/castor/
path, the operation creates the
castor/home/file1.txt
object in the mybackup
container in the
12345678912345
account.
You must use the tar utility to create the tar archive file.
You can upload regular files but you cannot upload other items (for example, empty directories or symbolic links).
You must UTF-8-encode the member names.
The archive auto-extract feature supports these formats:
The POSIX.1-1988 Ustar format.
The GNU tar format. Includes the long name, long link, and sparse extensions.
The POSIX.1-2001 pax format.
Use gzip or bzip2 to compress the archive.
Use the extract-archive
query parameter to specify the format.
Valid values for this parameter are tar, tar.gz, or tar.bz2.
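Putting these requirements together, a sketch (file names and the curl endpoint are illustrative):

```shell
# Archive a regular file with tar and compress it with gzip; the member
# name is UTF-8 (plain ASCII here) and no symbolic links or empty
# directories are included.
mkdir -p home
printf 'hello' > home/file1.txt
tar czf backup.tar.gz home/file1.txt
tar tzf backup.tar.gz

# Upload with the matching extract-archive value (hypothetical endpoint):
#   curl -X PUT --data-binary @backup.tar.gz -H "X-Auth-Token: $TOKEN" \
#        "https://storage.example.com/v1/12345678912345/mybackup/castor/?extract-archive=tar.gz"
```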
When Object Storage processes the request, it performs multiple
sub-operations. Even if all sub-operations fail, the operation returns a
201 Created
status. Some sub-operations might succeed while others
fail. Examine the response body to determine the results of each
auto-extract archive sub-operation.
You can set the Accept
request header to one of these values to
define the response format:
text/plain
Formats response as plain text. If you omit the Accept
header,
text/plain
is the default.
application/json
Formats response as JSON.
application/xml
Formats response as XML.
text/xml
Formats response as XML.
The following auto-extract archive files example shows a text/plain
response body where no failures occurred:
Number Files Created: 10
Errors:
The following auto-extract archive files example shows a text/plain
response where some failures occurred. In this example, the Object
Storage system is configured to reject certain character strings so that
the 400 Bad Request error occurs for any objects that use the restricted
strings.
Number Files Created: 8
Errors:
/v1/12345678912345/mycontainer/home/xx%3Cyy, 400 Bad Request
/v1/12345678912345/mycontainer/../image.gif, 400 Bad Request
The following example shows the failure response in application/json
format.
{
"Number Files Created":1,
"Errors":[
[
"/v1/12345678912345/mycontainer/home/xx%3Cyy",
"400 Bad Request"
],
[
"/v1/12345678912345/mycontainer/../image.gif",
"400 Bad Request"
]
]
}
To discover whether your Object Storage system supports this feature, see Section 4.16.9, “Discoverability”. Alternatively, check with your service provider.
With bulk delete, you can delete up to 10,000 objects or containers (configurable) in one request.
To perform a bulk delete operation, add the bulk-delete
query
parameter to the path of a POST
or DELETE
operation.
The DELETE
operation is supported for backwards compatibility.
The path is the account, such as /v1/12345678912345
, that contains
the objects and containers.
In the request body of the POST
or DELETE
operation, list the
objects or containers to be deleted. Separate each name with a newline
character. You can include a maximum of 10,000 items (configurable) in
the list.
In addition, you must:
UTF-8-encode and then URL-encode the names.
To indicate an object, specify the container and object name as:
CONTAINER_NAME/OBJECT_NAME.
To indicate a container, specify the container name as:
CONTAINER_NAME
. Make sure that the container is empty. If it
contains objects, Object Storage cannot delete the container.
Set the Content-Type
request header to text/plain
.
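A sketch of building the request body (container and object names are illustrative):

```shell
# Build the newline-separated, URL-encoded list of objects and empty
# containers to delete.
python3 - <<'EOF' > bulk.txt
import urllib.parse
items = ["mycontainer/objseg1", "mycontainer/pseudodir/seg-obj2", "emptycontainer"]
print("\n".join(urllib.parse.quote(i) for i in items))
EOF
cat bulk.txt

# Send the list in a POST (hypothetical endpoint):
#   curl -X POST --data-binary @bulk.txt -H "X-Auth-Token: $TOKEN" \
#        -H "Content-Type: text/plain" \
#        "https://storage.example.com/v1/12345678912345?bulk-delete"
```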
When Object Storage processes the request, it performs multiple sub-operations. Even if all sub-operations fail, the operation returns a 200 status. The bulk operation returns a response body that contains details that indicate which sub-operations have succeeded and failed. Some sub-operations might succeed while others fail. Examine the response body to determine the results of each delete sub-operation.
You can set the Accept
request header to one of the following values
to define the response format:
text/plain
Formats response as plain text. If you omit the
Accept
header, text/plain
is the default.
application/json
Formats response as JSON.
application/xml
or text/xml
Formats response as XML.
The response body contains the following information:
The number of files actually deleted.
The number of not found objects.
Errors. A list of object names and associated error statuses for the
objects that failed to delete. The format depends on the value that
you set in the Accept
header.
The following bulk delete response is in application/xml
format. In
this example, the mycontainer
container is not empty, so it cannot
be deleted.
<delete>
<number_deleted>2</number_deleted>
<number_not_found>4</number_not_found>
<errors>
<object>
<name>/v1/12345678912345/mycontainer</name>
<status>409 Conflict</status>
</object>
</errors>
</delete>
To discover whether your Object Storage system supports this feature, see Section 4.16.9, “Discoverability”. Alternatively, check with your service provider.
You can use your Object Storage account to create a static website. This
static website is created with Static Web middleware and serves container
data with a specified index file, error file resolution, and optional
file listings. This mode is normally active only for anonymous requests,
which provide no authentication token. To use it with authenticated
requests, set the header X-Web-Mode
to TRUE
on the request.
The Static Web filter must be added to the pipeline in your
/etc/swift/proxy-server.conf
file below any authentication
middleware. You must also add a Static Web middleware configuration
section.
See the Cloud Administrator Guide for an example of the static web configuration syntax.
See the Cloud Administrator Guide for a complete example of the /etc/swift/proxy-server.conf file (including static web).
Your publicly readable containers are checked for two headers,
X-Container-Meta-Web-Index
and X-Container-Meta-Web-Error
. The
X-Container-Meta-Web-Error
header is discussed below, in the
section called Section 4.16.13.1.5, “Set error pages for static website”.
Use X-Container-Meta-Web-Index
to determine the index file (or
default page served, such as index.html
) for your website. When
someone initially enters your site, the index.html
file displays
automatically. If you create sub-directories for your site by creating
pseudo-directories in your container, the index page for each
sub-directory is displayed by default. If your pseudo-directory does not
have a file with the same name as your index file, visits to the
sub-directory return a 404 error.
You also have the option of displaying a list of files in your
pseudo-directory instead of a web page. To do this, set the
X-Container-Meta-Web-Listings
header to TRUE
. You may add styles
to your file listing by setting X-Container-Meta-Web-Listings-CSS
to a style sheet (for example, lists.css
).
The following sections show how to use Static Web middleware through Object Storage.
Make the container publicly readable. Once the container is publicly readable, you can access your objects directly, but you must set the index file to browse the main site URL and its sub-directories.
$ swift post -r '.r:*,.rlistings' container
Set the index file. In this case, index.html
is the default file
displayed when the site appears.
$ swift post -m 'web-index:index.html' container
Turn on file listing. If you do not set the index file, the URL displays a list of the objects in the container. Instructions on styling the list with a CSS follow.
$ swift post -m 'web-listings: true' container
Style the file listing using a CSS.
$ swift post -m 'web-listings-css:listings.css' container
You can create and set custom error pages for visitors to your website;
currently, only 401 (Unauthorized) and 404 (Not Found) errors are
supported. To do this, set the metadata header,
X-Container-Meta-Web-Error
.
Error pages are served with the status code prepended to the name of
the error page you set. For instance, if you set
X-Container-Meta-Web-Error
to error.html
, 401 errors will
display the page 401error.html
. Similarly, 404 errors will display
404error.html
. You must have both of these pages created in your
container when you set the X-Container-Meta-Web-Error
metadata, or
your site will display generic error pages.
You only have to set the X-Container-Meta-Web-Error
metadata once
for your entire static website.
$ swift post -m 'web-error:error.html' container
Any 2nn
response indicates success.
The Orchestration service enables you to orchestrate multiple composite cloud applications. This service supports use of both the Amazon Web Services (AWS) CloudFormation template format through both a Query API that is compatible with CloudFormation and the native OpenStack Heat Orchestration Template (HOT) format through a REST API.
These flexible template languages enable application developers to describe and automate the deployment of infrastructure, services, and applications. The templates enable creation of most OpenStack resource types, such as instances, floating IP addresses, volumes, security groups, and users. The resources, once created, are referred to as stacks.
The template languages are described in the Template Guide in the Heat developer documentation.
To create a stack, or template, from an example template file, run the following command:
$ openstack stack create --template server_console.yaml \
  --parameter "image=cirros" MYSTACK
The --parameter
values that you specify depend on the parameters
that are defined in the template. If a website hosts the template
file, you can also specify the URL with the --template
parameter.
The command returns the following output:
+---------------------+----------------------------------------------------------------+
| Field               | Value                                                          |
+---------------------+----------------------------------------------------------------+
| id                  | 70b9feca-8f99-418e-b2f1-cc38d61b3ffb                           |
| stack_name          | MYSTACK                                                        |
| description         | The heat template is used to demo the 'console_urls' attribute |
|                     | of OS::Nova::Server.                                           |
|                     |                                                                |
| creation_time       | 2016-06-08T09:54:15                                            |
| updated_time        | None                                                           |
| stack_status        | CREATE_IN_PROGRESS                                             |
| stack_status_reason |                                                                |
+---------------------+----------------------------------------------------------------+
You can also use the --dry-run
option with the
openstack stack create
command to validate a
template file without creating a stack from it.
If validation fails, the response returns an error message.
To explore the state and history of a particular stack, you can run a number of commands.
To see which stacks are visible to the current user, run the following command:
$ openstack stack list
+--------------------------------------+------------+-----------------+---------------------+--------------+
| ID                                   | Stack Name | Stack Status    | Creation Time       | Updated Time |
+--------------------------------------+------------+-----------------+---------------------+--------------+
| 70b9feca-8f99-418e-b2f1-cc38d61b3ffb | MYSTACK    | CREATE_COMPLETE | 2016-06-08T09:54:15 | None         |
+--------------------------------------+------------+-----------------+---------------------+--------------+
To show the details of a stack, run the following command:
$ openstack stack show MYSTACK
A stack consists of a collection of resources. To list the resources and their status, run the following command:
$ openstack stack resource list MYSTACK
+---------------+--------------------------------------+------------------+-----------------+---------------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status | updated_time        |
+---------------+--------------------------------------+------------------+-----------------+---------------------+
| server        | 1b3a7c13-42be-4999-a2a1-8fbefd00062b | OS::Nova::Server | CREATE_COMPLETE | 2016-06-08T09:54:15 |
+---------------+--------------------------------------+------------------+-----------------+---------------------+
To show the details for a specific resource in a stack, run the following command:
$ openstack stack resource show MYSTACK server
Some resources have associated metadata which can change throughout the lifecycle of a resource. Show the metadata by running the following command:
$ openstack stack resource metadata MYSTACK server
A series of events is generated during the lifecycle of a stack. To display lifecycle events, run the following command:
$ openstack stack event list MYSTACK
2016-06-08 09:54:15 [MYSTACK]: CREATE_IN_PROGRESS Stack CREATE started
2016-06-08 09:54:15 [server]: CREATE_IN_PROGRESS state changed
2016-06-08 09:54:41 [server]: CREATE_COMPLETE state changed
2016-06-08 09:54:41 [MYSTACK]: CREATE_COMPLETE Stack CREATE completed successfully
To show the details for a particular event, run the following command:
$ openstack stack event show MYSTACK server EVENT
To update an existing stack from a modified template file, run a command like the following command:
$ openstack stack update --template server_console.yaml \
  --parameter "image=ubuntu" MYSTACK
+---------------------+----------------------------------------------------------------+
| Field               | Value                                                          |
+---------------------+----------------------------------------------------------------+
| id                  | 267a459a-a8cd-4d3e-b5a1-8c08e945764f                           |
| stack_name          | mystack                                                        |
| description         | The heat template is used to demo the 'console_urls' attribute |
|                     | of OS::Nova::Server.                                           |
|                     |                                                                |
| creation_time       | 2016-06-08T09:54:15                                            |
| updated_time        | 2016-06-08T10:41:18                                            |
| stack_status        | UPDATE_IN_PROGRESS                                             |
| stack_status_reason | Stack UPDATE started                                           |
+---------------------+----------------------------------------------------------------+
Some resources are updated in-place, while others are replaced with new resources.
Telemetry measures cloud resources in OpenStack. It collects data
related to billing. Currently, this metering service is available
only through the ceilometer
command-line client.
To model data, Telemetry uses the following abstractions:
Measures a specific aspect of resource usage,
such as the existence of a running instance, or
ongoing performance, such as the CPU utilization
for an instance. Meters exist for each type of
resource. For example, a separate cpu_util
meter exists for each instance. The lifecycle
of a meter is decoupled from the existence of
its related resource. The meter persists after
the resource goes away.
A meter has the following attributes:
String name
A unit of measurement
A type, which indicates whether values increase monotonically (cumulative), are interpreted as a change from the previous value (delta), or are stand-alone and relate only to the current duration (gauge)
An individual data point that is associated with a specific meter.
A sample has the same attributes as the associated meter, with
the addition of time stamp and value attributes. The value attribute
is also known as the sample volume
.
A set of data points aggregated over a time duration. (In contrast, a sample represents a single data point.) The Telemetry service employs the following aggregation functions:
count. The number of samples in each period.
max. The maximum number of sample volumes in each period.
min. The minimum number of sample volumes in each period.
avg. The average of sample volumes over each period.
sum. The sum of sample volumes over each period.
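The five functions can be illustrated with a short pipeline over sample volumes (the values are illustrative cpu_util percentages, not taken from the original):

```shell
# Apply count, min, max, avg, and sum to one period of sample volumes.
printf '3.5\n4.0\n2.5\n' | awk '
    { n++; s += $1
      if (n == 1 || $1 < lo) lo = $1
      if (n == 1 || $1 > hi) hi = $1 }
    END { printf "count=%d min=%.1f max=%.1f avg=%.2f sum=%.1f\n",
                 n, lo, hi, s / n, s }'
```

This prints `count=3 min=2.5 max=4.0 avg=3.33 sum=10.0`, mirroring what the ceilometer statistics command reports per period.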
A set of rules that define a monitor and a current state, with
edge-triggered actions associated with target states.
Alarms provide user-oriented Monitoring-as-a-Service and a
general purpose utility for OpenStack. Orchestration auto
scaling is a typical use case. Alarms follow a tristate
model of ok
, alarm
, and insufficient data
.
For conventional threshold-oriented alarms, a static
threshold value and comparison operator govern state transitions.
The comparison operator compares a selected meter statistic against
an evaluation window of configurable length into the recent past.
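A sketch of that evaluation, mirroring a cpu_util > 50.0 alarm (the threshold and sample values are illustrative):

```shell
# Average the sample volumes in the evaluation window, then compare the
# statistic against a static threshold of 50.0.
avg=$(printf '55.0\n62.0\n58.0\n' | awk '{ s += $1; n++ } END { print s / n }')
awk -v a="$avg" 'BEGIN { if (a > 50.0) print "alarm"; else print "ok" }'
```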
This example uses the openstack
client to create an auto-scaling
stack and the ceilometer
client to measure resources.
Create an auto-scaling stack by running the following command.
The -f
option specifies the name of the stack template
file, and the -P
option specifies the KeyName
parameter as heat_key
:
$ openstack stack create --template cfn/F17/AutoScalingCeilometer.yaml \
  --parameter "KeyName=heat_key" mystack
List the heat resources that were created:
$ openstack stack resource list mystack
+---------------+--------------------------------------+------------------+-----------------+----------------------+
| resource_name | physical_resource_id                 | resource_type    | resource_status | updated_time         |
+---------------+--------------------------------------+------------------+-----------------+----------------------+
| server        | 1b3a7c13-42be-4999-a2a1-8fbefd00062b | OS::Nova::Server | CREATE_COMPLETE | 2013-10-02T05:53:41Z |
| ...           | ...                                  | ...              | ...             | ...                  |
+---------------+--------------------------------------+------------------+-----------------+----------------------+
List the alarms that are set:
$ ceilometer alarm-list
+--------------------------------------+------------------------------+-------------------+---------+------------+-------------------------------+
| Alarm ID                             | Name                         | State             | Enabled | Continuous | Alarm condition               |
+--------------------------------------+------------------------------+-------------------+---------+------------+-------------------------------+
| 4f896b40-0859-460b-9c6a-b0d329814496 | as-CPUAlarmLow-i6qqgkf2fubs  | insufficient data | True    | False      | cpu_util < 15.0 during 1x 60s |
| 75d8ecf7-afc5-4bdc-95ff-19ed9ba22920 | as-CPUAlarmHigh-sf4muyfruy5m | insufficient data | True    | False      | cpu_util > 50.0 during 1x 60s |
+--------------------------------------+------------------------------+-------------------+---------+------------+-------------------------------+
List the meters that are set:
$ ceilometer meter-list
+----------+------------+------+--------------------------------------+----------------------------------+----------------------------------+
| Name     | Type       | Unit | Resource ID                          | User ID                          | Project ID                       |
+----------+------------+------+--------------------------------------+----------------------------------+----------------------------------+
| cpu      | cumulative | ns   | 3965b41b-81b0-4386-bea5-6ec37c8841c1 | d1a2996d3b1f4e0e8645ba9650308011 | bf03bf32e3884d489004ac995ff7a61c |
| cpu      | cumulative | ns   | 62520a83-73c7-4084-be54-275fe770ef2c | d1a2996d3b1f4e0e8645ba9650308011 | bf03bf32e3884d489004ac995ff7a61c |
| cpu_util | gauge      | %    | 3965b41b-81b0-4386-bea5-6ec37c8841c1 | d1a2996d3b1f4e0e8645ba9650308011 | bf03bf32e3884d489004ac995ff7a61c |
+----------+------------+------+--------------------------------------+----------------------------------+----------------------------------+
List samples:
$ ceilometer sample-list -m cpu_util
+--------------------------------------+----------+-------+---------------+------+---------------------+
| Resource ID                          | Name     | Type  | Volume        | Unit | Timestamp           |
+--------------------------------------+----------+-------+---------------+------+---------------------+
| 3965b41b-81b0-4386-bea5-6ec37c8841c1 | cpu_util | gauge | 3.98333333333 | %    | 2013-10-02T10:50:12 |
+--------------------------------------+----------+-------+---------------+------+---------------------+
View statistics:
$ ceilometer statistics -m cpu_util
+--------+---------------------+---------------------+-------+---------------+---------------+---------------+---------------+----------+---------------------+---------------------+
| Period | Period Start        | Period End          | Count | Min           | Max           | Sum           | Avg           | Duration | Duration Start      | Duration End        |
+--------+---------------------+---------------------+-------+---------------+---------------+---------------+---------------+----------+---------------------+---------------------+
| 0      | 2013-10-02T10:50:12 | 2013-10-02T10:50:12 | 1     | 3.98333333333 | 3.98333333333 | 3.98333333333 | 3.98333333333 | 0.0      | 2013-10-02T10:50:12 | 2013-10-02T10:50:12 |
+--------+---------------------+---------------------+-------+---------------+---------------+---------------+---------------+----------+---------------------+---------------------+
The Database service provides scalable and reliable cloud provisioning functionality for both relational and non-relational database engines. Users can quickly and easily use database features without the burden of handling complex administrative tasks.
Assume that you have installed the Database service and populated your data store with images for the type and versions of databases that you want, and that you can create and access a database.
This example shows you how to create and access a MySQL 5.5 database.
Determine which flavor to use for your database
When you create a database instance, you must specify a nova flavor. The flavor indicates various characteristics of the instance, such as RAM, root volume size, and so on. The default nova flavors are not sufficient to create database instances. You might need to create or obtain some new nova flavors that work for databases.
The first step is to list flavors by using the
openstack flavor list
command.
Here are the default flavors, although you may have additional custom flavors in your environment:
$ openstack flavor list
+-----+-----------+-------+------+-----------+-------+-----------+
| ID  | Name      | RAM   | Disk | Ephemeral | VCPUs | Is_Public |
+-----+-----------+-------+------+-----------+-------+-----------+
| 1   | m1.tiny   | 512   | 1    | 0         | 1     | True      |
| 2   | m1.small  | 2048  | 20   | 0         | 1     | True      |
| 3   | m1.medium | 4096  | 40   | 0         | 2     | True      |
| 4   | m1.large  | 8192  | 80   | 0         | 4     | True      |
| 5   | m1.xlarge | 16384 | 160  | 0         | 8     | True      |
+-----+-----------+-------+------+-----------+-------+-----------+
Now take a look at the minimum requirements for various database instances:
| Database | RAM (MB) | Disk (GB) | VCPUs |
|---|---|---|---|
| MySQL | 512 | 5 | 1 |
| Cassandra | 2048 | 5 | 1 |
| MongoDB | 1024 | 5 | 1 |
| Redis | 512 | 5 | 1 |
If you have a custom flavor that meets the needs of the database that you want to create, proceed to Step 2 and use that flavor.
If your environment does not have a suitable flavor, an
administrative user must create a custom flavor by using the
openstack flavor create
command.
MySQL example. This example creates a flavor that you can use with a MySQL database. This example has the following attributes:
Flavor name: mysql_minimum
Flavor ID: You must use an ID that is not already in use. In this
example, IDs 1 through 5 are in use, so use ID 6
.
RAM: 512
Root volume size in GB: 5
Virtual CPUs: 1
$ openstack flavor create mysql-minimum --id 6 --ram 512 --disk 5 --vcpus 1
+----------------------------+---------------+
| Field                      | Value         |
+----------------------------+---------------+
| OS-FLV-DISABLED:disabled   | False         |
| OS-FLV-EXT-DATA:ephemeral  | 0             |
| disk                       | 5             |
| id                         | 6             |
| name                       | mysql-minimum |
| os-flavor-access:is_public | True          |
| properties                 |               |
| ram                        | 512           |
| rxtx_factor                | 1.0           |
| swap                       |               |
| vcpus                      | 1             |
+----------------------------+---------------+
Create a database instance
This example creates a database instance with the following characteristics:
Name of the instance: mysql_instance_1
Database flavor: 6
In addition, this command specifies these options for the instance:
A volume size of 5 (5 GB).
The myDB database.
The database is based on the mysql data store and the mysql-5.5 datastore_version.
The userA user with the password password.
$ trove create mysql_instance_1 6 --size 5 --databases myDB \
    --users userA:password --datastore_version mysql-5.5 \
    --datastore mysql
+-------------------+-----------------------------------------------+
| Property          | Value                                         |
+-------------------+-----------------------------------------------+
| created           | 2014-05-29T21:26:21                           |
| datastore         | {u'version': u'mysql-5.5', u'type': u'mysql'} |
| datastore_version | mysql-5.5                                     |
| flavor            | {u'id': u'6', u'links': [{u'href': u'https://controller:8779/v1.0/46d0bc4fc32e4b9e8520f8fc62199f58/flavors/6', u'rel': u'self'}, {u'href': u'https://controller:8779/flavors/6', u'rel': u'bookmark'}]} |
| id                | 5599dad6-731e-44df-bb60-488da3da9cfe          |
| name              | mysql_instance_1                              |
| status            | BUILD                                         |
| updated           | 2014-05-29T21:26:21                           |
| volume            | {u'size': 5}                                  |
+-------------------+-----------------------------------------------+
Get the IP address of the database instance
First, use the trove list command to list all instances and their IDs:
$ trove list
+--------------------------------------+------------------+-----------+-------------------+--------+-----------+------+
| id                                   | name             | datastore | datastore_version | status | flavor_id | size |
+--------------------------------------+------------------+-----------+-------------------+--------+-----------+------+
| 5599dad6-731e-44df-bb60-488da3da9cfe | mysql_instance_1 | mysql     | mysql-5.5         | BUILD  | 6         | 5    |
+--------------------------------------+------------------+-----------+-------------------+--------+-----------+------+
This command returns the instance ID of your new instance.
You can now pass in the instance ID with the trove show command to get the IP address of the instance. In this example, replace INSTANCE_ID with 5599dad6-731e-44df-bb60-488da3da9cfe.
$ trove show INSTANCE_ID
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| created           | 2014-05-29T21:26:21                  |
| datastore         | mysql                                |
| datastore_version | mysql-5.5                            |
| flavor            | 6                                    |
| id                | 5599dad6-731e-44df-bb60-488da3da9cfe |
| ip                | 172.16.200.2                         |
| name              | mysql_instance_1                     |
| status            | BUILD                                |
| updated           | 2014-05-29T21:26:54                  |
| volume            | 5                                    |
+-------------------+--------------------------------------+
This command returns the IP address of the database instance.
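If you script this step, you can pull a single property out of the tabular trove show output. This is a hedged helper, not part of the trove client itself; it assumes the default +---+ table layout shown above:

```shell
# Extract one property (for example "ip") from the tabular output of
# "trove show INSTANCE_ID". Assumes the default +---+ table layout.
trove_property() {
    awk -F'|' -v key="$1" \
        '{ gsub(/^[ \t]+|[ \t]+$/, "", $2); gsub(/^[ \t]+|[ \t]+$/, "", $3);
           if ($2 == key) print $3 }'
}
```

For example, trove show INSTANCE_ID | trove_property ip prints only the IP address.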
Access the new database
You can now access the new database (myDB) by using typical database access commands. In this MySQL example, replace IP_ADDRESS with 172.16.200.2.
$ mysql -u userA -ppassword -h IP_ADDRESS myDB
You can use the Database service to back up a database and store the backup artifact in the Object Storage service. Later on, if the original database is damaged, you can use the backup artifact to restore the database. The restore process creates a database instance.
This example shows you how to back up and restore a MySQL database.
Back up the database instance
As background, assume that you have created a database instance with the following characteristics:
Name of the database instance: guest1
Flavor ID: 10
Root volume size: 2
Databases: db1 and db2
Users: The user1 user with the password password
First, get the ID of the guest1 database instance by using the trove list command:
$ trove list
+--------------------------------------+--------+-----------+-------------------+--------+-----------+------+
| id                                   | name   | datastore | datastore_version | status | flavor_id | size |
+--------------------------------------+--------+-----------+-------------------+--------+-----------+------+
| 97b4b853-80f6-414f-ba6f-c6f455a79ae6 | guest1 | mysql     | mysql-5.5         | ACTIVE | 10        | 2    |
+--------------------------------------+--------+-----------+-------------------+--------+-----------+------+
Back up the database instance by using the trove backup-create command. In this example, the backup is called backup1. Replace INSTANCE_ID with 97b4b853-80f6-414f-ba6f-c6f455a79ae6:
This command syntax pertains only to python-troveclient version 1.0.6 and later. Earlier versions require you to pass in the backup name as the first argument.
$ trove backup-create INSTANCE_ID backup1
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| created     | 2014-03-18T17:09:07                  |
| description | None                                 |
| id          | 8af30763-61fd-4aab-8fe8-57d528911138 |
| instance_id | 97b4b853-80f6-414f-ba6f-c6f455a79ae6 |
| locationRef | None                                 |
| name        | backup1                              |
| parent_id   | None                                 |
| size        | None                                 |
| status      | NEW                                  |
| updated     | 2014-03-18T17:09:07                  |
+-------------+--------------------------------------+
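If your scripts must run against mixed client versions, you can branch on the client version before choosing the argument order. The version comparison itself is plain shell; how you obtain the version string from your trove client is an assumption you should verify:

```shell
# Return success (0) if dotted version $1 >= dotted version $2,
# comparing each component numerically (so 1.0.10 >= 1.0.6 holds).
version_ge() {
    [ "$(printf '%s\n%s\n' "$1" "$2" \
        | sort -t. -k1,1n -k2,2n -k3,3n | tail -n1)" = "$1" ]
}
```

For example, a script could run one command form when version_ge "$CLIENT_VERSION" 1.0.6 succeeds and the legacy form otherwise.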
Note that the command returns both the ID of the original instance (instance_id) and the ID of the backup artifact (id).
Later on, use the trove backup-list command to get this information:
$ trove backup-list
+--------------------------------------+--------------------------------------+---------+-----------+-----------+---------------------+
| id                                   | instance_id                          | name    | status    | parent_id | updated             |
+--------------------------------------+--------------------------------------+---------+-----------+-----------+---------------------+
| 8af30763-61fd-4aab-8fe8-57d528911138 | 97b4b853-80f6-414f-ba6f-c6f455a79ae6 | backup1 | COMPLETED | None      | 2014-03-18T17:09:11 |
+--------------------------------------+--------------------------------------+---------+-----------+-----------+---------------------+
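In scripts, a backup is usually identified by name rather than by ID. A hedged sketch that looks up the ID for a given name in trove backup-list output, assuming the default table layout shown above (id in the first column, name in the third):

```shell
# Look up a backup ID by backup name in "trove backup-list" output.
# Assumes the default table layout: id first, name third.
backup_id_by_name() {
    awk -F'|' -v name="$1" \
        '{ gsub(/ /, "", $2); gsub(/ /, "", $4);
           if ($4 == name) print $2 }'
}
```

For example, trove backup-list | backup_id_by_name backup1 prints only the backup's ID.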
You can get additional information about the backup by using the trove backup-show command and passing in the BACKUP_ID, which is 8af30763-61fd-4aab-8fe8-57d528911138.
$ trove backup-show BACKUP_ID
+-------------+---------------------------------------------------+
| Property    | Value                                             |
+-------------+---------------------------------------------------+
| created     | 2014-03-18T17:09:07                               |
| description | None                                              |
| id          | 8af...138                                         |
| instance_id | 97b...ae6                                         |
| locationRef | http://10.0.0.1:.../.../8af...138.xbstream.gz.enc |
| name        | backup1                                           |
| parent_id   | None                                              |
| size        | 0.17                                              |
| status      | COMPLETED                                         |
| updated     | 2014-03-18T17:09:11                               |
+-------------+---------------------------------------------------+
Restore a database instance
Now assume that your guest1 database instance is damaged and you need to restore it. In this example, you use the trove create command to create a new database instance called guest2.
You specify that the new guest2 instance has the same flavor (10) and the same root volume size (2) as the original guest1 instance.
You use the --backup argument to indicate that this new instance is based on the backup artifact identified by BACKUP_ID. In this example, replace BACKUP_ID with 8af30763-61fd-4aab-8fe8-57d528911138.
$ trove create guest2 10 --size 2 --backup BACKUP_ID
+-------------------+-----------------------------------------------+
| Property          | Value                                         |
+-------------------+-----------------------------------------------+
| created           | 2014-03-18T17:12:03                           |
| datastore         | {u'version': u'mysql-5.5', u'type': u'mysql'} |
| datastore_version | mysql-5.5                                     |
| flavor            | {u'id': u'10', u'links': [{u'href': ...]}     |
| id                | ac7a2b35-a9b4-4ff6-beac-a1bcee86d04b          |
| name              | guest2                                        |
| status            | BUILD                                         |
| updated           | 2014-03-18T17:12:03                           |
| volume            | {u'size': 2}                                  |
+-------------------+-----------------------------------------------+
Verify backup
Now check that the new guest2 instance has the same characteristics as the original guest1 instance.
Start by getting the ID of the new guest2 instance.
$ trove list
+-----------+--------+-----------+-------------------+--------+-----------+------+
| id        | name   | datastore | datastore_version | status | flavor_id | size |
+-----------+--------+-----------+-------------------+--------+-----------+------+
| 97b...ae6 | guest1 | mysql     | mysql-5.5         | ACTIVE | 10        | 2    |
| ac7...04b | guest2 | mysql     | mysql-5.5         | ACTIVE | 10        | 2    |
+-----------+--------+-----------+-------------------+--------+-----------+------+
Use the trove show command to display information about the new guest2 instance. Pass in guest2's INSTANCE_ID, which is ac7a2b35-a9b4-4ff6-beac-a1bcee86d04b.
$ trove show INSTANCE_ID
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| created           | 2014-03-18T17:12:03                  |
| datastore         | mysql                                |
| datastore_version | mysql-5.5                            |
| flavor            | 10                                   |
| id                | ac7a2b35-a9b4-4ff6-beac-a1bcee86d04b |
| ip                | 10.0.0.3                             |
| name              | guest2                               |
| status            | ACTIVE                               |
| updated           | 2014-03-18T17:12:06                  |
| volume            | 2                                    |
| volume_used       | 0.18                                 |
+-------------------+--------------------------------------+
Note that the data store, flavor ID, and volume size have the same values as in the original guest1 instance.
Use the trove database-list command to check that the original databases (db1 and db2) are present on the restored instance.
$ trove database-list INSTANCE_ID
+--------------------+
| name               |
+--------------------+
| db1                |
| db2                |
| performance_schema |
| test               |
+--------------------+
Use the trove user-list command to check that the original user (user1) is present on the restored instance.
$ trove user-list INSTANCE_ID
+--------+------+-----------+
| name   | host | databases |
+--------+------+-----------+
| user1  | %    | db1, db2  |
+--------+------+-----------+
Notify users
Tell the users who were accessing the now-disabled guest1 database instance that they can now access guest2. Provide them with guest2's name, IP address, and any other information they might need. (You can get this information by using the trove show command.)
Clean up
At this point, you might want to delete the disabled guest1 instance by using the trove delete command.
$ trove delete INSTANCE_ID
Incremental backups let you chain together a series of backups. You start with a regular backup. Then, when you want to create a subsequent incremental backup, you specify the parent backup.
Restoring a database instance from an incremental backup is the same as creating a database instance from a regular backup—the Database service handles the complexities of applying the chain of incremental backups.
This example shows you how to use incremental backups with a MySQL database.
Assumptions. Assume that you have created a regular backup for the following database instance:
Instance name: guest1
ID of the instance (INSTANCE_ID): 792a6a56-278f-4a01-9997-d997fa126370
ID of the regular backup artifact (BACKUP_ID): 6dc3a9b7-1f3e-4954-8582-3f2e4942cddd
Create your first incremental backup
Use the trove backup-create command and specify:
The INSTANCE_ID of the database instance you are doing the incremental backup for (in this example, 792a6a56-278f-4a01-9997-d997fa126370)
The name of the incremental backup you are creating: backup1.1
The BACKUP_ID of the parent backup. In this case, the parent is the regular backup, with an ID of 6dc3a9b7-1f3e-4954-8582-3f2e4942cddd
$ trove backup-create INSTANCE_ID backup1.1 --parent BACKUP_ID
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| created     | 2014-03-19T14:09:13                  |
| description | None                                 |
| id          | 1d474981-a006-4f62-b25f-43d7b8a7097e |
| instance_id | 792a6a56-278f-4a01-9997-d997fa126370 |
| locationRef | None                                 |
| name        | backup1.1                            |
| parent_id   | 6dc3a9b7-1f3e-4954-8582-3f2e4942cddd |
| size        | None                                 |
| status      | NEW                                  |
| updated     | 2014-03-19T14:09:13                  |
+-------------+--------------------------------------+
Note that this command returns both the ID of the database instance you are incrementally backing up (instance_id) and a new ID for the new incremental backup artifact you just created (id).
Create your second incremental backup
The name of your second incremental backup is backup1.2. This time, when you specify the parent, pass in the ID of the incremental backup you just created in the previous step (backup1.1). In this example, it is 1d474981-a006-4f62-b25f-43d7b8a7097e.
$ trove backup-create INSTANCE_ID backup1.2 --parent BACKUP_ID
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| created     | 2014-03-19T14:09:13                  |
| description | None                                 |
| id          | bb84a240-668e-49b5-861e-6a98b67e7a1f |
| instance_id | 792a6a56-278f-4a01-9997-d997fa126370 |
| locationRef | None                                 |
| name        | backup1.2                            |
| parent_id   | 1d474981-a006-4f62-b25f-43d7b8a7097e |
| size        | None                                 |
| status      | NEW                                  |
| updated     | 2014-03-19T14:09:13                  |
+-------------+--------------------------------------+
Restore using incremental backups
Now assume that your guest1 database instance is damaged and you need to restore it from your incremental backups. In this example, you use the trove create command to create a new database instance called guest2.
To incorporate your incremental backups, you simply use the --backup parameter to pass in the BACKUP_ID of your most recent incremental backup. The Database service handles the complexities of applying the chain of all previous incremental backups.
$ trove create guest2 10 --size 1 --backup BACKUP_ID
+-------------------+-----------------------------------------------------------+
| Property          | Value                                                     |
+-------------------+-----------------------------------------------------------+
| created           | 2014-03-19T14:10:56                                       |
| datastore         | {u'version': u'mysql-5.5', u'type': u'mysql'}             |
| datastore_version | mysql-5.5                                                 |
| flavor            | {u'id': u'10', u'links':                                  |
|                   | [{u'href': u'https://10.125.1.135:8779/v1.0/              |
|                   | 626734041baa4254ae316de52a20b390/flavors/10', u'rel':     |
|                   | u'self'}, {u'href': u'https://10.125.1.135:8779/          |
|                   | flavors/10', u'rel': u'bookmark'}]}                       |
| id                | a3680953-eea9-4cf2-918b-5b8e49d7e1b3                      |
| name              | guest2                                                    |
| status            | BUILD                                                     |
| updated           | 2014-03-19T14:10:56                                       |
| volume            | {u'size': 1}                                              |
+-------------------+-----------------------------------------------------------+
You can manage database configuration tasks by using configuration groups. Configuration groups let you set configuration options, in bulk, on one or more databases.
This example assumes you have created a MySQL database and shows you how to use a configuration group to configure it. Although this example sets just one option on one database, you can use these same procedures to set multiple options on multiple database instances throughout your environment. This can provide significant time savings in managing your cloud.
List available options
First, determine which configuration options you can set. Different data store versions have different configuration options.
List the names and IDs of all available versions of the mysql data store:
$ trove datastore-version-list mysql
+--------------------------------------+-----------+
| id                                   | name      |
+--------------------------------------+-----------+
| eeb574ce-f49a-48b6-820d-b2959fcd38bb | mysql-5.5 |
+--------------------------------------+-----------+
Pass in the data store version ID with the trove configuration-parameter-list command to get the available options:
$ trove configuration-parameter-list DATASTORE_VERSION_ID
+--------------------------------+---------+---------+----------------------+------------------+
| name                           | type    | min     | max                  | restart_required |
+--------------------------------+---------+---------+----------------------+------------------+
| auto_increment_increment       | integer | 1       | 65535                | False            |
| auto_increment_offset          | integer | 1       | 65535                | False            |
| autocommit                     | integer | 0       | 1                    | False            |
| bulk_insert_buffer_size        | integer | 0       | 18446744073709547520 | False            |
| character_set_client           | string  |         |                      | False            |
| character_set_connection       | string  |         |                      | False            |
| character_set_database         | string  |         |                      | False            |
| character_set_filesystem       | string  |         |                      | False            |
| character_set_results          | string  |         |                      | False            |
| character_set_server           | string  |         |                      | False            |
| collation_connection           | string  |         |                      | False            |
| collation_database             | string  |         |                      | False            |
| collation_server               | string  |         |                      | False            |
| connect_timeout                | integer | 1       | 65535                | False            |
| expire_logs_days               | integer | 1       | 65535                | False            |
| innodb_buffer_pool_size        | integer | 0       | 68719476736          | True             |
| innodb_file_per_table          | integer | 0       | 1                    | True             |
| innodb_flush_log_at_trx_commit | integer | 0       | 2                    | False            |
| innodb_log_buffer_size         | integer | 1048576 | 4294967296           | True             |
| innodb_open_files              | integer | 10      | 4294967296           | True             |
| innodb_thread_concurrency      | integer | 0       | 1000                 | False            |
| interactive_timeout            | integer | 1       | 65535                | False            |
| join_buffer_size               | integer | 0       | 4294967296           | False            |
| key_buffer_size                | integer | 0       | 4294967296           | False            |
| local_infile                   | integer | 0       | 1                    | False            |
| max_allowed_packet             | integer | 1024    | 1073741824           | False            |
| max_connect_errors             | integer | 1       | 18446744073709547520 | False            |
| max_connections                | integer | 1       | 65535                | False            |
| max_user_connections           | integer | 1       | 100000               | False            |
| myisam_sort_buffer_size        | integer | 4       | 18446744073709547520 | False            |
| server_id                      | integer | 1       | 100000               | True             |
| sort_buffer_size               | integer | 32768   | 18446744073709547520 | False            |
| sync_binlog                    | integer | 0       | 18446744073709547520 | False            |
| wait_timeout                   | integer | 1       | 31536000             | False            |
+--------------------------------+---------+---------+----------------------+------------------+
In this example, the trove configuration-parameter-list command returns a list of options that work with MySQL 5.5.
Create a configuration group
A configuration group contains a comma-separated list of key-value pairs. Each pair consists of a configuration option and its value.
You can create a configuration group by using the trove configuration-create command. The general syntax for this command is:
$ trove configuration-create NAME VALUES --datastore DATASTORE_NAME
NAME. The name you want to use for this group.
VALUES. The list of key-value pairs.
DATASTORE_NAME. The name of the associated data store.
Set VALUES as a JSON dictionary, for example:
{"myFirstKey" : "someString", "mySecondKey" : someInt}
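Because VALUES must be valid JSON, it can save a failed API round-trip to validate the string locally before passing it to trove configuration-create. A minimal sketch, assuming python3 is available on the client machine:

```shell
# Validate a JSON VALUES string before using it with
# "trove configuration-create". Succeeds only for valid JSON.
valid_json() {
    printf '%s' "$1" | python3 -m json.tool > /dev/null 2>&1
}
```

For example, valid_json '{"sync_binlog" : 1}' succeeds, while a string with a missing value fails, so a script can abort before calling the client.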
This example creates a configuration group called group1. group1 contains just one key and value pair, and this pair sets the sync_binlog option to 1.
$ trove configuration-create group1 '{"sync_binlog" : 1}' --datastore mysql
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| datastore_version_id | eeb574ce-f49a-48b6-820d-b2959fcd38bb |
| description          | None                                 |
| id                   | 9a9ef3bc-079b-476a-9cbf-85aa64f898a5 |
| name                 | group1                               |
| values               | {"sync_binlog": 1}                   |
+----------------------+--------------------------------------+
Examine your existing configuration
Before you use the newly created configuration group, look at how the sync_binlog option is configured on your database. Replace the following sample connection values with values that connect to your database:
$ mysql -u user7 -ppassword -h 172.16.200.2 myDB7
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> show variables like 'sync_binlog';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| sync_binlog   | 0     |
+---------------+-------+
As you can see, the sync_binlog option is currently set to 0 for the myDB7 database.
Change the database configuration using a configuration group
You can change a database's configuration by attaching a configuration group to a database instance. You do this by using the trove configuration-attach command and passing in the ID of the database instance and the ID of the configuration group.
Get the ID of the database instance:
$ trove list
+-------------+------------------+-----------+-------------------+--------+-----------+------+
| id          | name             | datastore | datastore_version | status | flavor_id | size |
+-------------+------------------+-----------+-------------------+--------+-----------+------+
| 26a265dd... | mysql_instance_7 | mysql     | mysql-5.5         | ACTIVE | 6         | 5    |
+-------------+------------------+-----------+-------------------+--------+-----------+------+
Get the ID of the configuration group:
$ trove configuration-list
+-------------+--------+-------------+----------------------+
| id          | name   | description | datastore_version_id |
+-------------+--------+-------------+----------------------+
| 9a9ef3bc... | group1 | None        | eeb574ce...          |
+-------------+--------+-------------+----------------------+
Attach the configuration group to the database instance:
This command syntax pertains only to python-troveclient version 1.0.6 and later. Earlier versions require you to pass in the configuration group ID as the first argument.
$ trove configuration-attach DB_INSTANCE_ID CONFIG_GROUP_ID
Re-examine the database configuration
Display the sync_binlog setting again:
mysql> show variables like 'sync_binlog';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| sync_binlog   | 1     |
+---------------+-------+
As you can see, the sync_binlog option is now set to 1, as specified in the group1 configuration group.
Conclusion. Using a configuration group to set a single option on a single database is obviously a trivial example. However, configuration groups can provide major efficiencies when you consider that:
A configuration group can specify a large number of option values.
You can apply a configuration group to hundreds or thousands of database instances in your environment.
Used in this way, configuration groups let you modify your database cloud configuration, on the fly, on a massive scale.
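The bulk-attach workflow can be scripted. The following sketch only prints the commands it would run, so you can review them before executing; it assumes you feed it instance IDs one per line (for example, extracted from trove list output):

```shell
# For each instance ID read from stdin, print the command that would
# attach the given configuration group. Pipe the output to "sh" to
# actually execute the attachments after reviewing them.
print_attach_commands() {
    group_id=$1
    while IFS= read -r instance_id; do
        [ -n "$instance_id" ] || continue
        printf 'trove configuration-attach %s %s\n' "$instance_id" "$group_id"
    done
}
```

This print-then-execute pattern is a deliberate design choice: with hundreds of instances, a dry run that you can inspect is safer than attaching directly inside the loop.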
Maintenance. There are also a number of useful maintenance features for working with configuration groups. You can:
Disassociate a configuration group from a database instance, using the trove configuration-detach command.
Modify a configuration group on the fly, using the trove configuration-patch command.
Find out what instances are using a configuration group, using the trove configuration-instances command.
Delete a configuration group, using the trove configuration-delete command. You might want to do this if no instances use a group.
You can create a replica of an existing database instance. When you make subsequent changes to the original instance, the system automatically applies those changes to the replica.
Replicas are read-only.
When you create a replica, do not specify the --users or --databases options.
You can choose a smaller volume or flavor for a replica than for the original, but the replica's volume must be big enough to hold the data snapshot from the original.
This example shows you how to replicate a MySQL database instance.
Get the instance ID
Get the ID of the original instance you want to replicate:
$ trove list
+-----------+--------+-----------+-------------------+--------+-----------+------+
| id        | name   | datastore | datastore_version | status | flavor_id | size |
+-----------+--------+-----------+-------------------+--------+-----------+------+
| 97b...ae6 | base_1 | mysql     | mysql-5.5         | ACTIVE | 10        | 2    |
+-----------+--------+-----------+-------------------+--------+-----------+------+
Create the replica
Create a new instance that will be a replica of the original instance. You do this by passing in the --replica_of option with the trove create command. This example creates a replica called replica_1. replica_1 is a replica of the original instance, base_1:
$ trove create replica_1 6 --size=5 --datastore_version mysql-5.5 \
    --datastore mysql --replica_of ID_OF_ORIGINAL_INSTANCE
Verify replication status
Pass in replica_1's instance ID with the trove show command to verify that the newly created replica_1 instance is a replica of the original base_1. Note that the replica_of property is set to the ID of base_1.
$ trove show INSTANCE_ID_OF_REPLICA_1
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| created           | 2014-09-16T11:16:49                  |
| datastore         | mysql                                |
| datastore_version | mysql-5.5                            |
| flavor            | 6                                    |
| id                | 49c6eff6-ef91-4eff-91c0-efbda7e83c38 |
| name              | replica_1                            |
| replica_of        | 97b4b853-80f6-414f-ba6f-c6f455a79ae6 |
| status            | BUILD                                |
| updated           | 2014-09-16T11:16:49                  |
| volume            | 5                                    |
+-------------------+--------------------------------------+
Now pass in base_1's instance ID with the trove show command to list the replica(s) associated with the original instance. Note that the replicas property is set to the ID of replica_1. If there are multiple replicas, they appear as a comma-separated list.
$ trove show INSTANCE_ID_OF_BASE_1
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| created           | 2014-09-16T11:04:56                  |
| datastore         | mysql                                |
| datastore_version | mysql-5.5                            |
| flavor            | 6                                    |
| id                | 97b4b853-80f6-414f-ba6f-c6f455a79ae6 |
| ip                | 172.16.200.2                         |
| name              | base_1                               |
| replicas          | 49c6eff6-ef91-4eff-91c0-efbda7e83c38 |
| status            | ACTIVE                               |
| updated           | 2014-09-16T11:05:06                  |
| volume            | 5                                    |
| volume_used       | 0.11                                 |
+-------------------+--------------------------------------+
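When an instance has several replicas, the replicas property is a comma-separated list of IDs. A small helper to iterate over them one per line in a script:

```shell
# Print each replica ID from a comma-separated "replicas" value on
# its own line, stripping any surrounding spaces.
split_replicas() {
    printf '%s\n' "$1" | tr ',' '\n' | sed 's/^ *//; s/ *$//'
}
```

You can then loop over the output, for example to run trove show on each replica in turn.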
Detach the replica
If the original instance goes down, you can detach the replica. The replica becomes a standalone database instance. You can then take the new standalone instance and create a new replica of that instance.
You detach a replica by using the trove detach-replica command:
$ trove detach-replica INSTANCE_ID_OF_REPLICA
You can store data across multiple machines by setting up MongoDB sharded clusters.
Each cluster includes:
One or more shards. Each shard consists of a three member replica set (three instances organized as a replica set).
One or more query routers. A query router is the machine that your application actually connects to. This machine is responsible for communicating with the config server to figure out where the requested data is stored. It then accesses and returns the data from the appropriate shard(s).
One or more config servers. Config servers store the metadata that links requested data with the shard that contains it.
This example shows you how to set up a MongoDB sharded cluster.
Before you begin. Make sure that:
The administrative user has registered a MongoDB datastore type and version.
The administrative user has created an appropriate flavor, as described in Section 4.19.1, "Create and access a database".
Create a cluster
Create a cluster by using the trove cluster-create command. This command creates a one-shard cluster. Pass in:
The name of the cluster.
The name and version of the datastore you want to use.
The three instances you want to include in the replication set for the first shard. Specify each instance by using the --instance argument and the associated flavor ID and volume size. Use the same flavor ID and volume size for each instance. In this example, flavor 7 is a custom flavor that meets the MongoDB minimum requirements.
$ trove cluster-create cluster1 mongodb "2.4" \
    --instance flavor_id=7,volume=2 --instance flavor_id=7,volume=2 \
    --instance flavor_id=7,volume=2
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| created           | 2014-08-16T01:46:51                  |
| datastore         | mongodb                              |
| datastore_version | 2.4                                  |
| id                | aa6ef0f5-dbef-48cd-8952-573ad881e717 |
| name              | cluster1                             |
| task_description  | Building the initial cluster.        |
| task_name         | BUILDING                             |
| updated           | 2014-08-16T01:46:51                  |
+-------------------+--------------------------------------+
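The repeated --instance arguments lend themselves to a loop when you script cluster creation. A sketch that builds the argument string for a replica set of N members with the same flavor ID and volume size (the values 7 and 2 below are just this example's):

```shell
# Build the repeated --instance arguments for "trove cluster-create".
# Prints the argument string for N members sharing one flavor/volume.
instance_args() {
    members=$1 flavor=$2 volume=$3
    args=""
    i=1
    while [ "$i" -le "$members" ]; do
        args="$args --instance flavor_id=$flavor,volume=$volume"
        i=$((i + 1))
    done
    printf '%s\n' "${args# }"
}
```

For example, trove cluster-create cluster1 mongodb "2.4" $(instance_args 3 7 2); note that this relies on shell word splitting of the unquoted substitution, which is safe here because the generated arguments contain no spaces within them.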
Display cluster information
Display information about a cluster by using the trove cluster-show command. Pass in the ID of the cluster. The cluster ID displays when you first create a cluster. (If you need to find it later on, use the trove cluster-list command to list the names and IDs of all the clusters in your system.)
$ trove cluster-show CLUSTER_ID
+-------------------+--------------------------------------+
| Property          | Value                                |
+-------------------+--------------------------------------+
| created           | 2014-08-16T01:46:51                  |
| datastore         | mongodb                              |
| datastore_version | 2.4                                  |
| id                | aa6ef0f5-dbef-48cd-8952-573ad881e717 |
| ip                | 10.0.0.2                             |
| name              | cluster1                             |
| task_description  | No tasks for the cluster.            |
| task_name         | NONE                                 |
| updated           | 2014-08-16T01:59:33                  |
+-------------------+--------------------------------------+
The trove cluster-show command displays the IP address of the query router. This is the IP address your application connects to in order to retrieve data from the database.
List cluster instances
List the instances in a cluster by using the
trove cluster-instances
command.
$ trove cluster-instances CLUSTER_ID
+--------------------------------------+----------------+-----------+------+
| ID                                   | Name           | Flavor ID | Size |
+--------------------------------------+----------------+-----------+------+
| 45532fc4-661c-4030-8ca4-18f02aa2b337 | cluster1-rs1-1 | 7         | 2    |
| 7458a98d-6f89-4dfd-bb61-5cf1dd65c121 | cluster1-rs1-2 | 7         | 2    |
| b37634fb-e33c-4846-8fe8-cf2b2c95e731 | cluster1-rs1-3 | 7         | 2    |
+--------------------------------------+----------------+-----------+------+
Naming conventions for replication sets and instances. Note that the Name column displays an instance name that includes the replication set name. The replication set names and instance names are automatically generated, following these rules:
Replication set name. This name consists of the cluster name, followed by the string -rsn, where n is 1 for the first replication set you create, 2 for the second replication set, and so on. In this example, the cluster name is cluster1, and there is only one replication set, so the replication set name is cluster1-rs1.
Instance name. This name consists of the replication set name followed by the string -n, where n is 1 for the first instance in a replication set, 2 for the second instance, and so on. In this example, the instance names are cluster1-rs1-1, cluster1-rs1-2, and cluster1-rs1-3.
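The naming rules above are simple to reproduce in a script when you need to predict a generated instance name:

```shell
# Compute the generated instance name for a cluster, following the
# naming rules described above: CLUSTER-rsSET-MEMBER.
cluster_instance_name() {
    printf '%s-rs%s-%s\n' "$1" "$2" "$3"
}
```

For example, cluster_instance_name cluster1 1 3 prints cluster1-rs1-3, the third member of the first replication set.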
List clusters
List all the clusters in your system, using the
trove cluster-list
command.
$ trove cluster-list
+--------------------------------------+----------+-----------+-------------------+-----------+
| ID                                   | Name     | Datastore | Datastore Version | Task Name |
+--------------------------------------+----------+-----------+-------------------+-----------+
| aa6ef0f5-dbef-48cd-8952-573ad881e717 | cluster1 | mongodb   | 2.4               | NONE      |
| b8829c2a-b03a-49d3-a5b1-21ec974223ee | cluster2 | mongodb   | 2.4               | BUILDING  |
+--------------------------------------+----------+-----------+-------------------+-----------+
Delete a cluster
Delete a cluster, using the trove cluster-delete
command.
$ trove cluster-delete CLUSTER_ID
Each cluster includes at least one query router and one config server. Query routers and config servers count against your quota. When you delete a cluster, the system deletes the associated query router(s) and config server(s).