Installation Guide
This document is continually updated and reflects the latest
available code of the Bare Metal service (ironic).
Users of releases may encounter differences and are encouraged
to look at earlier versions of this document for guidance.
Service overview
The Bare Metal service is a collection of components that provides support to
manage and provision physical machines.
Also known as the ironic project, the Bare Metal service may, depending
upon configuration, interact with several other OpenStack services. This
includes:
- the OpenStack Telemetry module (ceilometer) for consuming the IPMI metrics
- the OpenStack Identity service (keystone) for request authentication and to
locate other OpenStack services
- the OpenStack Image service (glance) from which to retrieve images and image meta-data
- the OpenStack Networking service (neutron) for DHCP and network configuration
- the OpenStack Compute service (nova), which works with the Bare Metal service
and acts as the user-facing API for instance management, while the Bare Metal
service provides the admin/operator API for hardware management.
The OpenStack Compute service also provides scheduling facilities (matching
flavors <-> images <-> hardware), tenant quotas, IP assignment, and other
services which the Bare Metal service does not itself provide.
- the OpenStack Block Storage service (cinder) for volumes; this integration is not yet available.
The Bare Metal service includes the following components:
- ironic-api: A RESTful API that processes application requests by sending
them to the ironic-conductor over RPC.
- ironic-conductor: Adds/edits/deletes nodes; powers on/off nodes with
ipmi or ssh; provisions/deploys/decommissions bare metal nodes.
- ironic-python-agent: A Python service which is run in a temporary ramdisk to
provide ironic-conductor service(s) with remote access and in-band hardware
control.
- python-ironicclient: A command-line interface (CLI) for interacting with
the Bare Metal service.
Additionally, the Bare Metal service has certain external dependencies, which are
very similar to those of other OpenStack services:
- A database to store hardware information and state. You can set the database
back-end type and location. A simple approach is to use the same database
back end as the Compute service. Another approach is to use a separate
database back-end to further isolate bare metal resources (and associated
metadata) from users.
- A queue. A central hub for passing messages, such as RabbitMQ.
It should use the same implementation as that of the Compute service.
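Both dependencies are pointed to from the Bare Metal service's configuration file. A minimal sketch, assuming MySQL and RabbitMQ running on a host named controller (hostnames and credentials are placeholders, and the exact option names should be checked against your release's sample ironic.conf):

```ini
# /etc/ironic/ironic.conf (fragment)
[DEFAULT]
# Message queue -- typically the same RabbitMQ used by the Compute service
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = RABBIT_PASS

[database]
# Hardware information and state; a dedicated database keeps bare metal
# metadata isolated from other services
connection = mysql://ironic:IRONIC_DBPASS@controller/ironic?charset=utf8
```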
Optionally, one may wish to utilize the following associated projects for
additional functionality:
- ironic-inspector: an associated service which performs in-band hardware
introspection by PXE booting unregistered hardware into a “discovery ramdisk”.
- diskimage-builder: may be used to customize machine images and to create
deploy and discovery ramdisks, if necessary.
Image requirements
Bare Metal provisioning requires two sets of images: the deploy images
and the user images. The deploy images are used by the Bare Metal service
to prepare the bare metal server for actual OS deployment, whereas the
user images are installed on the bare metal server for use by the
end user. Below are the steps to create the required images and add
them to the Image service:
- diskimage-builder can be used to create both the images required for
deployment and the actual OS image which the user is going to run.
Install diskimage-builder package (use virtualenv, if you don’t
want to install anything globally):
sudo pip install diskimage-builder
Build the image your users will run (an Ubuntu image is used as
an example):
disk-image-create ubuntu baremetal dhcp-all-interfaces grub2 -o my-image
The above command creates the my-image.qcow2, my-image.vmlinuz and
my-image.initrd files. If you want to use a Fedora image, replace
ubuntu with fedora in the above command. The grub2 element is
only needed if local boot will be used to deploy my-image.qcow2;
otherwise, the my-image.vmlinuz and my-image.initrd images
will be used for PXE booting after the bare metal node is deployed with
my-image.qcow2.
To build the deploy image take a look at the Building or
downloading a deploy ramdisk image section.
Add the user images to the Image service
Load all the images created in the above steps into the Image service,
and note the image UUID reported by the Image service for each one as it is
generated.
Add the kernel and ramdisk images to the Image service:
glance image-create --name my-kernel --visibility public \
--disk-format aki --container-format aki < my-image.vmlinuz
Store the image UUID obtained from the above step as
$MY_VMLINUZ_UUID.
glance image-create --name my-image.initrd --visibility public \
--disk-format ari --container-format ari < my-image.initrd
Store the image UUID obtained from the above step as
$MY_INITRD_UUID.
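When scripting these steps, the UUID can be captured straight from the client's table output. A sketch, where the extract_id helper is our own and the table layout is assumed to match the glance output shown above:

```shell
# Pull the value of the "id" row out of a glance-style table.
extract_id() { awk -F'|' '$2 ~ /^ *id *$/ {gsub(/ /, "", $3); print $3}'; }

# Against a live service you would run, for example:
#   MY_VMLINUZ_UUID=$(glance image-create --name my-kernel --visibility public \
#       --disk-format aki --container-format aki < my-image.vmlinuz | extract_id)

# Demonstration on a canned table row (hypothetical UUID):
printf '| id | 11111111-2222-3333-4444-555555555555 |\n' | extract_id
```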
Add my-image, the OS image that the user will run, to the Image
service, and associate the kernel and ramdisk images created above with
it. Both operations can be done by executing the following command:
glance image-create --name my-image --visibility public \
--disk-format qcow2 --container-format bare --property \
kernel_id=$MY_VMLINUZ_UUID --property \
ramdisk_id=$MY_INITRD_UUID < my-image.qcow2
Note: To deploy a whole disk image, a kernel_id and a ramdisk_id
shouldn’t be associated with the image. An example is as follows:
glance image-create --name my-whole-disk-image --visibility public \
--disk-format qcow2 \
--container-format bare < my-whole-disk-image.qcow2
Add the deploy images to the Image service
Add the my-deploy-ramdisk.kernel and
my-deploy-ramdisk.initramfs images to the Image service:
glance image-create --name deploy-vmlinuz --visibility public \
--disk-format aki --container-format aki < my-deploy-ramdisk.kernel
Store the image UUID obtained from the above step as
$DEPLOY_VMLINUZ_UUID.
glance image-create --name deploy-initrd --visibility public \
--disk-format ari --container-format ari < my-deploy-ramdisk.initramfs
Store the image UUID obtained from the above step as
$DEPLOY_INITRD_UUID.
Flavor creation
You’ll need to create a special bare metal flavor in the Compute service.
The flavor is mapped to the bare metal node through the hardware specifications.
Change these to match your hardware:
RAM_MB=1024
CPU=2
DISK_GB=100
ARCH={i686|x86_64}
Create the bare metal flavor by executing the following command:
nova flavor-create my-baremetal-flavor auto $RAM_MB $DISK_GB $CPU
Note: You can replace auto with your own flavor id.
A flavor can include a set of key/value pairs called extra_specs.
With the Icehouse version of the Bare Metal service, you need to associate the
deploy ramdisk and deploy kernel images with the flavor as flavor keys.
With Juno and higher versions this is deprecated: because these images
may vary between nodes in a heterogeneous environment, the deploy kernel
and ramdisk images should instead be associated with each node's driver_info.
Icehouse version of Bare Metal service:
nova flavor-key my-baremetal-flavor set \
cpu_arch=$ARCH \
"baremetal:deploy_kernel_id"=$DEPLOY_VMLINUZ_UUID \
"baremetal:deploy_ramdisk_id"=$DEPLOY_INITRD_UUID
Juno version of Bare Metal service:
nova flavor-key my-baremetal-flavor set cpu_arch=$ARCH
Associate the deploy ramdisk and deploy kernel images with each of your
nodes' driver_info:
ironic node-update $NODE_UUID add \
driver_info/pxe_deploy_kernel=$DEPLOY_VMLINUZ_UUID \
driver_info/pxe_deploy_ramdisk=$DEPLOY_INITRD_UUID
Kilo and higher versions of Bare Metal service:
nova flavor-key my-baremetal-flavor set cpu_arch=$ARCH
Associate the deploy ramdisk and deploy kernel images with each of your
nodes' driver_info:
ironic node-update $NODE_UUID add \
driver_info/deploy_kernel=$DEPLOY_VMLINUZ_UUID \
driver_info/deploy_ramdisk=$DEPLOY_INITRD_UUID
Local boot with partition images
Starting with the Kilo release, Bare Metal service supports local boot with
partition images, meaning that after the deployment the node’s subsequent
reboots won’t happen via PXE or Virtual Media. Instead, it will boot from a
local boot loader installed on the disk.
It’s important to note that in order for this to work, the image being
deployed with the Bare Metal service must have grub2 installed within it.
Enabling local boot differs depending on whether the Bare Metal service is
used with or without the Compute service.
The following sections will describe both methods.
Enabling local boot with Compute service
To enable local boot we need to set a capability on the bare metal node,
for example:
ironic node-update <node-uuid> add properties/capabilities="boot_option:local"
Nodes having boot_option set to local may be requested by adding
an extra_spec to the Compute service flavor, for example:
nova flavor-key baremetal set capabilities:boot_option="local"
Note
If the node is configured to use UEFI, Bare Metal service will create
an EFI partition on the disk and switch the partition table format to
gpt. The EFI partition will be used later by the boot loader
(which is installed from the deploy ramdisk).
Enabling local boot without Compute
Since adding capabilities to the node’s properties is only used by
the nova scheduler to perform more advanced scheduling of instances,
we need a way to enable local boot when Compute is not present. To do that
we can simply specify the capability via the instance_info attribute
of the node, for example:
ironic node-update <node-uuid> add instance_info/capabilities='{"boot_option": "local"}'
Enrollment
After all the services have been properly configured, you should enroll your
hardware with the Bare Metal service, and confirm that the Compute service sees
the available hardware. The nodes will be visible to the Compute service once
they are in the available provision state.
Note
After enrolling nodes with the Bare Metal service, the Compute service
will not be immediately notified of the new resources. The Compute service’s
resource tracker syncs periodically, and so any changes made directly to the
Bare Metal service’s resources will become visible in the Compute service
only after the next run of that periodic task.
More information is in the Troubleshooting section below.
Note
Any bare metal node that is visible to the Compute service may have a
workload scheduled to it, if both the power and deploy interfaces
pass the validate check.
If you wish to exclude a node from the Compute service’s scheduler, for
instance so that you can perform maintenance on it, you can set the node to
“maintenance” mode.
For more information see the Maintenance Mode section below.
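Maintenance mode can be toggled with the client. A sketch using the node-set-maintenance command of this era's python-ironicclient, shown as a dry run via echo so the composed command can be reviewed (remove the echo to execute):

```shell
# Example node UUID from the enrollment output earlier in this guide.
NODE_UUID=dfc6189f-ad83-4261-9bda-b27258eb1987
# --reason is optional and shows up in the node's maintenance_reason field.
echo ironic node-set-maintenance "$NODE_UUID" true --reason "NIC replacement"
# Use false instead of true to return the node to the scheduler.
```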
Enrollment process
This section describes the main steps to enroll a node and make it available
for provisioning. Some steps are shown separately for illustration purposes,
and may be combined if desired.
Create a node in the Bare Metal service. At a minimum, you must
specify the driver name (for example, “pxe_ipmitool”).
This will return the node UUID along with other information
about the node. The node’s provision state will be available. (The
example assumes that the client is using the default API version.):
ironic node-create -d pxe_ipmitool
+--------------+--------------------------------------+
| Property | Value |
+--------------+--------------------------------------+
| uuid | dfc6189f-ad83-4261-9bda-b27258eb1987 |
| driver_info | {} |
| extra | {} |
| driver | pxe_ipmitool |
| chassis_uuid | |
| properties | {} |
| name | None |
+--------------+--------------------------------------+
ironic node-show dfc6189f-ad83-4261-9bda-b27258eb1987
+------------------------+--------------------------------------+
| Property | Value |
+------------------------+--------------------------------------+
| target_power_state | None |
| extra | {} |
| last_error | None |
| maintenance_reason | None |
| provision_state | available |
| uuid | dfc6189f-ad83-4261-9bda-b27258eb1987 |
| console_enabled | False |
| target_provision_state | None |
| provision_updated_at | None |
| maintenance | False |
| power_state | None |
| driver | pxe_ipmitool |
| properties | {} |
| instance_uuid | None |
| name | None |
| driver_info | {} |
| ... | ... |
+------------------------+--------------------------------------+
Beginning with the Kilo release a node may be referred to by a logical
name as well as its UUID. To utilize this feature a name must be
assigned to the node. This can be done when the node is created by
adding the -n option to the node-create command or by updating an
existing node with the node-update command. See Logical Names for
examples.
Beginning with the Liberty release, with API version 1.11 and above, a newly
created node will have an initial provision state of enroll as opposed to
available. See Enrolling a node for more details.
Update the node driver_info so that Bare Metal service can manage the
node. Different drivers may require different information about the node.
You can determine this with the driver-properties command, as follows:
ironic driver-properties pxe_ipmitool
+----------------------+-------------------------------------------------------------------------------------------------------------+
| Property | Description |
+----------------------+-------------------------------------------------------------------------------------------------------------+
| ipmi_address | IP address or hostname of the node. Required. |
| ipmi_password | password. Optional. |
| ipmi_username | username; default is NULL user. Optional. |
| ... | ... |
| deploy_kernel | UUID (from Glance) of the deployment kernel. Required. |
| deploy_ramdisk | UUID (from Glance) of the ramdisk that is mounted at boot time. Required. |
+----------------------+-------------------------------------------------------------------------------------------------------------+
ironic node-update $NODE_UUID add \
driver_info/ipmi_username=$USER \
driver_info/ipmi_password=$PASS \
driver_info/ipmi_address=$ADDRESS
Note that you may also specify all driver_info parameters during
node-create by passing the -i option multiple times.
Update the node’s properties to match the bare metal flavor you created
earlier:
ironic node-update $NODE_UUID add \
properties/cpus=$CPU \
properties/memory_mb=$RAM_MB \
properties/local_gb=$DISK_GB \
properties/cpu_arch=$ARCH
As above, these can also be specified at node creation by passing the -p
option to node-create multiple times.
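Putting the two together, the whole node could be created in one call. A sketch with placeholder values, shown as a dry run via echo (remove the echo to execute):

```shell
# Placeholder values; substitute your hardware's details.
ADDRESS=192.0.2.10 USER=admin PASS=secret
CPU=2 RAM_MB=1024 DISK_GB=100 ARCH=x86_64
# -i sets driver_info entries and -p sets properties; each is repeatable.
echo ironic node-create -d pxe_ipmitool \
    -i ipmi_address=$ADDRESS -i ipmi_username=$USER -i ipmi_password=$PASS \
    -p cpus=$CPU -p memory_mb=$RAM_MB -p local_gb=$DISK_GB -p cpu_arch=$ARCH
```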
If you wish to perform more advanced scheduling of the instances based on
hardware capabilities, you may add metadata to each node that will be
exposed to the nova scheduler (see: ComputeCapabilitiesFilter). A full
explanation of this is outside of the scope of this document. It can be done
through the special capabilities member of node properties:
ironic node-update $NODE_UUID add \
properties/capabilities=key1:val1,key2:val2
As mentioned in the Flavor Creation section, if using the Kilo or later
release of Bare Metal service, you should specify a deploy kernel and
ramdisk which correspond to the node’s driver, for example:
ironic node-update $NODE_UUID add \
driver_info/deploy_kernel=$DEPLOY_VMLINUZ_UUID \
driver_info/deploy_ramdisk=$DEPLOY_INITRD_UUID
You must also inform Bare Metal service of the network interface cards which
are part of the node by creating a port with each NIC’s MAC address.
These MAC addresses are passed to the Networking service during instance
provisioning and used to configure the network appropriately:
ironic port-create -n $NODE_UUID -a $MAC_ADDRESS
To check if Bare Metal service has the minimum information necessary for
a node’s driver to function, you may validate it:
ironic node-validate $NODE_UUID
+------------+--------+--------+
| Interface | Result | Reason |
+------------+--------+--------+
| console | True | |
| deploy | True | |
| management | True | |
| power | True | |
+------------+--------+--------+
If the node fails validation, each driver will return information as to why
it failed:
ironic node-validate $NODE_UUID
+------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+
| Interface | Result | Reason |
+------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+
| console | None | not supported |
| deploy | False | Cannot validate iSCSI deploy. Some parameters were missing in node's instance_info. Missing are: ['root_gb', 'image_source'] |
| management | False | Missing the following IPMI credentials in node's driver_info: ['ipmi_address']. |
| power | False | Missing the following IPMI credentials in node's driver_info: ['ipmi_address']. |
+------------+--------+-------------------------------------------------------------------------------------------------------------------------------------+
If using API version 1.11 or above, the node was created in the enroll
provision state. In order for the node to be available for deploying a
workload (for example, by the Compute service), it needs to be in the
available provision state. To do this, it must be moved into the
manageable state and then moved into the available state. The
API version 1.11 and above section describes the commands for this.
Enrolling a node
In the Liberty cycle, starting with API version 1.11, the Bare Metal service
added a new initial provision state of enroll to its state machine.
Existing automation tooling that uses an API version lower than 1.11 is not
affected, since the initial provision state is still available.
However, using API version 1.11 or above may break existing automation tooling
with respect to node creation.
The default API version used by (the most recent) python-ironicclient is 1.9.
The examples below set the API version for each command. To set the
API version for all commands, you can set the environment variable
IRONIC_API_VERSION.
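For example, instead of passing --ironic-api-version to every command, the version can be pinned once per shell session:

```shell
# Pin the Bare Metal API version for all subsequent ironic commands
# in this shell session.
export IRONIC_API_VERSION=1.11
# Now "ironic node-list" behaves like
# "ironic --ironic-api-version 1.11 node-list".
echo "$IRONIC_API_VERSION"
```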
API version 1.10 and below
Below is an example of creating a node with API version 1.10. After creation,
the node will be in the available provision state.
Other API versions below 1.10 may be substituted in place of 1.10.
ironic --ironic-api-version 1.10 node-create -d agent_ilo -n pre11
+--------------+--------------------------------------+
| Property | Value |
+--------------+--------------------------------------+
| uuid | cc4998a0-f726-4927-9473-0582458c6789 |
| driver_info | {} |
| extra | {} |
| driver | agent_ilo |
| chassis_uuid | |
| properties | {} |
| name | pre11 |
+--------------+--------------------------------------+
ironic --ironic-api-version 1.10 node-list
+--------------------------------------+-------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------+---------------+-------------+--------------------+-------------+
| cc4998a0-f726-4927-9473-0582458c6789 | pre11 | None | None | available | False |
+--------------------------------------+-------+---------------+-------------+--------------------+-------------+
API version 1.11 and above
Beginning with API version 1.11, the initial provision state for newly created
nodes is enroll. In the examples below, other API versions above 1.11 may be
substituted in place of 1.11.
ironic --ironic-api-version 1.11 node-create -d agent_ilo -n post11
+--------------+--------------------------------------+
| Property | Value |
+--------------+--------------------------------------+
| uuid | 0eb013bb-1e4b-4f4c-94b5-2e7468242611 |
| driver_info | {} |
| extra | {} |
| driver | agent_ilo |
| chassis_uuid | |
| properties | {} |
| name | post11 |
+--------------+--------------------------------------+
ironic --ironic-api-version 1.11 node-list
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| UUID | Name | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
| 0eb013bb-1e4b-4f4c-94b5-2e7468242611 | post11 | None | None | enroll | False |
+--------------------------------------+--------+---------------+-------------+--------------------+-------------+
In order for nodes to be available for deploying workloads on them, nodes
must be in the available provision state. To do this, nodes
created with API version 1.11 and above must be moved from the enroll state
to the manageable state and then to the available state.
To move a node to a different provision state, use the
node-set-provision-state command.
Note
Since it is an asynchronous call, the response for
ironic node-set-provision-state will not indicate whether the
transition succeeded or not. You can check the status of the
operation via ironic node-show. If it was successful,
provision_state will be in the desired state. If it failed,
there will be information in the node’s last_error.
After creating a node and before moving it from its initial provision state of
enroll, basic power and port information needs to be configured on the node.
The Bare Metal service needs this information because it verifies that it is
capable of controlling the node when transitioning the node from enroll to
manageable state.
To move a node from enroll to manageable provision state:
ironic --ironic-api-version 1.11 node-set-provision-state $NODE_UUID manage
ironic node-show $NODE_UUID
+------------------------+--------------------------------------------------------------------+
| Property | Value |
+------------------------+--------------------------------------------------------------------+
| ... | ... |
| provision_state | manageable | <- verify correct state
| uuid | 0eb013bb-1e4b-4f4c-94b5-2e7468242611 |
| ... | ... |
+------------------------+--------------------------------------------------------------------+
When a node is moved from the manageable to available provision
state, the node will be cleaned if configured to do so (see
Configure the Bare Metal service for cleaning).
To move a node from manageable to available provision state:
ironic --ironic-api-version 1.11 node-set-provision-state $NODE_UUID provide
ironic node-show $NODE_UUID
+------------------------+--------------------------------------------------------------------+
| Property | Value |
+------------------------+--------------------------------------------------------------------+
| ... | ... |
| provision_state | available | <- verify correct state
| uuid | 0eb013bb-1e4b-4f4c-94b5-2e7468242611 |
| ... | ... |
+------------------------+--------------------------------------------------------------------+
For more details on the Bare Metal service’s state machine, see the
state machine
documentation.
Logical names
Beginning with the Kilo release a node may be referred to by a
logical name as well as its UUID. Names can be assigned either when
creating the node, by adding the -n option to the node-create command, or
by updating an existing node with the node-update command.
Node names must be unique and must be valid hostnames.
The node is named ‘example’ in the following examples:
ironic node-create -d agent_ipmitool -n example
or:
ironic node-update $NODE_UUID add name=example
Once assigned a logical name, a node can then be referred to by name or
UUID interchangeably.
ironic node-create -d agent_ipmitool -n example
+--------------+--------------------------------------+
| Property | Value |
+--------------+--------------------------------------+
| uuid | 71e01002-8662-434d-aafd-f068f69bb85e |
| driver_info | {} |
| extra | {} |
| driver | agent_ipmitool |
| chassis_uuid | |
| properties | {} |
| name | example |
+--------------+--------------------------------------+
ironic node-show example
+------------------------+--------------------------------------+
| Property | Value |
+------------------------+--------------------------------------+
| target_power_state | None |
| extra | {} |
| last_error | None |
| updated_at | 2015-04-24T16:23:46+00:00 |
| ... | ... |
| instance_info | {} |
+------------------------+--------------------------------------+
Hardware Inspection
Starting with the Kilo release, Bare Metal service supports hardware inspection
that simplifies enrolling nodes.
Inspection allows Bare Metal service to discover required node properties
once required driver_info fields (for example, IPMI credentials) are set
by an operator. Inspection will also create the Bare Metal service ports for the
discovered Ethernet MACs. Operators will have to manually delete the Bare Metal
service ports for which physical media is not connected. This is required due
to bug 1405131.
There are two kinds of inspection supported by the Bare Metal service:
- Out-of-band inspection, currently implemented by the iLO drivers, listed at
iLO drivers.
- In-band inspection, performed by utilizing the ironic-inspector project.
This is supported by the following drivers:
pxe_drac
pxe_ipmitool
pxe_ipminative
pxe_ssh
This feature needs to be explicitly enabled in the configuration
by setting enabled = True in the [inspector] section.
You must additionally install python-ironic-inspector-client to use
this functionality.
You must set service_url if the ironic-inspector service is
running on a separate host from the ironic-conductor service, or is using a
non-standard port.
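A sketch of the corresponding fragment on the Bare Metal service side (the service_url value is a placeholder; 5050 is ironic-inspector's usual port):

```ini
# /etc/ironic/ironic.conf (fragment)
[inspector]
# Enable in-band inspection support
enabled = True
# Only needed when ironic-inspector runs on another host or a
# non-standard port
service_url = http://inspector.example.com:5050
```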
In order to ensure that ports in Bare Metal service are synchronized with
NIC ports on the node, the following settings in the ironic-inspector
configuration file must be set:
[processing]
add_ports = all
keep_ports = present
Note
During the Kilo cycle an older version of Inspector called
ironic-discoverd was used. Inspector is expected to be a mostly drop-in
replacement, and the same client library can be used to connect to both.
For Kilo, install ironic-discoverd version 1.1.0 or higher
instead of python-ironic-inspector-client, and use the [discoverd] option
group in both the Bare Metal service and ironic-discoverd configuration
files instead of the ones provided above.
Inspection can be initiated using node-set-provision-state.
The node should be in the manageable state before inspection is initiated.
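A sketch of triggering inspection, using the inspect verb of the provision state machine of this era; shown as a dry run via echo (remove the echo to execute):

```shell
# Example node UUID from the enrollment output earlier in this guide.
NODE_UUID=0eb013bb-1e4b-4f4c-94b5-2e7468242611
echo ironic node-set-provision-state "$NODE_UUID" inspect
# When inspection finishes, the node returns to the manageable state;
# check progress with: ironic node-show $NODE_UUID
```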
Specifying the disk for deployment
Starting with the Kilo release, Bare Metal service supports passing hints to the
deploy ramdisk about which disk it should pick for the deployment. In
Linux, when a server has more than one SATA, SCSI or IDE disk controller,
the order in which their corresponding device nodes are added is arbitrary,
which can cause devices like /dev/sda and /dev/sdb to switch around
between reboots. Therefore, to guarantee that a specific disk is always
chosen for the deployment, the Bare Metal service introduced root device
hints.
The list of supported hints is:
- model (STRING): device identifier
- vendor (STRING): device vendor
- serial (STRING): disk serial number
- wwn (STRING): unique storage identifier
- size (INT): size of the device in GiB
To associate one or more hints with a node, update the node’s properties
with a root_device key, for example:
ironic node-update <node-uuid> add properties/root_device='{"wwn": "0x4000cca77fc4dba1"}'
That will guarantee that the Bare Metal service picks the disk device that
has the wwn equal to the specified wwn value, or fails the deployment if it
cannot be found.
Note
If multiple hints are specified, a device must satisfy all the hints.
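For instance, combining two hints (the values are illustrative; shown as a dry run via echo, remove the echo to execute):

```shell
# Example node UUID; the chosen device must match the vendor AND the size.
NODE_UUID=0eb013bb-1e4b-4f4c-94b5-2e7468242611
echo ironic node-update "$NODE_UUID" add \
    properties/root_device='{"vendor": "ata", "size": 200}'
```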
Enabling the configuration drive (configdrive)
Starting with the Kilo release, the Bare Metal service supports exposing
a configuration drive image to the instances.
The configuration drive is usually used in conjunction with the Compute
service, but the Bare Metal service also offers a standalone way of using it.
The following sections will describe both methods.
When used with Compute service
To enable the configuration drive when deploying an instance, pass
--config-drive true parameter to the nova boot command, for example:
nova boot --config-drive true --flavor baremetal --image test-image instance-1
It’s also possible to enable the configuration drive automatically on
all instances by configuring the OpenStack Compute service to always
create a configuration drive by setting the following option in the
/etc/nova/nova.conf file, for example:
[DEFAULT]
...
force_config_drive=True
When used standalone
When used without the Compute service, the operator needs to create a configuration drive
and provide the file or HTTP URL to the Bare Metal service.
For the format of the configuration drive, the Bare Metal service expects a
gzipped and base64-encoded ISO 9660 file with a config-2 label.
The ironic client can generate a configuration drive in the expected format.
Just pass a directory path containing the files that will be injected into
it via the --config-drive parameter of the node-set-provision-state
command, for example:
ironic node-set-provision-state --config-drive /dir/configdrive_files $node_identifier active
Accessing the configuration drive data
When the configuration drive is enabled, the Bare Metal service will create a partition on the
instance disk and write the configuration drive image onto it. The
configuration drive must be mounted before use. This is performed
automatically by many tools, such as cloud-init and cloudbase-init. To mount
it manually on a Linux distribution that supports accessing devices by labels,
simply run the following:
mkdir -p /mnt/config
mount /dev/disk/by-label/config-2 /mnt/config
If the guest OS doesn’t support accessing devices by labels, you can use
other tools such as blkid to identify which device corresponds to
the configuration drive and mount it, for example:
CONFIG_DEV=$(blkid -t LABEL="config-2" -odevice)
mkdir -p /mnt/config
mount $CONFIG_DEV /mnt/config
Cloud-init integration
The configuration drive can be especially useful when used with
cloud-init, but in order to use it we should follow some rules:
- Cloud-init expects the data to be in a specific format. See the
cloud-init documentation for the expected file layout.
- Since the Bare Metal service uses a disk partition as the configuration
drive, it will only work with cloud-init version 0.7.5 or later.
- Cloud-init has a collection of data source modules, so when
building the image with disk-image-builder we have to define the
DIB_CLOUD_INIT_DATASOURCES environment variable and set the
appropriate sources to enable the configuration drive, for example:
DIB_CLOUD_INIT_DATASOURCES="ConfigDrive, OpenStack" disk-image-create -o fedora-cloud-image fedora baremetal
See the diskimage-builder documentation for more information.
Building or downloading a deploy ramdisk image
Ironic depends on having an image with the ironic-python-agent (IPA)
service running on it for controlling and deploying bare metal nodes.
You can download a pre-built version of the deploy ramdisk built with
the CoreOS tools at:
Building from source
There are two known methods for creating the deployment image with the
IPA service:
disk-image-builder
Install diskimage-builder from pip or from your distro’s packages:
sudo pip install diskimage-builder
Create the image:
disk-image-create ironic-agent fedora -o ironic-deploy
The above command creates the deploy ramdisk and kernel named
ironic-deploy.vmlinuz and ironic-deploy.initramfs in your
current directory.
Or, create an ISO image to boot with virtual media:
disk-image-create ironic-agent fedora iso -o ironic-deploy
The above command creates the deploy ISO named ironic-deploy.iso
in your current directory.
Note
Fedora was used as an example base operating system. Please
check the diskimage-builder documentation for other supported
operating systems.
Trusted boot with partition image
Starting with the Liberty release, Ironic supports trusted boot with partition
images. This means that at the end of the deployment process, when the node is
rebooted with the new user image, trusted boot will be performed. It will
measure the node’s BIOS, boot loader, Option ROM and the Kernel/Ramdisk, to
determine whether a bare metal node deployed by Ironic should be trusted.
For this to work, the node being deployed must have Intel TXT hardware
support, and the image being deployed with Ironic must have oat-client
installed within it.
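Intel TXT depends on the CPU’s Safer Mode Extensions, which Linux advertises as the smx flag in /proc/cpuinfo. As a hedged preflight sketch (this checks only the CPU flag; whether TXT is actually usable still depends on the BIOS settings described below):

```shell
# Sketch: look for the "smx" (Safer Mode Extensions) CPU flag, which
# Intel TXT requires. TXT, TPM, etc. must still be enabled in the BIOS.
if grep -qw smx /proc/cpuinfo 2>/dev/null; then
  echo "TXT-capable CPU detected"
else
  echo "no TXT-capable CPU detected"
fi
```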
The following describes how to enable trusted boot and boot
with PXE and Nova:
Create a customized user image with oat-client installed:
disk-image-create -u fedora baremetal oat-client -o $TRUST_IMG
For more information on creating customized images, see ImageRequirement.
Enable VT-x, VT-d, TXT and TPM on the node. This can be done manually through
the BIOS. Depending on the platform, several reboots may be needed.
Enroll the node and update the node capability value:
ironic node-create -d pxe_ipmitool
ironic node-update $NODE_UUID add properties/capabilities="trusted_boot:true"
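Note that node-update sets the whole capabilities value, so any capabilities already present on the node would be overwritten. A minimal sketch of merging the new capability into an existing string instead (the merge_capability helper and the sample boot_option value are assumptions for illustration; the existing value would come from ironic node-show in practice):

```shell
# Hypothetical helper: keep any existing "key:value" capability pairs
# and append trusted_boot:true, comma-separated as ironic expects.
merge_capability() {
  existing="$1"
  new="$2"
  if [ -z "$existing" ]; then
    printf '%s\n' "$new"
  else
    printf '%s,%s\n' "$existing" "$new"
  fi
}

# Example: a node that already advertises boot_option:local.
merge_capability "boot_option:local" "trusted_boot:true"
```

The resulting string would then be passed to `ironic node-update $NODE_UUID add properties/capabilities=...` as shown above.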
Create a special flavor:
nova flavor-key $TRUST_FLAVOR_UUID set 'capabilities:trusted_boot'=true
Prepare tboot and mboot.c32 and put them into tftp_root or http_root
directory on all nodes with the ironic-conductor processes:
Ubuntu:
cp /usr/lib/syslinux/mboot.c32 /tftpboot/
Fedora:
cp /usr/share/syslinux/mboot.c32 /tftpboot/
Note: The actual location of mboot.c32 varies among different distribution versions.
tboot can be downloaded from
https://sourceforge.net/projects/tboot/files/latest/download
Install an OAT Server. It should be running and configured correctly.
Boot an instance with Nova:
nova boot --flavor $TRUST_FLAVOR_UUID --image $TRUST_IMG --user-data $TRUST_SCRIPT trusted_instance
Note that the node will be measured during trusted boot and the hash values saved
into TPM. An example of TRUST_SCRIPT can be found in trust script example.
Verify the result via OAT Server.
This is outside the scope of Ironic. At the moment, users can manually verify the result
by following the manual verify steps.
Troubleshooting
Once all the services are running and configured properly, and a node has been
enrolled with the Bare Metal service and is in the available provision
state, the Compute service should detect the node
as an available resource and expose it to the scheduler.
Note
There is a delay, and it may take up to a minute (one periodic task cycle)
for the Compute service to recognize any changes in the Bare Metal service’s
resources (both additions and deletions).
In addition to watching nova-compute log files, you can see the available
resources by looking at the list of Compute hypervisors. The resources reported
therein should match the bare metal node properties and the Compute service flavor.
Here is an example set of commands to compare the resources in Compute
service and Bare Metal service:
$ ironic node-list
+--------------------------------------+---------------+-------------+--------------------+-------------+
| UUID | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+---------------+-------------+--------------------+-------------+
| 86a2b1bb-8b29-4964-a817-f90031debddb | None | power off | available | False |
+--------------------------------------+---------------+-------------+--------------------+-------------+
$ ironic node-show 86a2b1bb-8b29-4964-a817-f90031debddb
+------------------------+----------------------------------------------------------------------+
| Property | Value |
+------------------------+----------------------------------------------------------------------+
| instance_uuid | None |
| properties | {u'memory_mb': u'1024', u'cpu_arch': u'x86_64', u'local_gb': u'10', |
| | u'cpus': u'1'} |
| maintenance | False |
| driver_info | { [SNIP] } |
| extra | {} |
| last_error | None |
| created_at | 2014-11-20T23:57:03+00:00 |
| target_provision_state | None |
| driver | pxe_ipmitool |
| updated_at | 2014-11-21T00:47:34+00:00 |
| instance_info | {} |
| chassis_uuid | 7b49bbc5-2eb7-4269-b6ea-3f1a51448a59 |
| provision_state | available |
| reservation | None |
| power_state | power off |
| console_enabled | False |
| uuid | 86a2b1bb-8b29-4964-a817-f90031debddb |
+------------------------+----------------------------------------------------------------------+
$ nova hypervisor-show 1
+-------------------------+--------------------------------------+
| Property | Value |
+-------------------------+--------------------------------------+
| cpu_info | baremetal cpu |
| current_workload | 0 |
| disk_available_least | - |
| free_disk_gb | 10 |
| free_ram_mb | 1024 |
| host_ip | [ SNIP ] |
| hypervisor_hostname | 86a2b1bb-8b29-4964-a817-f90031debddb |
| hypervisor_type | ironic |
| hypervisor_version | 1 |
| id | 1 |
| local_gb | 10 |
| local_gb_used | 0 |
| memory_mb | 1024 |
| memory_mb_used | 0 |
| running_vms | 0 |
| service_disabled_reason | - |
| service_host | my-test-host |
| service_id | 6 |
| state | up |
| status | enabled |
| vcpus | 1 |
| vcpus_used | 0 |
+-------------------------+--------------------------------------+
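To compare the two outputs programmatically, values can be pulled out of the "|"-delimited tables with a little awk. This helper and the sample row are illustrative assumptions, not part of either CLI:

```shell
# Hypothetical helper: print the value column for an exact property
# name from table output in the format shown above.
table_value() {
  awk -F'|' -v key="$1" '
    { k = $2; gsub(/ /, "", k)
      if (k == key) { v = $3; gsub(/ /, "", v); print v } }'
}

# Example with one row in the hypervisor-show format:
printf '| memory_mb               | 1024 |\n' | table_value memory_mb
```

In practice the output of `nova hypervisor-show` or `ironic node-show` would be piped through the helper, and fields such as memory_mb and local_gb compared against the node properties and flavor.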
Maintenance mode
Maintenance mode may be used if you need to take a node out of the resource
pool. Putting a node in maintenance mode prevents the Bare Metal service from
executing periodic tasks associated with the node. It also prevents the
Compute service from placing a tenant instance on the node, because the node
is no longer exposed to the nova scheduler. Nodes can be placed into
maintenance mode with the following command.
$ ironic node-set-maintenance $NODE_UUID on
As of the Kilo release, a maintenance reason may be included with the optional
--reason command line option. This is a free-form text field that is
displayed in the maintenance_reason field of the node-show output.
$ ironic node-set-maintenance $UUID on --reason "Need to add ram."
$ ironic node-show $UUID
+------------------------+--------------------------------------+
| Property | Value |
+------------------------+--------------------------------------+
| target_power_state | None |
| extra | {} |
| last_error | None |
| updated_at | 2015-04-27T15:43:58+00:00 |
| maintenance_reason | Need to add ram. |
| ... | ... |
| maintenance | True |
| ... | ... |
+------------------------+--------------------------------------+
To remove maintenance mode and clear any maintenance_reason, use the
following command.
$ ironic node-set-maintenance $NODE_UUID off