The Bare Metal service manages and provisions physical hardware, as opposed to virtual machines. It provides several reference drivers, which leverage common technologies such as PXE and IPMI to cover a wide range of hardware. Its pluggable driver architecture also allows vendor-specific drivers to be added for improved performance or functionality not provided by the reference drivers. The Bare Metal service makes physical servers as easy to provision as virtual machines in a cloud, which in turn opens up new avenues for enterprises and service providers.
The Bare Metal service is composed of the following components:
An admin-only RESTful API service, by which privileged users, such as operators and other services within the cloud control plane, may interact with the managed bare-metal servers.
A conductor service, which conducts all activity related to bare-metal deployments. Functionality is exposed via the API service. The Bare Metal service conductor and API service communicate via RPC.
Various drivers that support heterogeneous hardware, which enable features specific to unique hardware platforms and leverage divergent capabilities via a common API.
A message queue, which is a central hub for passing messages, such as RabbitMQ. It should use the same implementation as that of the Compute service.
A database for storing information about the resources. Among other things, this includes the state of the conductors, nodes (physical servers), and drivers.
When a user requests to boot an instance, the request is passed to the Compute service via the Compute service API and scheduler. The Compute service hands the request over to the Bare Metal service, where it passes from the Bare Metal service API to the conductor, which invokes a driver to provision a physical server for the user.
PXE deploy process
Agent deploy process
1. Install the Bare Metal service.
2. Set up the Bare Metal driver in the compute node's nova.conf file.
3. Set up the TFTP folder and prepare the PXE boot loader file.
4. Prepare the bare metal flavor.
5. Register the nodes with the correct drivers.
6. Configure the driver information.
7. Register the ports information.
8. Use the openstack server create command to kick off the bare metal provisioning.
9. Check the nodes' provision state and power state.
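As a sketch, the last two steps might look like the following; the flavor, image, and network names here are hypothetical placeholders for the resources created in the earlier steps:

```shell
$ openstack server create --flavor my-baremetal-flavor \
    --image my-deploy-image --network my-network test-instance
$ ironic node-list
```

Watching the ironic node-list output lets you follow the node's provision state as it moves toward active, along with its power state.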
Multitenancy allows creating a dedicated project network that extends the
current Bare Metal (ironic) service capabilities of providing flat
networks. Multitenancy works in conjunction with Networking (neutron)
service to allow provisioning of a bare metal server onto the project network.
Therefore, multiple projects can get isolated instances after deployment.
The Bare Metal service provides the local_link_connection information to the Networking service ML2 driver. The ML2 driver uses that information to plug the specified port into the project network.
| Field | Description |
|---|---|
| switch_id | Required. Identifies a switch; can be an LLDP-based MAC address or an OpenFlow-based datapath ID. |
| port_id | Required. Port ID on the switch, for example, Gig0/1. |
| switch_info | Optional. Used to distinguish different switch models or other vendor-specific identifiers. |
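For illustration, the local_link_connection on a port might hold values like the following; all of the values shown are hypothetical:

```json
{
    "switch_id": "0a:1b:2c:3d:4e:5f",
    "port_id": "Gig0/1",
    "switch_info": "switch1"
}
```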
To enable the Networking service ML2 driver, edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:
Add the name of your ML2 driver.
Add the vendor ML2 plugin configuration options.
[ml2]
...
mechanism_drivers = my_mechanism_driver
[my_vendor]
param_1 = ...
param_2 = ...
param_3 = ...
For more details, see Networking service mechanism drivers.
After you configure the Networking service ML2 driver, configure Bare Metal service:
Edit the /etc/ironic/ironic.conf file for the ironic-conductor service. Set the network_interface node field to a valid network driver that is used to switch, clean, and provision networks.
[DEFAULT]
...
enabled_network_interfaces=flat,neutron
[neutron]
...
cleaning_network_uuid=$UUID
provisioning_network_uuid=$UUID
The cleaning_network_uuid and provisioning_network_uuid parameters are required for the neutron network interface. If they are not set, ironic-conductor fails to start.
Set the node's network interface to neutron to use the Networking service ML2 driver:
$ ironic node-create -n $NAME --network-interface neutron --driver agent_ipmitool
Create a port with the appropriate local_link_connection information. Set the pxe_enabled port attribute to True to create network ports for the pxe_enabled ports only:
$ ironic --ironic-api-version latest port-create -a $HW_MAC_ADDRESS \
  -n $NODE_UUID -l switch_id=$SWITCH_MAC_ADDRESS \
  -l switch_info=$SWITCH_HOSTNAME -l port_id=$SWITCH_PORT --pxe-enabled true
Sometimes /var/log/nova/nova-conductor.log contains the following error:
NoValidHost: No valid host was found. There are not enough hosts available.
The message No valid host was found
means that the Compute service
scheduler could not find a bare metal node suitable for booting the new
instance.
This usually means there is a mismatch between the resources that the Compute service expects to find and the resources that the Bare Metal service advertised to the Compute service.
If you get this message, check the following:
Introspection should have succeeded before, or you should have entered the required bare-metal node properties manually. For each node in the output of the ironic node-list command, use:

$ ironic node-show <IRONIC-NODE-UUID>

and make sure that the properties JSON field has valid values for the keys cpus, cpu_arch, memory_mb, and local_gb.
Check that the flavor you are using in the Compute service does not exceed the bare-metal node properties above for the required number of nodes. To view the flavor, use:
$ openstack flavor show FLAVOR
Make sure that enough nodes are in the available state according to the ironic node-list command. Nodes in the manageable state usually mean they have failed introspection.
Make sure the nodes you are going to deploy to are not in maintenance mode. Use the ironic node-list command to check. A node automatically going into maintenance mode usually indicates incorrect credentials for that node. Check them and then remove maintenance mode:
$ ironic node-set-maintenance <IRONIC-NODE-UUID> off
It takes some time for node information to propagate from the Bare Metal service to the Compute service after introspection. Our tooling usually accounts for this, but if you did some steps manually there may be a period of time when nodes are not yet available to the Compute service. Check that the openstack hypervisor stats show command correctly reports the total amount of resources in your system.
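The checklist above can be sketched as a filter over node records. This is an illustrative simplification, not the actual Compute scheduler logic; the dictionary shapes for nodes and flavors below are assumptions made for the example:

```python
# Illustrative sketch of the NoValidHost checklist above.
# NOT the real Compute scheduler; field names and record shapes are assumed.

REQUIRED_KEYS = ("cpus", "cpu_arch", "memory_mb", "local_gb")

def is_schedulable(node, flavor):
    """Return True if this node could satisfy the flavor, per the checklist."""
    props = node.get("properties", {})
    # 1. All required properties must be present and set (see node-show check).
    if any(props.get(key) in (None, "") for key in REQUIRED_KEYS):
        return False
    # 2. The flavor must not exceed the node's resources.
    if (flavor["vcpus"] > props["cpus"]
            or flavor["ram"] > props["memory_mb"]
            or flavor["disk"] > props["local_gb"]):
        return False
    # 3. The node must be in the "available" provision state.
    if node.get("provision_state") != "available":
        return False
    # 4. The node must not be in maintenance mode.
    if node.get("maintenance"):
        return False
    return True

nodes = [
    {"properties": {"cpus": 8, "cpu_arch": "x86_64",
                    "memory_mb": 16384, "local_gb": 100},
     "provision_state": "available", "maintenance": False},
    {"properties": {"cpus": 8, "cpu_arch": "x86_64",
                    "memory_mb": 16384, "local_gb": 100},
     "provision_state": "manageable", "maintenance": False},
]
flavor = {"vcpus": 4, "ram": 8192, "disk": 40}
candidates = [n for n in nodes if is_schedulable(n, flavor)]
print(len(candidates))  # only the first node qualifies
```

Walking each node through these four checks in order mirrors the debugging steps above: missing properties, an oversized flavor, a wrong provision state, or maintenance mode each independently removes the node from the candidate set.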