Set Up Deployer¶

Create SUSE Containerized OpenStack Workspace¶
All the deployment artifacts are stored in a workspace. By default, the workspace is a directory located in the user’s home directory on the Deployer. Set up your workspace with the following steps:
- Create a directory in your home directory that ends in -workspace.
- Export SOCOK8S_ENVNAME=<directory name prefix> to set your workspace.
- To change your workspace parent directory, export SOCOK8S_WORKSPACE_BASEDIR with the base directory where your workspace should be located.
For example:
mkdir ~/socok8s-workspace
export SOCOK8S_ENVNAME=socok8s
export SOCOK8S_WORKSPACE_BASEDIR=~
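Putting the steps above together, the workspace path is composed from the two variables. This is a minimal sketch assuming the resolved path is `<basedir>/<envname>-workspace`, matching the naming rule described above:

```shell
# Sketch only: compose the workspace path from the two exported variables,
# following the "<prefix>-workspace" naming rule described above.
export SOCOK8S_ENVNAME=socok8s
export SOCOK8S_WORKSPACE_BASEDIR=~
WORKSPACE_DIR="${SOCOK8S_WORKSPACE_BASEDIR}/${SOCOK8S_ENVNAME}-workspace"
mkdir -p "$WORKSPACE_DIR"
echo "$WORKSPACE_DIR"
```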
Installing the SUSE Containerized OpenStack software¶
There are two ways to install the SUSE Containerized OpenStack software.
(Recommended) Install with an ISO image that includes the required dependencies:
- Download openSUSE-Addon-socok8s-x86_64-Media.iso from https://download.opensuse.org/repositories/Cloud:/socok8s/images/iso/
- sudo zypper addrepo --refresh <PATH_TO_ISO_IMAGE> socok8s-iso
- sudo zypper install socok8s (installs to /usr/share/socok8s)
Example:
a. wget https://download.opensuse.org/repositories/Cloud:/socok8s/images/iso/openSUSE-Addon-socok8s-x86_64-Media.iso
b. sudo zypper addrepo --refresh iso:///?iso=/home/stack/openSUSE-Addon-socok8s-x86_64-Media.iso socok8s-iso
c. sudo zypper install socok8s
(For developers only) Clone the repository.
The following software must be manually installed on your Deployer using zypper or pip install:
- ansible >= 2.7.8
- git-core
- jq
- python3-virtualenv
- python3-jmespath
- python3-netaddr
- python3-openstacksdk
- python3-openstackclient
- python3-heatclient
- which
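The ansible >= 2.7.8 requirement can be verified with a `sort -V` comparison. This is an illustrative check, not part of the socok8s tooling; the 2.9.6 value is a stand-in for whatever version is actually installed:

```shell
# Illustrative version check (not part of socok8s): compare an installed
# ansible version against the 2.7.8 minimum using sort -V.
required=2.7.8
have=2.9.6   # stand-in; in practice: ansible --version | head -n1 | awk '{print $2}'
if [ "$(printf '%s\n' "$required" "$have" | sort -V | head -n1)" = "$required" ]; then
  echo "ansible version OK"
else
  echo "ansible too old: $have < $required"
fi
```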
After the required packages are installed, clone the socok8s GitHub repository. This repository uses submodules, which contain additional code needed for the playbooks to work.
git clone --recursive https://github.com/SUSE-Cloud/socok8s.git
Fetch or update the tree of the submodules by running:
git submodule update --init --recursive
SSH Key Preparation¶
Create an SSH key on the Deployer node, and add the public key to each CaaS Platform worker node.
Note
- To generate the key, use ssh-keygen -t rsa
- To copy the ssh key to each node, use the ssh-copy-id command, for example: ssh-copy-id root@192.168.122.1
Test this by connecting to each node via SSH and running a command with sudo. Neither operation should prompt for a password.
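The key creation in the note above can be scripted non-interactively. A minimal sketch, assuming a dedicated key file (the filename is an example, not mandated by socok8s):

```shell
# Sketch: create an RSA key non-interactively (filename is an example only).
mkdir -p "$HOME/.ssh"
ssh-keygen -t rsa -b 4096 -N '' -f "$HOME/.ssh/id_rsa_socok8s" -q
# Then copy the public key to each CaaS Platform worker node, e.g.:
# ssh-copy-id -i "$HOME/.ssh/id_rsa_socok8s.pub" root@192.168.122.1
```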
Passwordless sudo¶
If installing as a non-root user, you will need to give your user passwordless sudo on the Deployer.
sudo visudo
Add the following line after "#includedir /etc/sudoers.d", replacing <username> with your username:
<username> ALL=(ALL) NOPASSWD: ALL
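As an alternative to editing sudoers in place, the same rule can be installed as a drop-in under /etc/sudoers.d. This is a sketch only; the 99-socok8s filename is illustrative:

```shell
# Illustrative alternative: install the rule as a sudoers.d drop-in
# (filename 99-socok8s is an example; visudo -c validates the result).
echo "$USER ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/99-socok8s
sudo chmod 0440 /etc/sudoers.d/99-socok8s
sudo visudo -c
```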
Configure Ansible¶
Use ARA (recommended)¶
Ansible Run Analysis (ARA) makes Ansible runs easier to visualize, understand, and troubleshoot. To use ARA:
- Install ARA and its required dependencies:
pip install ara[server]
- Set the ARA environment variable before running run.sh:
export USE_ARA='True'
To set up ARA permanently on the Deployer, create an Ansible configuration file loading ARA plugins:
python3 -m ara.setup.ansible | tee ~/.ansible.cfg
For more details on the ARA web interface, see https://ara.readthedocs.io/en/stable/webserver.html.
Ansible Logging¶
Enable Ansible logging with the following steps:
- Create an Ansible configuration file in the $HOME directory, for example, .ansible.cfg. This configuration file can also be used for other Ansible settings.
- Add your log_path to .ansible.cfg. Use a log path and log filename that fit your needs, for example:
[defaults]
log_path=$HOME/.ansible/ansible.log
Enable Pipelining (recommended)¶
You can improve SSH connections by enabling pipelining:
cat << EOF >> ~/.ansible.cfg
[ssh_connection]
pipelining = True
EOF