Run multi-conductor ironic with BMO and fake-ipa
mquhuy committed Aug 11, 2023
1 parent 814a25a commit b007217
Showing 4 changed files with 139 additions and 0 deletions.
@@ -0,0 +1,17 @@
#!/bin/bash
set -e
trap 'trap - SIGTERM && kill -- -'$$'' SIGINT SIGTERM EXIT
__dir__=$(realpath "$(dirname "$0")")
# shellcheck disable=SC1091
. ./config.sh
# This is temporarily required since https://review.opendev.org/c/openstack/sushy-tools/+/875366 has not been merged.
./build-sushy-tools-image.sh
sudo ./vm-setup.sh
./configure-minikube.sh
sudo ./handle-images.sh
./generate_unique_nodes.sh
./start_containers.sh
./start-minikube.sh
./install-ironic.sh
./install-bmo.sh
python create_nodes.py
14 changes: 14 additions & 0 deletions Support/Multitenancy/Multiple-Ironic-conductors/README.md
@@ -121,6 +121,20 @@ This config means that there will be, in total, 1000 (fake) nodes created, of which

__NOTE__: To clean up everything, you can run the `./cleanup.sh` script.

# Multiple ironic conductors setup with BMO

In version 2 of this experiment, we explore adding BMO (Bare Metal Operator) to the process. The steps are straightforward; compared to the previous setup, there are a couple of changes:
- We need to generate a BMH manifest (plus a BMC credentials secret) for each of the nodes we have (see the example right after this list).
- We no longer use the ironic client (`baremetal`) to talk to ironic; instead, we apply BMH manifests and let BMO do the heavy lifting.
- This time, the nodes will reach the `available` state.
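
For illustration, the manifests that `create_nodes.py` generates for each node look roughly like the following, with the name, uid, sushy-tools port and MAC address filled in per node (the concrete values below are placeholders):

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: fake1-bmc-secret
  namespace: metal3
  labels:
    environment.metal3.io: baremetal
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: fake1
  uid: <node-uuid>
  namespace: metal3
spec:
  online: true
  bmc:
    address: redfish+http://192.168.111.1:8001/redfish/v1/Systems/<node-uuid>
    credentialsName: fake1-bmc-secret
  bootMACAddress: <random-mac>
  bootMode: legacy
```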

## Steps
We still use all the steps listed in the previous section to configure ironic, sushy-tools and fake-ipa. The only exception is that, since we no longer contact `ironic` directly, we no longer run the `create_and_inspect_nodes.py` script. Instead, we install BMO with the `install-bmo.sh` script and then run the `create_nodes.py` script, which generates the manifest for each BMH, applies it, and waits for the BMH to become `available`. (Python is used, once again, to speed things up; also, due to resource limitations, we don't want to apply all the BMH manifests at once.)
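
Roughly, for each node the script does the shell equivalent of the following (a sketch only; the `--for=jsonpath` form needs kubectl >= 1.23, and `fake1` is a placeholder node name):

```bash
# Apply the generated Secret + BareMetalHost manifests for one node...
kubectl apply -f bmc-fake1.yaml
# ...then block until BMO reports the host's provisioning state as "available".
kubectl -n metal3 wait bmh/fake1 \
  --for=jsonpath='{.status.provisioning.state}'=available \
  --timeout=60m
```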

Now, if you open another terminal and run `kubectl -n metal3 get bmh --watch`, you can observe the BMHs being created and inspected; the same process can also be followed with `watch baremetal node list`. Depending on how many nodes you chose and how fast your environment is, after a while most or all of them should exist in ironic in the `available` state. The same states are also reflected in the BMH objects.
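
To read the provisioning state of a single BMH directly (the same field that `create_nodes.py` polls; `fake1` is a placeholder name):

```bash
kubectl -n metal3 get bmh fake1 -o jsonpath='{.status.provisioning.state}'
```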

Just like before, all of the steps can be run at once with the `./Init-environment-v2.sh` script. This script also respects the configuration in `config.sh`.
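
For instance, `create_nodes.py` spreads the fake BMCs over 10 sushy-tools instances unless `N_SUSHY` is set in its environment; whether `config.sh` already exports that variable is an assumption, so overriding it by hand would look like:

```bash
# N_SUSHY controls how many sushy-tools endpoints (ports 8001..8000+N_SUSHY)
# create_nodes.py distributes the fake BMCs across.
export N_SUSHY=10
./Init-environment-v2.sh
```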

# Requirements

This study was conducted on a VM with the following specs:
82 changes: 82 additions & 0 deletions Support/Multitenancy/Multiple-Ironic-conductors/create_nodes.py
@@ -0,0 +1,82 @@
import json
import subprocess
import time
import random
import os
from multiprocessing import Pool

with open("nodes.json") as f:
    nodes = json.load(f)


def generate_random_mac():
    # Generate a random MAC address
    mac = [random.randint(0x00, 0xff) for _ in range(6)]
    # Set the locally administered address bit (2nd least significant bit of the 1st byte) to 1
    mac[0] |= 0x02
    # Format the MAC address
    mac_address = ':'.join('%02x' % b for b in mac)
    return mac_address


def query_k8s_obj(namespace, obj_type, obj_name):
    try:
        rsp = subprocess.check_output(
            ["kubectl", "-n", namespace, "get", obj_type, obj_name, "-o", "json"])
        return json.loads(rsp.decode())
    except Exception:
        return {}


def create_node(node):
    uuid = node["uuid"]
    name = node["name"]
    namespace = "metal3"
    port = 8001 + (int(name.strip("fake")) - 1) % int(os.environ.get("N_SUSHY", 10))
    random_mac = generate_random_mac()
    manifest = f"""---
apiVersion: v1
kind: Secret
metadata:
  name: {name}-bmc-secret
  namespace: {namespace}
  labels:
    environment.metal3.io: baremetal
type: Opaque
data:
  username: YWRtaW4=
  password: cGFzc3dvcmQ=
---
apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: {name}
  uid: {uuid}
  namespace: {namespace}
spec:
  online: true
  bmc:
    address: redfish+http://192.168.111.1:{port}/redfish/v1/Systems/{uuid}
    credentialsName: {name}-bmc-secret
  bootMACAddress: {random_mac}
  bootMode: legacy
"""
    manifest_file = f"bmc-{name}.yaml"
    with open(manifest_file, "w") as f:
        f.write(manifest)
    print(f"Generated manifest for node {name}")
    subprocess.run(
        ["kubectl", "apply", "-f", manifest_file],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL)
    time.sleep(5)
    while True:
        status = query_k8s_obj(namespace, "bmh", name)
        if status == {}:
            time.sleep(5)
            continue
        state = status.get("status", {}).get("provisioning", {}).get("state")
        if state == "available":
            print(f"BMH {name} provisioned")
            return
        time.sleep(5)


if __name__ == "__main__":
    with Pool(30) as p:
        p.map(create_node, nodes)
26 changes: 26 additions & 0 deletions Support/Multitenancy/Multiple-Ironic-conductors/install-bmo.sh
@@ -0,0 +1,26 @@
#!/bin/bash
set -e

kubectl create ns metal3

BMOPATH=${BMOPATH:-$HOME/baremetal-operator}

rm -rf "${BMOPATH}"

git clone https://github.com/Nordix/baremetal-operator.git "${BMOPATH}"

cat << EOF >"${BMOPATH}/config/default/ironic.env"
HTTP_PORT=6180
PROVISIONING_INTERFACE=ironicendpoint
DHCP_RANGE=172.22.0.10,172.22.0.100
DEPLOY_KERNEL_URL=http://172.22.0.2:6180/images/ironic-python-agent.kernel
DEPLOY_RAMDISK_URL=http://172.22.0.2:6180/images/ironic-python-agent.initramfs
IRONIC_ENDPOINT=https://172.22.0.2:6385/v1/
IRONIC_INSPECTOR_ENDPOINT=https://172.22.0.2:5050/v1/
CACHEURL=http://172.22.0.1/images
IRONIC_FAST_TRACK=true
EOF

kustomize build "${BMOPATH}/config/tls" | kubectl apply -f -

kubectl -n baremetal-operator-system wait --for=condition=available deployment/baremetal-operator-controller-manager --timeout=300s
