
Ospdo qa comments changes #815

Draft
wants to merge 1 commit into base: main

Conversation

pinikomarov
Contributor

@pinikomarov pinikomarov commented Feb 19, 2025

Jira: https://issues.redhat.com/browse/OSPRH-14176

Ongoing work; more sections to be added.


openshift-ci bot commented Feb 19, 2025

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all


openshift-ci bot commented Feb 19, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign ciecierski for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@pinikomarov pinikomarov force-pushed the ospdo_adoption_docs_requested_changes branch 3 times, most recently from eaace95 to 631d680 Compare February 23, 2025 13:05
@@ -24,7 +24,7 @@ The following features are considered a Technology Preview and have not been tes
** AMD SEV
** Direct download from Rados Block Device (RBD)
** File-backed memory
** `Provider.yaml`
** Multipule data plane nodesets
Contributor

Suggested change
** Multipule data plane nodesets
** Multiple data plane node sets

Contributor Author

ok

@@ -11,6 +11,9 @@ Planning information::
* Review the adoption-specific networking requirements. For more information, see xref:configuring-network-for-RHOSO-deployment_planning[Configuring the network for the RHOSO deployment].
* Review the adoption-specific storage requirements. For more information, see xref:storage-requirements_configuring-network[Storage requirements].
* Review how to customize your deployed control plane with the services that are required for your environment. For more information, see link:{customizing-rhoso}/index[{customizing-rhoso-t}].
ifeval::["{build}-{build_variant}" == "downstream-ospdo"]
Contributor

We've been using the following conditional statement for any ospdo content:
ifeval::["{build_variant}" == "ospdo"]
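For context, a minimal sketch of how that conditional wraps content in AsciiDoc (the attribute name is taken from the comment above; the wrapped paragraph is a placeholder):

```asciidoc
ifeval::["{build_variant}" == "ospdo"]
This paragraph is rendered only when the build sets :build_variant: ospdo.
endif::[]
```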

Contributor Author

ok I'll change the rest too

Comment on lines 15 to 16
* Familiarize yourself with disconnected environments deployment.For more information, see link :https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_configuring-an-airgapped-environment_air-gapped-environment[Configuring an airgapped environment].
endif::[]
Contributor

Suggested change
* Familiarize yourself with disconnected environments deployment.For more information, see link :https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_configuring-an-airgapped-environment_air-gapped-environment[Configuring an airgapped environment].
endif::[]
* Familiarize yourself with a disconnected environment deployment. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#proc_configuring-an-airgapped-environment_air-gapped-environment[Configuring an airgapped environment] in _Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator_.
endif::[]

Contributor Author

ok

@@ -6,6 +6,9 @@ Familiarize yourself with the steps of the adoption process and the optional pos

.Main adoption process

ifeval::["{build}-{build_variant}" == "downstream-ospdo"]
Contributor

I think the ifeval should be ifeval::["{build_variant}" == "ospdo"]

Contributor Author

ok

@@ -6,6 +6,9 @@ Familiarize yourself with the steps of the adoption process and the optional pos

.Main adoption process

ifeval::["{build}-{build_variant}" == "downstream-ospdo"]
. xref:ospdo_scale_down_pre_database_adoption[Scaling down director Operator resources].
Contributor

Suggested change
. xref:ospdo_scale_down_pre_database_adoption[Scaling down director Operator resources].
. xref:ospdo_scale_down_pre_database_adoption_adopt-control-plane[Scaling down director Operator resources].

I think this will render correctly in the preview. If not, the topic ID in the "Scaling down" file itself might need to be updated.
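A hedged sketch of why the suffix matters: if the "Scaling down" file builds its topic ID from a `{context}` attribute (an assumption here, based on the suffixed anchor in the suggestion), the rendered anchor carries that suffix and the xref must match it:

```asciidoc
// Hypothetical header in the "Scaling down" file:
[id="ospdo_scale_down_pre_database_adoption_{context}"]
= Scaling down director Operator resources

// With :context: adopt-control-plane set by the including assembly, the
// rendered anchor is ospdo_scale_down_pre_database_adoption_adopt-control-plane,
// which is the target the suggested xref points at.
```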

Contributor Author

it does render correctly

Contributor

The link in the preview is not working. It doesn't navigate to the "Scaling down director Operator resources" section.

Comment on lines 94 to 106
.. Reduce the roleCount for controller role in the OpenStackControlPlane CR to "1":
----
oc -n openstack patch OpenStackControlPlane overcloud --type json -p '[{"op": "replace", "path":"/spec/virtualMachineRoles/controller/roleCount", "value": 1}]'
.. Ensure that the openstackclient pod is running on the same OCP nodes as the remaining controller VM. If its not on the same node, then move it by cordoning off the two nodes that have been freed up for RHOSO and deleting the openstackclient pod so that it gets rescheduled on the OCP node that has the remaining controller VM. Once the pod has moved to the correct node, uncordon all nodes.
----
oc adm cordon $OSP18_NODE1
oc adm cordon $OSP18_NODE2
oc delete pod openstackclient
oc adm uncordon $OSP18_NODE1
oc adm uncordon $OSP18_NODE2
----
Contributor

Suggested change
.. Reduce the roleCount for controller role in the OpenStackControlPlane CR to "1":
----
oc -n openstack patch OpenStackControlPlane overcloud --type json -p '[{"op": "replace", "path":"/spec/virtualMachineRoles/controller/roleCount", "value": 1}]'
.. Ensure that the openstackclient pod is running on the same OCP nodes as the remaining controller VM. If its not on the same node, then move it by cordoning off the two nodes that have been freed up for RHOSO and deleting the openstackclient pod so that it gets rescheduled on the OCP node that has the remaining controller VM. Once the pod has moved to the correct node, uncordon all nodes.
----
oc adm cordon $OSP18_NODE1
oc adm cordon $OSP18_NODE2
oc delete pod openstackclient
oc adm uncordon $OSP18_NODE1
oc adm uncordon $OSP18_NODE2
----
.. Reduce the `roleCount` for the Controller role in the `OpenStackControlPlane` CR to "1":
+
----
$ oc -n openstack patch OpenStackControlPlane overcloud --type json -p '[{"op": "replace", "path":"/spec/virtualMachineRoles/controller/roleCount", "value": 1}]'
.. Ensure that the `OpenStackClient` pod is running on the same {OpenShiftShort} nodes as the remaining Controller VM. If the `OpenStackClient` pod is not on the same node, then move it by cordoning off the two nodes that have been freed up for {rhos_acro}. Then you delete the `OpenStackClient` pod so that it gets rescheduled on the {OpenShiftShort} node that has the remaining Controller VM. After the pod is moved to the correct node, uncordon all the nodes:
+
----
$ oc adm cordon $OSP18_NODE1
$ oc adm cordon $OSP18_NODE2
$ oc delete pod openstackclient
$ oc adm uncordon $OSP18_NODE1
$ oc adm uncordon $OSP18_NODE2
----
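The `--type json` patch in the suggested command is a single RFC 6902 `replace` operation. A minimal Python sketch of what that operation does to the CR document (the path and field names are taken from the command above; the helper function itself is hypothetical, for illustration only):

```python
import copy

def apply_replace_patch(doc, path, value):
    """Apply a single RFC 6902 'replace' operation to a nested dict."""
    keys = [k for k in path.split("/") if k]  # drop the leading empty segment
    target = doc
    for k in keys[:-1]:
        target = target[k]  # walk down to the parent of the target field
    target[keys[-1]] = value
    return doc

# A trimmed stand-in for the OpenStackControlPlane CR spec:
cr = {"spec": {"virtualMachineRoles": {"controller": {"roleCount": 3}}}}
patched = apply_replace_patch(
    copy.deepcopy(cr),
    "/spec/virtualMachineRoles/controller/roleCount",
    1,
)
print(patched["spec"]["virtualMachineRoles"]["controller"]["roleCount"])  # prints 1
```

The `oc` client performs the same replacement server-side; nothing else in the spec is touched, which is why the subsequent cordon/delete/uncordon steps are needed to reposition the `OpenStackClient` pod.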

Contributor Author

ok

@@ -62,3 +62,7 @@ consumed and are not available for the new control plane services until the adop

. Repeat this procedure for each isolated network and each host in the
configuration.

ifeval::["{build}-{build_variant}" == "downstream-ospdo"]
Contributor

Same comment about the ifeval.

Contributor Author

ok

Comment on lines 67 to 68
For Director Operator network configurations please consult the documentation.For more information, see link :https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#assembly_creating-networks-with-director-operator[creating-networks-with-director-operator].
endif::[]
Contributor

Suggested change
For Director Operator network configurations please consult the documentation.For more information, see link :https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#assembly_creating-networks-with-director-operator[creating-networks-with-director-operator].
endif::[]
For more information about director Operator network configurations, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#assembly_creating-networks-with-director-operator[Creating networks with director Operator] in _Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator_.
endif::[]

Contributor Author

ok

@@ -38,6 +38,10 @@ network_config:
+
Repeat this configuration for other networks that need to use different subnets for the new and existing parts of the deployment.

ifeval::["{build}-{build_variant}" == "downstream-ospdo"]
Contributor

Same comment about ifeval.

Contributor Author

ok

@@ -38,6 +38,10 @@ network_config:
+
Repeat this configuration for other networks that need to use different subnets for the new and existing parts of the deployment.

ifeval::["{build}-{build_variant}" == "downstream-ospdo"]
For Director Operator network configurations please consult the documentation.For more information, see link :https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#assembly_creating-networks-with-director-operator[creating-networks-with-director-operator].
Contributor

Suggested change
For Director Operator network configurations please consult the documentation.For more information, see link :https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#assembly_creating-networks-with-director-operator[creating-networks-with-director-operator].
For more information about director Operator network configurations, see link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html-single/deploying_an_overcloud_in_a_red_hat_openshift_container_platform_cluster_with_director_operator/index#assembly_creating-networks-with-director-operator[Creating networks with director Operator] in _Deploying an overcloud in a Red Hat OpenShift Container Platform cluster with director Operator_.

@pinikomarov pinikomarov force-pushed the ospdo_adoption_docs_requested_changes branch 2 times, most recently from 3ea441e to 2167362 Compare March 2, 2025 10:45
@pinikomarov pinikomarov force-pushed the ospdo_adoption_docs_requested_changes branch from 2167362 to 6b6232e Compare March 2, 2025 10:54
2 participants