
[cherry-pick] PWX-37480: avoid starting extra live-migrations for the same VM (#1560) #1561

Merged · 1 commit merged into master from neelesh-cherry on Jun 1, 2024

Conversation

pureneelesh (Contributor)

What this PR does / why we need it:

When deciding whether a running virt-launcher pod needs to be evicted, we now also check whether the VMI is active on the same node. If the VMI reports that it is running on a different node and no live-migration is in progress, we skip the eviction for that pod, since the pod should reach the Completed state on its own.
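
A minimal Go sketch of this eviction check, under assumed simplified types; `vmiInfo`, its fields, and this standalone `shouldEvictPod` are illustrative stand-ins, not the operator's actual code in pkg/controller/storagecluster/kubevirt.go:

```go
// Hypothetical sketch of the eviction decision described above.
package main

import "fmt"

// vmiInfo is an assumed, simplified view of a VirtualMachineInstance.
type vmiInfo struct {
	nodeName            string // node the VMI reports it is running on
	migrationInProgress bool   // whether a live-migration is underway
}

// shouldEvictPod returns false when the VMI is already active on a different
// node and no live-migration is in progress: the stale virt-launcher pod is
// expected to reach the Completed state on its own.
func shouldEvictPod(vmi vmiInfo, podNodeName string) bool {
	if vmi.nodeName != podNodeName && !vmi.migrationInProgress {
		return false // pod will complete on its own; skip eviction
	}
	return true
}

func main() {
	vmi := vmiInfo{nodeName: "node-b", migrationInProgress: false}
	fmt.Println(shouldEvictPod(vmi, "node-a")) // false: skip eviction
	fmt.Println(shouldEvictPod(vmi, "node-b")) // true: evict
}
```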

Also, we now call shouldEvictPod only for the virt-launcher pods on the subset of nodes due for update in the current Reconcile() iteration, to avoid invoking Kubernetes APIs unnecessarily.
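
An illustrative sketch of that filtering step, assuming the nodes due for update are available as a simple set; the `pod` type and the `podsOnNodesDueForUpdate` helper are hypothetical:

```go
package main

import "fmt"

// pod is a simplified stand-in for a virt-launcher pod.
type pod struct {
	name     string
	nodeName string
}

// podsOnNodesDueForUpdate keeps only the pods running on nodes that the
// current Reconcile() iteration will update, so per-pod checks that hit the
// Kubernetes API are not made for pods on unaffected nodes.
func podsOnNodesDueForUpdate(pods []pod, nodesDue map[string]bool) []pod {
	var out []pod
	for _, p := range pods {
		if nodesDue[p.nodeName] {
			out = append(out, p)
		}
	}
	return out
}

func main() {
	pods := []pod{
		{name: "virt-launcher-vm1", nodeName: "node-a"},
		{name: "virt-launcher-vm2", nodeName: "node-c"},
	}
	due := map[string]bool{"node-a": true}
	fmt.Println(podsOnNodesDueForUpdate(pods, due)) // only the node-a pod
}
```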

When starting a live-migration, we now additionally check whether a live-migration for the same VM already succeeded in the current upgrade cycle. If it did, we do not start another live-migration and generate an event instead. We were already doing this for failed migrations.
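
A hedged sketch of that deduplication, assuming a per-upgrade-cycle record of migration outcomes; `migrationOutcome`, `startLiveMigrationOnce`, and the map-based history are illustrative stand-ins:

```go
package main

import "fmt"

// migrationOutcome records the result of a completed live-migration.
type migrationOutcome int

const (
	outcomeNone migrationOutcome = iota
	outcomeSucceeded
	outcomeFailed
)

// startLiveMigrationOnce skips starting a new migration when one for the
// same VM already succeeded or failed in this upgrade cycle, and emits an
// event instead (represented here by a log line).
func startLiveMigrationOnce(vm string, history map[string]migrationOutcome) {
	switch history[vm] {
	case outcomeSucceeded, outcomeFailed:
		fmt.Printf("event: not starting another live-migration for %s\n", vm)
		return
	}
	fmt.Printf("starting live-migration for %s\n", vm)
	history[vm] = outcomeSucceeded // assume success for this sketch
}

func main() {
	history := map[string]migrationOutcome{}
	startLiveMigrationOnce("vm-1", history) // starts the migration
	startLiveMigrationOnce("vm-1", history) // skipped, event emitted
}
```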

Which issue(s) this PR fixes (optional)
PWX-37480


Signed-off-by: Neelesh Thakur <neelesh.thakur@purestorage.com>
@pp511 (Contributor) left a comment:


lgtm


codecov bot commented Jun 1, 2024

Codecov Report

Attention: Patch coverage is 75.00000%, with 11 lines in your changes missing coverage. Please review.

Project coverage is 75.96%. Comparing base (d702d7e) to head (63669cd).

| Files | Patch % | Lines |
| --- | --- | --- |
| pkg/controller/storagecluster/kubevirt.go | 73.17% | 8 Missing and 3 partials ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1561      +/-   ##
==========================================
- Coverage   75.97%   75.96%   -0.02%     
==========================================
  Files          77       77              
  Lines       20578    20609      +31     
==========================================
+ Hits        15634    15655      +21     
- Misses       3850     3857       +7     
- Partials     1094     1097       +3     


@pureneelesh merged commit 4c82a26 into master on Jun 1, 2024
9 of 11 checks passed
@pureneelesh deleted the neelesh-cherry branch on Jun 1, 2024 at 12:38