
clean up calico dhcp state on reboots (backport #952) #974

Merged
merged 1 commit into v1.4 on Feb 13, 2025

Conversation

@mergify mergify bot commented Feb 12, 2025

Problem:

Under certain scenarios, when a Harvester node is hard rebooted, the canal CNI can leak IP addresses:
https://docs.rke2.io/known_issues#canal-and-ip-exhaustion
Over time these leaked addresses can build up and exhaust the node's IP address pool.

The only remedy in such a scenario is to clean up unused IP leases from /var/lib/cni/networks/k8s-pod-network
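For illustration only (not part of this PR), the leak can be spotted by comparing the number of host-local IPAM lease files against the number of pod sandboxes actually running on the node; the crictl invocation below is an assumption about the node's runtime tooling:

```sh
# Hypothetical check on an affected node (not from this PR).
# Each file under the directory is an allocated pod IP; after a leak there are
# far more lease files than pods running on the node.
ls /var/lib/cni/networks/k8s-pod-network | wc -l   # allocated IP lease files (approximate count)
crictl pods --state ready --quiet | wc -l          # pod sandboxes actually running
```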

Solution:

This PR introduces a minor change that adds directives for a CNI state reset to the base image.
This ensures the directives are rolled out to existing systems as part of the upgrade cycle.
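As a rough sketch only: Harvester's base image ships cloud-init style directives under /system/oem, so the reset could look roughly like the snippet below. The file contents, stage name, and exact cleanup logic here are assumptions for illustration, not the PR's actual change:

```yaml
# Hypothetical /system/oem directive (yip/elemental cloud-config style).
# Stage and command are assumptions; the real directive in the base image may differ.
name: "Reset CNI IPAM state on boot"
stages:
  initramfs:
    - name: "Remove stale canal IP leases"
      commands:
        - rm -rf /var/lib/cni/networks/k8s-pod-network
```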

Related Issue:
harvester/harvester#7471

Test plan:

  • Install Harvester built from these changes on a node.
  • Once Harvester is running, hard reboot the node.
  • Once Harvester is back up and running, check the timestamp on the folder:

```
/system/oem # ls -ld /var/lib/cni/networks/k8s-pod-network
drwxr-xr-x 2 root root 4096 Feb  7 02:40 /var/lib/cni/networks/k8s-pod-network
```

  • The folder should have been recreated after the reboot:

```
/system/oem # last reboot
reboot   system boot  5.14.21-150500.5 Fri Feb  7 02:32   still running

wtmp begins Fri Feb  7 02:32:03 2025
```

This is an automatic backport of pull request #952 done by [Mergify](https://mergify.com).

…led out to target systems as part of the upgrade cycle

(cherry picked from commit 83a51bc)
bk201 merged commit 9cb7f3f into v1.4 on Feb 13, 2025
7 checks passed