[PWX-35604] Update KubeSchedulerConfiguration to api v1 for k8s version 1.25+ #1395

Merged
merged 4 commits into master on Jan 25, 2024

Conversation

olavangad-px
Contributor

@olavangad-px commented Jan 19, 2024

Signed-off-by: Omkar Lavangad olavangad@purestorage.com

What this PR does / why we need it:
KubeSchedulerConfiguration v1beta3 is deprecated in Kubernetes v1.26 and removed in v1.29. We must migrate KubeSchedulerConfiguration to API version v1.

Which issue(s) this PR fixes (optional)
Closes #PWX-35604

Special notes for your reviewer:

  • The Policy config is deprecated as of Kubernetes v1.23: https://kubernetes.io/docs/reference/scheduling/policies/
  • The kube-scheduler configuration must use API version v1 or v1beta3.
  • API version v1beta3 is supported from 1.23 onwards; it is deprecated in 1.26 and removed in 1.29.
  • API version v1 went GA in 1.25.
  • So we use the Policy config for versions < 1.23, KubeSchedulerConfiguration v1beta3 for 1.23 <= version < 1.25, and KubeSchedulerConfiguration v1 for versions >= 1.25 (alternatively, we could switch to v1 only at >= 1.29 to save testing effort); see the Go sketch after this list.
  • The only difference between v1beta3 and v1 is that the SelectorSpread scheduler plugin is removed; the PodTopologySpread plugin (enabled by default) achieves similar behavior. Since we do not use SelectorSpread, the ConfigMaps created are identical in both cases apart from the library used.
  • Ref: the PR that added support for KubeSchedulerConfiguration in Operator — PWX-27975: Add kube scheduler configuration for K8 version 1.23+ #862
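
For reference, a minimal Go sketch of the version gating described in the list above. The helper and variable names here are illustrative, not the operator's actual identifiers; the real logic lives in pkg/controller/storagecluster/stork.go.

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/util/version"
    )

    // Version thresholds taken from the notes above.
    var (
        minSchedulerConfigVersion   = version.MustParseGeneric("1.23.0")
        minSchedulerConfigV1Version = version.MustParseGeneric("1.25.0")
    )

    // schedulerConfigFlavor picks which scheduler configuration flavor to
    // generate for a given Kubernetes version, per the rules listed above.
    func schedulerConfigFlavor(k8sVersion *version.Version) string {
        switch {
        case k8sVersion.LessThan(minSchedulerConfigVersion):
            // Pre-1.23 clusters still get the legacy Policy config.
            return "legacy Policy config"
        case k8sVersion.LessThan(minSchedulerConfigV1Version):
            return "kubescheduler.config.k8s.io/v1beta3"
        default:
            return "kubescheduler.config.k8s.io/v1"
        }
    }

    func main() {
        for _, v := range []string{"1.21.0", "1.23.5", "1.27.2"} {
            fmt.Printf("%-8s -> %s\n", v, schedulerConfigFlavor(version.MustParseGeneric(v)))
        }
    }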

Testing Details
Verified that the appropriate ConfigMap/Policy resource is created on Kubernetes v1.21, v1.23, and v1.25, and that stork and stork-scheduler come up properly.
The Stork Extender Jenkins job passed with the new operator changes: https://jenkins.pwx.dev.purestorage.com/job/Users/job/omkar/job/extenderMinio/19/


codecov bot commented Jan 19, 2024

Codecov Report

Attention: 9 lines in your changes are missing coverage. Please review.

Comparison: base (6ffb09b) 75.85% vs. head (2994e09) 75.86%.

Files                                    Patch %   Lines
pkg/controller/storagecluster/stork.go   89.53%    6 Missing and 3 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1395      +/-   ##
==========================================
+ Coverage   75.85%   75.86%   +0.01%     
==========================================
  Files          66       66              
  Lines       18755    18788      +33     
==========================================
+ Hits        14227    14254      +27     
- Misses       3519     3523       +4     
- Partials     1009     1011       +2     


@olavangad-px marked this pull request as ready for review January 22, 2024 04:38
@piyush-nimbalkar requested a review from a team January 22, 2024 19:05
Contributor

@piyush-nimbalkar left a comment


I think it is okay to transition to the v1 config starting with 1.25.
We should just validate that the transition works fine on an existing cluster: first, that the config is updated correctly; second, that it doesn't affect the stork scheduler in any way when scheduling or re-scheduling new and existing workloads.

@olavangad-px changed the title Update KubeSchedulerConfiguration to api v1 for k8s version 1.25+ → [PWX-35604] Update KubeSchedulerConfiguration to api v1 for k8s version 1.25+ Jan 24, 2024
@olavangad-px
Contributor Author

stork-config ConfigMap created in a cluster with k8s version 1.27.2:

[root@ip-10-13-198-222 ~]#  kubectl -n kube-system get cm stork-config -oyaml
apiVersion: v1
data:
  stork-config.yaml: |
    apiVersion: kubescheduler.config.k8s.io/v1
    clientConnection:
      acceptContentTypes: ""
      burst: 100
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: ""
      qps: 50
    enableContentionProfiling: true
    enableProfiling: true
    extenders:
    - filterVerb: filter
      httpTimeout: 5m0s
      prioritizeVerb: prioritize
      urlPrefix: http://stork-service.kube-system:8099
      weight: 5
    kind: KubeSchedulerConfiguration
    leaderElection:
      leaderElect: true
      leaseDuration: 15s
      renewDeadline: 10s
      resourceLock: leases
      resourceName: stork-scheduler
      resourceNamespace: kube-system
      retryPeriod: 2s
    parallelism: 16
    percentageOfNodesToScore: 0
    podInitialBackoffSeconds: 1
    podMaxBackoffSeconds: 10
    profiles:
    - pluginConfig:
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1
          kind: DefaultPreemptionArgs
          minCandidateNodesAbsolute: 100
          minCandidateNodesPercentage: 10
        name: DefaultPreemption
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1
          hardPodAffinityWeight: 1
          kind: InterPodAffinityArgs
        name: InterPodAffinity
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1
          kind: NodeAffinityArgs
        name: NodeAffinity
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1
          kind: NodeResourcesBalancedAllocationArgs
          resources:
          - name: cpu
            weight: 1
          - name: memory
            weight: 1
        name: NodeResourcesBalancedAllocation
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1
          kind: NodeResourcesFitArgs
          scoringStrategy:
            resources:
            - name: cpu
              weight: 1
            - name: memory
              weight: 1
            type: LeastAllocated
        name: NodeResourcesFit
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1
          defaultingType: System
          kind: PodTopologySpreadArgs
        name: PodTopologySpread
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1
          bindTimeoutSeconds: 600
          kind: VolumeBindingArgs
        name: VolumeBinding
      plugins:
        bind: {}
        filter: {}
        multiPoint:
          enabled:
          - name: PrioritySort
          - name: NodeUnschedulable
          - name: NodeName
          - name: TaintToleration
            weight: 3
          - name: NodeAffinity
            weight: 2
          - name: NodePorts
          - name: NodeResourcesFit
            weight: 1
          - name: VolumeRestrictions
          - name: EBSLimits
          - name: GCEPDLimits
          - name: NodeVolumeLimits
          - name: AzureDiskLimits
          - name: VolumeBinding
          - name: VolumeZone
          - name: PodTopologySpread
            weight: 2
          - name: InterPodAffinity
            weight: 2
          - name: DefaultPreemption
          - name: NodeResourcesBalancedAllocation
            weight: 1
          - name: ImageLocality
            weight: 1
          - name: DefaultBinder
        permit: {}
        postBind: {}
        postFilter: {}
        preBind: {}
        preFilter: {}
        preScore: {}
        queueSort: {}
        reserve: {}
        score: {}
      schedulerName: stork
kind: ConfigMap

@olavangad-px
Contributor Author

stork-config ConfigMap created in a cluster with k8s version 1.23:

[root@ip-10-13-195-166 ~]# kubectl -n kube-system get cm stork-config -oyaml
apiVersion: v1
data:
  stork-config.yaml: |
    apiVersion: kubescheduler.config.k8s.io/v1beta3
    clientConnection:
      acceptContentTypes: ""
      burst: 100
      contentType: application/vnd.kubernetes.protobuf
      kubeconfig: ""
      qps: 50
    enableContentionProfiling: true
    enableProfiling: true
    extenders:
    - filterVerb: filter
      httpTimeout: 5m0s
      prioritizeVerb: prioritize
      urlPrefix: http://stork-service.kube-system:8099
      weight: 5
    kind: KubeSchedulerConfiguration
    leaderElection:
      leaderElect: true
      leaseDuration: 15s
      renewDeadline: 10s
      resourceLock: leases
      resourceName: stork-scheduler
      resourceNamespace: kube-system
      retryPeriod: 2s
    parallelism: 16
    percentageOfNodesToScore: 0
    podInitialBackoffSeconds: 1
    podMaxBackoffSeconds: 10
    profiles:
    - pluginConfig:
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1beta3
          kind: DefaultPreemptionArgs
          minCandidateNodesAbsolute: 100
          minCandidateNodesPercentage: 10
        name: DefaultPreemption
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1beta3
          hardPodAffinityWeight: 1
          kind: InterPodAffinityArgs
        name: InterPodAffinity
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1beta3
          kind: NodeAffinityArgs
        name: NodeAffinity
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1beta3
          kind: NodeResourcesBalancedAllocationArgs
          resources:
          - name: cpu
            weight: 1
          - name: memory
            weight: 1
        name: NodeResourcesBalancedAllocation
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1beta3
          kind: NodeResourcesFitArgs
          scoringStrategy:
            resources:
            - name: cpu
              weight: 1
            - name: memory
              weight: 1
            type: LeastAllocated
        name: NodeResourcesFit
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1beta3
          defaultingType: System
          kind: PodTopologySpreadArgs
        name: PodTopologySpread
      - args:
          apiVersion: kubescheduler.config.k8s.io/v1beta3
          bindTimeoutSeconds: 600
          kind: VolumeBindingArgs
        name: VolumeBinding
      plugins:
        bind: {}
        filter: {}
        multiPoint:
          enabled:
          - name: PrioritySort
          - name: NodeUnschedulable
          - name: NodeName
          - name: TaintToleration
            weight: 3
          - name: NodeAffinity
            weight: 2
          - name: NodePorts
          - name: NodeResourcesFit
            weight: 1
          - name: VolumeRestrictions
          - name: EBSLimits
          - name: GCEPDLimits
          - name: NodeVolumeLimits
          - name: AzureDiskLimits
          - name: VolumeBinding
          - name: VolumeZone
          - name: PodTopologySpread
            weight: 2
          - name: InterPodAffinity
            weight: 2
          - name: DefaultPreemption
          - name: NodeResourcesBalancedAllocation
            weight: 1
          - name: ImageLocality
            weight: 1
          - name: DefaultBinder
        permit: {}
        postBind: {}
        postFilter: {}
        preBind: {}
        preFilter: {}
        preScore: {}
        queueSort: {}
        reserve: {}
        score: {}
      schedulerName: stork
kind: ConfigMap

@olavangad-px olavangad-px merged commit 272fd98 into master Jan 25, 2024
7 checks passed
olavangad-px added a commit that referenced this pull request Apr 30, 2024
…on 1.25+ (#1395)

* Update KubeSchedulerConfiguration to api v1 for k8s version 1.25+

* Addressing comments
olavangad-px added a commit that referenced this pull request May 2, 2024
* [PWX-35604] Update KubeSchedulerConfiguration to api v1 for k8s version 1.25+ (#1395)

* Update KubeSchedulerConfiguration to api v1 for k8s version 1.25+
* adding changes missed due to change in order of mine and zoran's commits compared to 23.10.4